arXiv: http://arxiv.org/abs/2306.02289v2 (published 2023-06-04 07:34:18, cs.HC)
Evaluating the Impact of Community Oversight for Managing Mobile Privacy and Security

Mamtaj Akter, Vanderbilt University
Madiha Tabassum, Northeastern University
Nazmus Sakib Miazi, Northeastern University
Leena Alghamdi, University of Central Florida
Jess Kropczynski, University of Cincinnati
Pamela J. Wisniewski, Vanderbilt University
Heather Lipford, University of North Carolina, Charlotte
Managing mobile privacy and security can be a collaborative process in which individuals seek advice and help from their trusted communities. To support such collective privacy and security management, we developed a mobile app for Community Oversight of Privacy and Security ("CO-oPS") that allows community members to review the apps installed and permissions granted on one another's devices and to provide feedback. We conducted a four-week field study with 22 communities (101 participants) of friends, families, or co-workers who installed the CO-oPS app on their phones. Measures of transparency, trust, and awareness of one another's mobile privacy and security behaviors, along with individual and community participation in mobile privacy and security co-management, increased from pre- to post-study. Interview findings confirmed that the app features supported collective consideration of apps and permissions. However, participants expressed a range of concerns about community members having different levels of technical expertise and knowledge of mobile privacy and security, which can affect their motivation to participate and perform oversight. Our study demonstrates the potential and the challenges of community oversight mechanisms for supporting communities in co-managing mobile privacy and security.
§ INTRODUCTION
The majority of U.S. adults own smartphones <cit.>, and nearly half of them have reported downloading various third-party apps <cit.>. These mobile apps often require access to users' sensitive information, such as contacts, emails, location, photos, calendars, and even browser history <cit.>. Most apps request users' permission before accessing any information or resources. Yet users may have difficulty understanding these permission requests and the implications of granting them <cit.>. As a result, users struggle to make permission decisions or grant permissions by mistake <cit.>. Even worse, more malicious apps have ways to circumvent the permission system and secretly gather users' system resources and private information without consent <cit.>. Ironically, a recent Pew Research study reported that most U.S. adults are concerned about how their personal information is used by these third-party apps, as respondents felt they lacked control over their mobile privacy <cit.>.
This lack of understanding leads users to seek advice and guidance from others <cit.>. Several studies have demonstrated that users often learn about privacy and security from their social network, which influences them to change their own digital privacy and security behavior <cit.>. As such, networked privacy researchers acknowledged the importance of these social processes for managing individual and collective digital privacy and security <cit.>. Despite this prior work, few mechanisms to support these social processes have been developed and evaluated. In this paper, we explore community oversight, where trusted groups of users help one another manage mobile privacy and security.
In our previous work, we proposed a theoretical framework of community oversight <cit.>, describing how the concepts of transparency, awareness, trust, and individual and community participation are needed within such a mechanism. We have now implemented a mobile app, Community Oversight of Privacy and Security (CO-oPS), to explore these concepts in use and to support a collaborative approach to mobile privacy and security management. The CO-oPS app allows individuals in a community to review the apps installed and permissions granted on one another's devices and to provide direct feedback to one another.
In this paper, we present a field study of the CO-oPS app. Our aim was to understand the impact of using the app on participants' mobile app decisions and perceptions. We conducted a 4-week mixed-method longitudinal field study with 101 people in 22 self-formed groups. Each group installed, used, and evaluated the CO-oPS app, provided oversight to one another on their mobile app privacy decisions, and shared experiences through weekly surveys and optional interviews. We describe how users interacted within the app and the changes in their mobile app permission decisions after using the CO-oPS app. We also examine how participants' perceptions regarding co-managing their mobile privacy and security within their communities change throughout the study.
To do so, we measured constructs derived from our community oversight model <cit.>: perceptions of transparency, awareness, trust, and individual and community participation within the CO-oPS app. We tested for pre-post study differences and found statistically significant increases on all of these measures. Qualitative findings further explain these perceptions and identify co-management concerns: feeling that their own or others' privacy was being invaded, lack of trust in less knowledgeable community members, lack of close relationships, and inadequate tech expertise within communities. We also found that using the CO-oPS app helped participants increase their communities' collective capacity to address their mobile privacy and security concerns.
In sum, our study makes a unique contribution to the SOUPS research community by investigating, through a field study, how a community oversight mechanism can increase participants' collective capacity to support one another in co-managing mobile privacy and security. Specifically, we make the following research contributions: 1) through a longitudinal field study, we describe the benefits and challenges of using a community oversight app to co-manage mobile privacy and security; 2) we provide empirical evidence of the potential for community oversight to increase users' awareness of mobile privacy issues, leading to individual changes in decisions and community exchange of knowledge; and 3) we present considerations and design recommendations for features to support communities in providing oversight to one another.
§ BACKGROUND
Privacy and Security Management in Mobile Applications
Mobile applications often access sensitive information and share users' personal data with third parties <cit.>. As such, substantial work has been done to investigate and support end users in managing mobile app privacy and security. Researchers have examined existing privacy awareness and management approaches (e.g., app privacy permission prompts, privacy policies) and found that such mechanisms often fail to provide users with awareness and knowledge of privacy and security risks <cit.>. Moreover, users often do not understand mobile app permission dialogues <cit.> and are over-exposed to such requests <cit.>. Researchers have proposed several technology-based solutions to increase awareness and limit potential risks associated with third-party mobile apps <cit.>. For example, Sadeghi et al. suggested evaluating app permissions against risks and automatically granting/revoking permissions on users' behalf <cit.>. Others proposed mechanisms to inform users about app privacy risks, recommend secure choices, and nudge them to review or revise permissions <cit.>. Still others suggested tools that allow users to review data before sending it to the server, visualize data flow <cit.>, and replace personal information with mock data without affecting app functionality <cit.>.
While this body of research has emphasized enhancements to technology to help individuals manage privacy and security while using mobile applications, none looked at how knowledge and influence from social groups help in individual privacy and security decision-making. Our research focuses on assessing and supporting these social processes involved in privacy and security management.
Community-based Approaches for Privacy and Security
In general, research shows that people frequently take collaborative approaches to make privacy and security decisions <cit.>, and users often rely on social factors while making such decisions. Chin et al. discovered that smartphone users are more likely to consider social signals, such as reviews and ratings from other users, rather than privacy indicators regarding Android permissions when making app use decisions <cit.>. Das et al. demonstrated that social factors (e.g., community adoption of security features) could increase individuals' security awareness and encourage them to adopt security features <cit.>. As such, researchers have proposed using social and community influence to assist individuals in making decisions about digital privacy and security <cit.>. Squicciarini et al. developed CoPE, a tool to support users in collaboratively managing their shared images in social network sites <cit.>.
Past research has also examined privacy management approaches involving one party performing oversight for another. Organizations adopt mobile device management (MDM) systems to remotely control and secure the data stored on employees' mobile devices <cit.>. Parents use adolescent online safety apps to monitor and protect teens by restricting their online behavior <cit.>. The results from these studies suggest that a collaborative approach, rather than one-sided control, could benefit both parties and lead to more privacy-preserving outcomes. Finally, several studies leveraged crowdsourcing to use mass user data to support individual users in making improved mobile privacy and security decisions <cit.>. For instance, Ismail et al. utilized crowdsourcing to recommend permissions that can be disabled for enhanced privacy without sacrificing usability <cit.>. However, these approaches showed little consideration for the trustworthiness of information from a random crowd. On the other hand, researchers found that users are more willing to adopt and share privacy advice from a trusted community <cit.>, and they often communicate first with friends and family to learn about potential privacy and security threats and mitigation strategies <cit.>.
In summary, our work builds upon the past literature in social cybersecurity, MDM, parental control apps, and crowdsourcing to implement and evaluate a novel model of community-based oversight (i.e., self-selected groups) for mobile privacy and security through a large-scale field study. Since the network structure of oversight in these prior works (e.g., organizational control over individual devices for MDM, many-to-one for crowdsourced recommendations, and unidirectional from parent to child for parental control) is vastly different from ours, this new model of community oversight warrants deeper empirical investigation.
In <cit.>, we were the first to propose a novel framework of community oversight for helping people manage their mobile privacy and security together. Through a participatory design study, we identified mechanisms that would allow users to support others in the community in making privacy and security decisions regarding mobile app permissions.
We also designed a prototype mobile app that allows users to collaborate and share information with people they know to help make mobile app permission decisions <cit.>. While these prior studies provide a valuable basis for the design of community-oriented privacy and security management systems, they only present a theoretical view of users' preferences in community decision-making. In contrast, this study contributes to the literature by providing an in-situ evaluation of how trusted groups of people use and interact with different community-oriented features to collaboratively manage their mobile privacy and security.
§ DESIGN OF THE CO-OPS APP
We developed the Community Oversight of Privacy and Security (CO-oPS) Android app <cit.> based on the model of community oversight proposed in our prior work <cit.>. This model outlines the need for community oversight mechanisms to support individual and community participation through awareness and transparency features that build trust between community members. Thus, our CO-oPS app design includes four key features: 1) People page, 2) Discovery, 3) Permissions, and 4) Community Feed.
The Discovery page allows community members to review one another's installed apps (Figure-<ref>(b)) and the list of permissions granted or denied to each app (Figure-<ref>(c)). Users can also see how many community members have the same app installed or the same permission granted. To help users change app permissions easily, the Permissions page provides a "SETTINGS" link that forwards users to Android Settings to modify app permissions. On the Discovery page, users can also hide some of their own apps from their community to protect their personal privacy. To provide feedback to one another, users can send direct messages or openly discuss privacy and security issues on the Community Feed page (Figure-<ref>(d)). The Community Feed has another important function: when someone in the community changes an app permission, the CO-oPS app automatically posts about that change to the feed. It also posts weekly protips to educate community members about safe apps and permissions.
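To make these community-facing features concrete, the sketch below models how a Discovery-style view could compute the community counts described above and how an automatic Community Feed post could be generated when a permission changes. This is a minimal, hypothetical Python sketch for illustration only; the data model (MemberSnapshot and the function names) is ours, not the actual CO-oPS implementation.

    from dataclasses import dataclass, field

    @dataclass
    class MemberSnapshot:
        """One member's shared mobile privacy data (hypothetical model, not the CO-oPS schema)."""
        member: str
        apps: set[str] = field(default_factory=set)                  # installed package names
        granted: set[tuple[str, str]] = field(default_factory=set)   # (package, permission) pairs granted
        hidden: set[str] = field(default_factory=set)                 # apps hidden from the community

        def visible_apps(self) -> set[str]:
            return self.apps - self.hidden

    def app_install_count(community: list[MemberSnapshot], package: str) -> int:
        """How many members have this app installed and visible (the Discovery-page count)."""
        return sum(package in m.visible_apps() for m in community)

    def permission_grant_count(community: list[MemberSnapshot], package: str, permission: str) -> int:
        """How many members have granted this permission to the (visible) app."""
        return sum(package in m.visible_apps() and (package, permission) in m.granted
                   for m in community)

    def feed_post_for_change(member: str, package: str, permission: str, now_granted: bool) -> str:
        """Automatic Community Feed post generated when a member changes an app permission."""
        action = "granted" if now_granted else "denied"
        return f"{member} {action} {permission} for {package}"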
§ STUDY CONSTRUCTS
To evaluate the impact of using the CO-oPS app, we measured a set of constructs that we surveyed before, during, and at the end of the field study. We measured all constructs by presenting participants with various statements relevant to each construct. Participants were asked to rate each statement on a 5-point Likert scale from 1 (strongly disagree) to 5 (strongly agree).
First, we developed new constructs derived from the theoretical framework for community oversight proposed in our prior work <cit.>, consisting of transparency, awareness, trust, individual participation, community participation, and community trust. We validated these new constructs through standard psychometric tests (i.e., Cronbach's alpha <cit.> to confirm internal consistency), which are reported in Table-<ref>. Then, we used three pre-validated scales from prior research <cit.> to measure community belonging, self-efficacy, and community collective efficacy. All scale items are included in Appendix A. Below, we define each of the constructs, along with our hypotheses.
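As an aside on the internal-consistency check, Cronbach's alpha can be computed directly from a participants-by-items matrix of Likert responses. The sketch below is a minimal illustration with toy data, not the study's analysis script or actual responses.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a (participants x items) matrix of Likert responses.

        alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)
        """
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)        # variance of each item across participants
        total_var = items.sum(axis=1).var(ddof=1)    # variance of participants' sum scores
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Toy example: 5 participants rating a 3-item construct on a 5-point scale
    responses = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
    print(round(cronbach_alpha(responses), 2))       # ~0.92 here; values above 0.8 indicate good consistency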
Transparency:
As Das et al. demonstrated <cit.>, social proof - seeing others adopt a privacy and security behavior - often helps individuals adopt the same behavior. To encourage individuals in a community to make informed decisions about their mobile privacy settings, the behaviors of others must therefore first be transparent. Thus, we define transparency as an individual's perceived visibility of their community's installed mobile apps and the permissions granted or denied to them.
H1: At the end of the study, community members will perceive higher levels of transparency in their community's mobile privacy and security behaviors.
Awareness:
Endsley demonstrated <cit.> that situational awareness - an understanding of what is going on around someone - is a key component of effective decision-making. In a later study <cit.>, DiGioia and Dourish suggested that being informed about digital privacy and security norms and practices, along with the actions performed by the community, is necessary for an effective social influence process. We define our awareness measure as an individual's perception of their awareness of their own and others' installed apps, permissions granted/denied, and the changes made to them.
H2: At the end of the study, community members will perceive higher levels of awareness regarding their community's mobile privacy and security practices.
Trust:
In <cit.>, we identified that having the information available and being informed about mobile privacy and security practices might not be sufficient for community oversight. This is because individuals need to be able to trust the quality of the information and perceive the information as dependable to learn from and be influenced by it.
H3: Community members will have a higher level of trust in one another's mobile privacy and security decisions.
Individual Participation:
While an effective social process needs transparency, awareness, and trust in one another, individuals also need to be willing to engage in this process <cit.>. Users need to be motivated to utilize the knowledge gathered from their community in order to make decisions. They also need to be willing to provide oversight to others. Thus, we define individual participation as an individual's willingness to take steps to change their own mobile privacy and security behaviors (uninstalling unsafe apps or denying dangerous permissions) and to provide oversight of others' mobile privacy and security behaviors (providing feedback and guidance to others).
H4: Community members will perceive higher individual participation at the end of the study.
Community Participation:
Community oversight mechanisms can take place in different types of communities, such as families <cit.>, coworkers <cit.>, friends, and social networks <cit.>. Yet not all types of communities may have an equal level of willingness to take part in different forms of community oversight.
For example, in <cit.>, we found that communities with closer relationships might be more willing to help one another make decisions than communities with weaker ties. Therefore, we define community participation as an individual's perception of their community to collectively work together, e.g., help one another, exchange feedback and guidance, and engage in open discussions.
H5: At the end of the study, participants will perceive a higher level of community participation.
Community Trust and Belonging:
Individuals are likely to help one another if they feel like they belong and can trust their community members. We define Community Trust as an individual's perception of trusting their community to keep their personal information (e.g., apps installed) private and to care for one another's mobile privacy and security. For community belonging, we utilized a pre-validated measure <cit.> that has been used in exploring community support mechanisms outside of privacy and security. The community belonging construct measures an individual's feelings about how much they matter to their community. While our participants already knew each other, participating together in the CO-oPS app could lead them to feel stronger bonds with and care for one another. Therefore, our hypotheses are:
H6: An individual's community trust will be higher at the end of the study.
H7: Community belonging will be higher after the study.
Efficacy:
Two of the outcomes we wanted to measure are perceptions of the efficacy of individuals and groups in managing their mobile privacy and security. Thus, we used pre-validated measures for self-efficacy <cit.> and community collective efficacy <cit.> in our study. The self-efficacy <cit.> construct measures an individual's perceived capacity to manage their own mobile privacy and security. The community collective efficacy <cit.> construct measures an individual's perceived collective capacity to manage their community's privacy and security together. Our hypotheses are:
H8: Individual's self-efficacy will be higher after the study.
H9: Community collective efficacy will also be higher at the end of the study.
§ METHODS
Study Overview: The overall goal of our study is to evaluate the CO-oPS app in building the capacity of the communities to manage their mobile privacy and security collectively. We also wanted to understand what impacts this community-based approach may have in changing participants' perceptions and behaviors toward their individual and collective mobile privacy and security management. To achieve these goals, we recruited small self-organized communities (2-6 Android phone users) who knew each other. Each community member installed the CO-oPS app and participated for four weeks. Measures were gathered before app installation, each week of the study, and at the end. Each week participants were asked to complete different in-app tasks that allowed them to explore the features of the CO-oPS app. Finally, participants were invited to participate in an optional follow-up interview. In each step of the study, we explicitly provided the definition of the term "community" as "your group members who are participating in this study." Each participant was compensated with a $40 Amazon gift card for completing the field study, with an additional $10 Amazon gift card for participating in the interview. Some participants withdrew from the study after two weeks due to technical difficulties with their smartphones and were compensated half the amount. Twenty-nine participants discontinued participation after week one, perhaps due to natural attrition, and were not compensated. Data were discarded from all who did not complete the study.
Participant Recruitment:
We recruited a total of 101 participants who were associated with 22 communities. We initially recruited the primary contacts of each community, who completed a pre-screening eligibility survey that verified whether they met the inclusion criteria of the study prior to providing their informed consent. The inclusion criteria for participation were to: 1) reside in the United States, 2) be 13 years or older, 3) have an Android smartphone, and 4) be willing to install and use the CO-oPS app. Here, we also specified that they “must participate in a group with two other people you know,” which determined the minimum group size required to participate in this study. After completing the screening survey, the initial contacts were asked to share this eligibility survey with people they knew to invite them to participate in this study as their community members. Therefore, the initial contact of each group self-selected their community based on the above criteria (1-4). As such, all group members knew the initial contact but, in some cases, were only loosely acquainted with one another. For the teen participants, we required their parents to complete this survey and provide their consent.
Our study was approved by our Institutional Review Board. The target characteristics of our participants were Android smartphone users of any age range (minors, adults, and older adults). Therefore, we recruited widely through social media, email, phone calls, and word-of-mouth. The recruitment process started in January 2022 and ended in August 2022. Overall, we recruited 22 communities (101 participants), with community sizes ranging from 2 to 6. Table-<ref> summarizes the gender, age groups, ethnicity, and education of our participants. Our participants were primarily young, between the ages of 13 and 34. Most of them had a college degree. The majority of the participants were Asian, followed by African American, Hispanic/Latino, and White/Caucasian. Table-<ref> illustrates the frequency of the group compositions. Most of the groups consisted of family members, friends, and others (e.g., neighbors, co-workers, and acquaintances).
App Tasks:
During the field study, our participants were asked to explore different parts of the CO-oPS app through a set of tasks each week. These tasks prompted them to become familiar with CO-oPS features and introduced them to the goal of collaboratively managing mobile privacy and security. Table-<ref> depicts the weekly tasks. For example, Week 1 tasks asked participants to become aware of their own mobile privacy and security decisions, whereas Week 2 tasks asked them to perform oversight of others in their community. Participants could check off completed tasks in the app to remove them from their task list, but we otherwise did not track or require completion to continue in the study.
Survey Design:
Each participant completed two Qualtrics surveys (pre-study and post-study) before and after the field study, which contained four constructs: self-efficacy, community belonging, community trust, and community-collective efficacy. The pre-study survey also collected participants’ demographic information, e.g., age, gender, ethnicity, and education. During the field study, participants also completed a shorter Qualtrics survey each week (weekly survey), containing all constructs of the community oversight model. Links to the weekly surveys were delivered through the CO-oPS app, which redirected participants to the Qualtrics web survey.
Follow-up Interview:
At the end of the field study, we invited participants to an optional 30-minute one-on-one interview session on Zoom to learn about their experience using the CO-oPS app with their community. Fifty-one participants from 18 communities participated in the follow-up interviews. We started the semi-structured interview by asking about mobile privacy and security practices before participating in the study. Next, we asked about their overall experience of using the CO-oPS app. Participants were also encouraged to express their perceived benefits and concerns about different features of the CO-oPS app. Appendix B presents some sample interview questions we asked during the follow-up interviews. The interview sessions ranged from 40-70 minutes and were audio/video recorded.
Data Collection and Analysis:
The study produced a rich dataset: 1) quantitative data from survey measures, 2) CO-oPS app usage logs, and 3) qualitative data from follow-up interviews. We first categorized the survey responses as pre-study, week-1, week-2, week-3, week-4, or post-study, depending on the timestamps of the survey completion. Then, we verified the construct validity of our measures using Cronbach's alpha <cit.> and created sum scores to represent each construct. Next, we conducted Shapiro–Wilk tests and found that the sum scores of the constructs were not normally distributed (ps<.01). Therefore, we performed the non-parametric Wilcoxon rank-sum test to identify significant differences between the pre-study and post-study measures (Table-<ref>). We also present the descriptive statistics for each pre- and post-study survey item of the newly developed constructs (Appendix D).
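This survey analysis pipeline can be reproduced with standard SciPy routines, roughly as sketched below. The scores shown are placeholder toy numbers, not the study's data, and the construct name is illustrative.

    import numpy as np
    from scipy import stats

    # Toy pre- and post-study sum scores for one construct (e.g., awareness); placeholders only.
    pre_scores = np.array([18, 20, 15, 22, 17, 19, 21, 16, 20, 18])
    post_scores = np.array([21, 22, 19, 24, 20, 22, 23, 19, 23, 21])

    # Normality check: a small p-value indicates deviation from normality,
    # which motivates a non-parametric test.
    print(stats.shapiro(pre_scores))
    print(stats.shapiro(post_scores))

    # Wilcoxon rank-sum test comparing the pre- and post-study score distributions.
    stat, p = stats.ranksums(pre_scores, post_scores)
    print(f"rank-sum statistic = {stat:.2f}, p = {p:.3f}")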
We instrumented the CO-oPS app to log participants' usage data. We also stored each participant's list of installed apps and permissions granted/denied at the time the CO-oPS app was installed and again at the end of the field study. We analyzed the usage logs to identify how, and at what frequency, participants utilized different features of the CO-oPS app. We also analyzed the pre- and post-study app/permission lists to investigate the changes made to apps and permissions during the study. Due to some technical issues with the CO-oPS logging feature, we could not log the in-app activities of the first seven communities. Therefore, app usage data were available for only the last fifteen communities (N = 68 participants).
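The pre/post comparison of app and permission snapshots amounts to a simple set difference per participant. The sketch below illustrates the idea; the field names and data layout are hypothetical, not the CO-oPS log schema.

    def summarize_changes(pre: dict[str, dict[str, bool]],
                          post: dict[str, dict[str, bool]]) -> dict[str, int]:
        """pre/post map package name -> {permission: granted?} for one participant."""
        installed = set(post) - set(pre)
        uninstalled = set(pre) - set(post)
        revoked = sum(
            1
            for pkg in set(pre) & set(post)
            for perm, was_granted in pre[pkg].items()
            if was_granted and not post[pkg].get(perm, False)
        )
        return {"installed": len(installed), "uninstalled": len(uninstalled), "revoked": revoked}

    pre = {"com.example.game": {"CAMERA": True, "LOCATION": True}}
    post = {"com.example.game": {"CAMERA": True, "LOCATION": False},
            "com.example.shop": {"CONTACTS": True}}
    print(summarize_changes(pre, post))  # {'installed': 1, 'uninstalled': 0, 'revoked': 1}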
We qualitatively coded our interview data using inductive analysis techniques <cit.> to understand how participants perceived the CO-oPS features that tie to the constructs we were measuring. Thus, our qualitative analysis complemented the quantitative results from our surveys. We first familiarized ourselves with our data by reading through each transcript and then template-coded our data based on the community oversight concepts described in Section 4. Specifically, we coded for 1) the level of transparency on the information shared by them or others, 2) the types of information they felt helped raise their awareness about the community's mobile privacy and security practices, 3) the level of trust of one another's privacy behaviors and advice, 4) how and whether they would individually participate in such a community, 5) how participants discerned community participation, 6) the trust and belonging they felt with their communities, and 7) their individual and community-level capacity to manage mobile privacy and security. The first author worked closely with three researchers to code the data iteratively and formed a consensus among their codes. The remaining authors helped guide their analyses and interpretation of the results. Appendix C presents the codes and illustrative quotations for each analysis theme.
§ RESULTS
On average, participants spent 32 minutes in the CO-oPS app over the four weeks, ranging from 19 minutes to 1 hour 31 minutes. Table-<ref> summarizes the activity types that participants performed with the CO-oPS app. Table-<ref> summarizes Cronbach's alpha, means, standard deviations, skewness, and kurtosis for each construct measured during the study. All Cronbach's alphas were greater than 0.80, which suggests good internal consistency of our measures. Next, we tested for within-group differences in the constructs based on whether participants completed the measures at the start or the end of the study. We discuss each measure below, along with the corresponding findings from the qualitative data and usage logs related to each construct.
Transparency:
As shown in Table <ref>, participants reported higher levels of transparency (an individual's perception of whether CO-oPS gave them a transparent view of the apps installed and permissions granted on their community's mobile devices) at the end of the study (p=.010, M1=4.07, M2=4.36). Hence, this result supported our hypothesis (H1). Our qualitative results also confirmed that almost all participants felt that CO-oPS made their community's mobile privacy and security decisions visible to them. Three-quarters of the participants interviewed (76%, N=39) explicitly said they liked the CO-oPS feature that let them check their own apps and permissions. Participants often mentioned that having the ability to see all their installed apps on one screen provided transparency into their own app usage. Two-thirds of the participants (67%, N=34) also brought up the visibility of others' apps and permissions. To this end, they said reviewing others' apps and permissions gave them a sense of purpose for using CO-oPS with their communities. Interestingly, while these participants appreciated the ability to review one another's apps and permissions, they often referred to the importance of the CO-oPS app-hiding feature because it made them feel they were intruding less on others' privacy. As such, C18P1 said, “Because some of the apps can be hidden if someone likes, that gives me the feeling of a relief when I see others' [apps installed].” However, some participants (25%, N=13) believed that this app-hiding feature defeats the main purpose of CO-oPS.
Some participants, on the other hand, perceived this transparency as a two-way privacy violation, affecting both their own privacy and the privacy of others. For example, more than one-third of the participants (38%, N=20) felt that their personal privacy was being violated because others who were not close to them could see their personal information (e.g., installed apps and permissions). Some participants (27%, N=14) also specifically said they might forget to hide an app after installation, which could leave their apps visible to others. Conversely, one-fourth of the participants (25%, N=13) felt that this transparency could be privacy-invasive to others as well. C11P2 said, “While using the app, my friends and I discussed privacy more than security because we can see the apps on our friend phones and I think that's not a good thing.”
The results from the log analysis (Table-<ref>) also reflected the above concerns. For instance, around half of the participants (49%, N=34) hid one or more of their installed apps from their communities. Participants hid six mobile apps on average, with counts ranging from one to 17. The most frequent types of apps that participants hid were games, video streaming apps, banking apps, and online shopping apps.
Awareness: Participants overall reported a higher level of awareness (an individual's perception of whether CO-oPS made them more aware of their community's mobile privacy and security decisions) after the study (p<.001, M1=3.95, M2=4.37). Hence, this result supported our hypothesis (H2). Our qualitative results showed that using the CO-oPS app helped participants raise their overall awareness of mobile privacy and security issues, along with their awareness of one another's privacy and security practices. For example, almost all participants (94%, N=48) felt that they became more aware of mobile privacy and security issues because CO-oPS focused their attention on permissions. They also became more aware of what personal information was being accessed by their installed apps. For example, C15P1 said, “It just makes it more obvious. It's very focused on permissions. So I think having that focus, it's very beneficial. People in the community, I see are now more concerned... for their permissions specifically. I totally see how it changed our perspectives.” Some of these participants (39%, N=20) also brought up the weekly protips they received in the CO-oPS app, as these helped them increase their awareness of mobile privacy and security in general.
Most participants (86%, N=44) also said they became more aware of whether a permission is necessary for an app. They often mentioned that comparing their own app permissions with others' helped them build this knowledge. Almost all of these participants (82%, N=42) also mentioned that it helped them keep track of their own apps, as they found more installed apps on CO-oPS' Discovery page than they were aware of. They also often mentioned granted permissions they found in the CO-oPS app (e.g., microphone, camera, location, and contacts) that they did not remember granting.
More than half of the participants (57%, N=29) said they became more aware of their community members' privacy and security behaviors. Here, they mostly mentioned one or two people in their community whose apps they could keep an eye on to ensure their safety. Lastly, some participants (39%, N=20) also said that they appreciated the CO-oPS feature that informed everyone about the permission changes made by any member, as this helped them decide whether to imitate those changes. Around one-fifth of the participants (18%, N=9) felt that the CO-oPS app did not make them aware of the app changes made in the community, i.e., the apps community members installed or uninstalled on their phones. This was not a feature we implemented, and these participants felt it had been overlooked and desired that awareness.
The findings from our log analysis (Table-<ref>) reflect the quantitative results. For example, most of our participants (87%, N=61) checked their own app permissions during the study. Participants reviewed the permissions of 23 apps on average and primarily explored the permissions of apps related to gaming, online shopping, social media, and financial payments. Alongside reviewing their own apps and permissions, about two-thirds of the participants (65%, N=46) reviewed others' app permissions. On average, they explored the permissions of 18 of their community members' apps. The most common types of apps being reviewed were social media, banking, and gaming apps.
Trust:
Similar to the above two constructs, the post-study responses showed a higher level of trust (an individual's perception of whether CO-oPS helped them foster trust in one another's mobile privacy and security decisions) among the community members (p<.001, M1=3.61, M2=4.28). This result confirmed our hypothesis (H3). About half of the participants (51%, N=26) said they found the advice provided by their community dependable. They were overall appreciative of the feedback and guidance they received from more tech-savvy community members, as it helped them learn more about risky apps and unnecessary or dangerous permissions. However, this trust did not always extend to all community members. For example, C18P1 said: "In this app, you're trusting each other's decisions. But for me, in this community, only [Name] is more tech-savvy. And most of the people are not. And these decisions are not always well-informed, right? So, I follow only [Name] to check what he has."
Conversely, participants felt that trusting others' privacy and security practices might be challenging in some cases. Around half of the participants (49%, N=25) said some of their community members were less knowledgeable about mobile privacy and security issues, so they could not trust those members' mobile privacy and security decisions enough to learn from them. As C11P3 said, “I don't think they were much of aware. They do not care of all this, you know, privacy and security stuff,... so I am not sure I followed them, their permissions and stuff.” Interestingly, they also often mentioned that those with less knowledge were less tech-savvy (37%, N=19) in general.
Individual Participation:
Participants reported a higher level of individual participation (perception of whether the CO-oPS app helped individuals participate in their own and others' mobile privacy and security decisions) after using the CO-oPS app for four weeks (p<.001, M1=3.78, M2=4.23). This supported our hypothesis (H4). Our qualitative results also revealed that participants took the initiative to change their apps and permissions and provided oversight to others. Notably, about two-thirds of the participants (67%, N=34) said that they made changes to their own apps and permissions. Participants often said that they made these changes after reviewing their own permissions and identifying unnecessary or concerning ones by themselves. Other participants said that comparing their own app permissions with their community's inspired them to change their app permissions. Some of these changes were made because of feedback received from other community members. C02P1 said, “I did some changes. I denied some of my permissions. [Name] asked me to remove the microphone from one of the apps I use for workouts. I have removed it now. ... also, you can always just check and then you just have to learn what permissions are suspicious and what are necessary.”
Next, more than one-third of the participants (41%, N=21) explicitly said that they provided feedback to their community members to warn them about apps they thought might be risky or permissions that might be a cause of privacy concern. To provide feedback, participants did not just use the CO-oPS messaging feature; they also mentioned using other media, e.g., text messages, social media private messages, phone calls, or talking in person.
Log results (Table-<ref>) demonstrate that individuals did provide oversight during the study. We found that 74% (N=51) of participants sent messages to someone in their communities; twelve of these messages were warnings about risky apps (games, social media), and thirty-five contained warnings about specific app permissions they found on their community members' phones. They mostly provided feedback about location, camera, microphone, and contact permissions. For instance, C09P1 messaged C09P3: “You're granting Douyin a ton of permissions. maybe we should keep the Chinese spyware to a minimum.”
However, some participants described a number of factors that reduced their motivation to participate. More than one-third of the participants (41%, N=21) believed that they were less tech-savvy than others in their community and therefore doubted their feedback would be useful to others. Interestingly, some participants (39%, N=20) felt that the people who participated with them were not close to them and therefore they did not care about those people's mobile privacy and security. A few participants (29%, N=15) expressed that they had very few mobile apps installed on their devices and so did not need to be concerned about mobile privacy and security. Ironically, some of these participants also believed that they had nothing to be concerned about because the personal information stored on their phones was not very sensitive. A few also felt that their information had already been leaked by some online entities, so it was too late to start caring about mobile privacy and security. As such, C14P2 said: “I don't see the point now because you can't just control what they [apps] already stole from you. I use very few apps, and all my data is already out there.”
Community Participation:
The community participation measure (an individual's perception of whether the CO-oPS app enabled the community to help one another make their mobile privacy and security decisions) increased over the duration of the field study (p=.003, M1=3.86, M2=4.18). This confirmed our hypothesis (H5). More than three-fourths of the participants (78%, N=40) said the CO-oPS app allowed them to learn from their community about mobile privacy and security management and to exchange knowledge about app safety and privacy. Most of these participants (71%, N=36) also mentioned that using CO-oPS helped them initiate more open discussions about mobile privacy and security in their community than ever before. They said these discussions most often took place offline when they saw one another at social gatherings. Around half of the participants (53%, N=27) specifically discussed receiving feedback and advice from their community. C17P1 said, “I mean, offline, or virtually, we kind of worked together, we talked, we get each other's knowledge. But that also happened with the co-ops app, that there were so many options to get in touch with each other by that messaging or, notifying them, or community discussion... I will say it kind of, we helped one another learn as a team.”
However, some participants said the CO-oPS app might not help increase community participation when members are either highly tech-savvy or not particularly tech-savvy. One-third of the participants (31%, N=16) envisioned that when community members are less tech-savvy, they might not be able to provide oversight to one another. On the other hand, 27% of the participants (N=14) said that their entire community was very tech-savvy and well aware of mobile privacy and security issues, and therefore they did not find it necessary to engage in discussion or exchange feedback with one another. C11P5 said: “My community is from a computer science background. I think we are already aware of these things. So, we don't need others' advice.”
Community Trust and Belonging: While community trust increased over the course of the study (p=.048, M1=4.03, M2=4.22), the difference in community belonging was not statistically significant (p=.209, M1=4.09, M2=4.20). Thus, hypothesis H6 is supported, but H7 is not. In our qualitative analysis, we found that all of our participants (100%, N=51) said they personally knew each member of their communities. Most of our participants (86%, N=44) mentioned having close relationships, e.g., as family members, friends, co-workers, or neighbors, with some members of their communities. Thus, using CO-oPS did not appear to bring groups closer together.
However, perceptions of trust and community relationships were still important in how individuals interacted with each other in CO-oPS. Around half of the participants (47%, N=24) said that they trusted their community not to misuse their app and permission information. One-fourth of our participants (24%, N=12) said they had peace of mind because they could rely on community members who would actively monitor their mobile privacy decisions and warn them if anything concerning was found. Here, we often noticed that participants referred to specific community members, not the entire community, whom they would rely on. C02P1 said, “With [Name] in my group, at least I know that if he saw something he didn’t think wasn’t proper, he will definitely let me and my husband know... We have that kind of relationship, so we know we can trust him.”
However, a few participants felt that sharing the apps they had installed might cause security issues due to a lack of trust in certain community members. For example, a few participants (18%, N=9) envisioned security concerns in sharing their financial apps, such as banking or mobile payment apps, with their community. They often brought up hypothetical scenarios in which a family member (e.g., a child) who knew what apps they had installed might somehow get access to their phone, log in to their financial apps, and transfer money. A couple of participants also imagined situations in which community members might judge or bully them because of their choice of gaming apps.
Self-efficacy:
Our participants reported higher levels of self-efficacy (an individual's capacity to manage their mobile privacy and security) at the end of the study (p<.001, M1=3.95, M2=4.32). This confirmed our hypothesis (H8). Most participants (80%, N=41) said they gained confidence in managing their mobile privacy and security, particularly by reviewing their installed apps and granted permissions and identifying whether there was anything concerning. C10P1 said, "So, I can now think through it, like what is the purpose of this permission? Like if the permission conflicts with the purpose of the application, I can just turn it off. You see, this is new. I now can differentiate what's necessary or what's not." Interestingly, more than half of the participants (57%, N=29) said they had become more knowledgeable about changing permissions, mostly because they could easily navigate to the app permission settings from the CO-oPS app. This perception was not universal, though. Around one-third of the participants (31%, N=16) also said they already had the ability to manage their own apps and permissions prior to participating in this study and had never reached out to others for help.
Community Collective Efficacy:
Participants reported higher community collective efficacy (individuals' belief that their community can co-manage mobile privacy and security) at the end of the field study (p<.01, M1=3.80, M2=4.12). This confirmed our hypothesis (H9). Reflecting this, most participants (88%, N=45) felt they could easily reach out to their community and work together as a team on their mobile privacy and security decisions. Most of these participants (67%, N=34) mentioned that they had at least one person in the community they could reach out to with questions about whether an app was safe to use or whether a permission should be granted. C03P5 said, “When I'm giving permissions, I now can tell that could be the things that are needed for a discussion. I do go to [Name] to ask what he thinks. what he thinks the permission is needed or not needed for the app. I do my permissions like this now.”
Behavioral Impact:
Our log analysis results provide further insight into participants' overall behavioral changes regarding mobile privacy and security. We found that 87% of the participants (N=61) changed at least one of their app permissions during the study. Participants changed 29 permissions on average, and all of these changes were to "deny." They mostly turned off permissions accessing their location (approximate and precise), camera, storage, and contacts. For instance, C15P4 changed the Location (Approximate) permission of the Chase, Snapchat, and Gyve apps installed on his phone. However, participants did not show a similar decrease in the number of apps they had on their phones. Around 78% (N=53) of participants installed new apps, whereas only a few participants (16%, N=11) uninstalled any apps. Participants installed two new apps on average, and the most common types of apps were mobile payment, banking, online shopping, social media, and games. On the other hand, the participants who uninstalled apps mostly removed gaming apps, along with a few spiritual, fitness, and dictionary apps. Perhaps learning what apps others in their communities were using gave participants ideas for additional apps they would be interested in.
§ DISCUSSION
While our prior work conceptually proposed community oversight as a mechanism for supporting privacy and security management <cit.>, this work is the first field study to empirically examine the real-world feasibility of implementing community oversight as a mechanism for co-managing mobile privacy and security among trusted groups. Our results largely confirm what was envisioned in that prior work: that community oversight does have the potential to help people help each other when it comes to decisions about mobile apps and app permissions <cit.>. Users' perceptions of their own and their community's capabilities to manage their mobile app privacy and security increased as a result of the study. The majority of participants modified their permissions, reducing what they were sharing with apps, and stated that their awareness of permissions and mobile apps also increased. Below we further discuss our overarching findings and their implications for the design of community oversight mechanisms.
Building Community Collective Efficacy
The goal of the CO-oPS app, as with many collaborative systems, is to build and support the collective capacity of groups to work together to achieve a common goal, in this case, managing apps and app permissions. Thus, building community collective efficacy for mobile privacy and security is the primary end goal of CO-oPS. To that end, we believe our study was successful. The interview comments suggest that the community oversight mechanism helped our participants increase their ability to support each other in their mobile privacy and security decisions. Participants described a shift to a more collaborative perspective: the app facilitated knowledge sharing within their community and an ability to rely on others to help in decision-making.
Our results also provide an empirical validation of the components of the community oversight model <cit.>. Again, both survey and interview results demonstrated the roles of transparency, awareness, trust, and participation in providing community oversight. Future work could examine what factors are most related to community collective efficacy and thus are most important to provide in a community oversight mechanism.
Role of Tech Expertise
One of the key themes was that the level of tech expertise among community members plays a key role in bolstering or hindering community oversight. For instance, our participants expressed concerns about the potential lack of participation in communities when most members are sufficiently tech-savvy or knowledgeable about mobile privacy and security. Others expressed concerns about there being a lack of knowledge in their communities and less trust in the decisions of those with less expertise. Kropczynski et al. <cit.> also noted the importance of those with tech expertise in older adult communities for spreading privacy and security knowledge, even among those with low self-efficacy. This suggests that community oversight mechanisms may be most beneficial and appropriate when there are asymmetrical relationships among the community members in such a way that some community members need support while other members could provide that support. A key challenge is then how to incentivize those with sufficient expertise to participate in such communities, particularly to help community members they are not as close to or not already providing tech care to <cit.>.
However, when this asymmetry in expertise is combined with a power imbalance, as is often seen in families, joint oversight might cause tension. Akter et al. <cit.> demonstrated that although teens had more expertise than their parents, they did not feel empowered to oversee their parents because of existing power hierarchies. In families, parents often use parental control apps, a more restrictive approach that relies on monitoring and surveillance to ensure teens' mobile online safety, privacy, and security. Teens often perceive this unidirectional oversight mechanism as overly restrictive and privacy-invasive <cit.>. Therefore, adolescent online safety researchers emphasize a middle ground between parental control and community oversight - one that allows parental oversight with bidirectional communication and teens' self-regulation <cit.>. Community oversight mechanisms might therefore need to incorporate additional features to support such communities with asymmetries in expertise and power.
Tensions around Transparency and Privacy
Another common concern was the privacy issues arising from transparently sharing apps and permissions with others. While many appreciated such transparency, participants regularly chose to hide certain apps from other people. Some participants found this transparency too invasive and anticipated potential problems resulting from others knowing what apps they use. Other concerns arose from being able to determine whether the advice given to another member was taken, based on whether that person's permissions changed. These concerns will likely grow as community size grows and communities contain more members who are not close to one another. A recent study that explored collaborative mobile privacy management among families found similar results: participants expressed concerns about including extended family members with more distant relationships <cit.>. To resolve these tensions, as with many collaborative systems, users may want more granular control over who can see which apps and permissions, rather than sharing equally with the whole community.
Incentives to Participate
Prior work identified that users might not be motivated to provide oversight to those not close to them <cit.> or those outside of existing care relationships, such as between parents and teens <cit.>. Indeed, some of our participants expressed similar sentiments and were not concerned about the decisions made by those not close to them. Despite this, the majority of participants did perform oversight, and many interviewees described discussions and behaviors that were sparked as a result of that oversight. Yet, with some incentive - in this case, participating in a user study - individuals performed the oversight, benefiting other community members. Thus, a key question remains as to how to incentivize such oversight for different community members and how those incentives may need to change over time.
Implications for Design
Our results demonstrate how features that provide transparency and awareness and support trust between community members are essential components of community oversight. Mechanisms must also enable and encourage individual and community participation in the collaborative efforts of privacy and security management. Our results provide further insights into the features and mechanisms needed in a tool for communities to participate in collaborative oversight of their mobile privacy and security.
Making Privacy Features Visible: While the CO-oPS app had a feature that allowed users to hide any of their installed apps from others, it often failed to provide users with a sufficient sense of privacy. This may be because they were not well aware of this feature or were unsure how well it functioned. Participants also reported concern over forgetting to hide apps as they install new ones. Thus, mechanisms to keep users aware of this app-hiding feature will be necessary. Das et al. <cit.> and DiGioia and Dourish <cit.> also emphasized the importance of visibility so that users can be aware of the availability of the security feature and adopt it. To help users be aware of this feature, users can be prompted regularly or upon installing new apps to ask whether they would like to hide them. If community members hide too many apps, however, oversight will be more limited. Thus, designers should also explore additional privacy features that can protect an individual's privacy while still allowing useful sharing with the community.
Raising Mobile Privacy Knowledge:
One of our findings suggested that participants would not trust the mobile privacy and security behaviors of people who were less knowledgeable. This suggests that collaborative decision-making would not effectively function when there is little trust within the group. Increasing trust within communities may be very challenging, and how to do so remains an important open question.
In <cit.>, participants also envisioned such situations and recommended including external expert users whom the community members can turn to for guidance when they do not have the necessary expertise. Several other networked privacy researchers also demonstrated the need for knowledgeable expert stewards <cit.>. Therefore, we recommend app designers explore ways to include mobile privacy and security experts in communities. Another possibility is, rather than bringing experts into the community, to raise the expertise of certain motivated community members. This could include nudges towards additional information or resources, possibly personalized to those most amenable to such additional knowledge.
Increasing Community Participation:
We found that our participants expressed several concerns about community motivations to provide oversight to one another. Individuals and communities, as a whole, need incentives to utilize a community oversight mechanism and continue to support each other <cit.> in their knowledge-sharing and decision-making. Such needs for incentivizing individual participation in communities to support collective participation were also suggested by Watson et al. in <cit.> and Moju-Igbene et al. in <cit.>. Therefore, community oversight mechanisms need to include features that encourage such engagement and make the engagement of others apparent. For example, community members can be notified of any new apps installed or permissions granted on anyone's phone. Moreover, nudges could remind community members to review random members' apps and permissions. Additionally, lightweight feedback features might also help users engage more. For instance, instead of messaging, users might prefer just to flag unsafe apps/permissions to notify others quickly.
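As a concrete illustration of these recommendations, the sketch below shows, in plain Python, one possible way to turn a member's app and permission changes into community feed notifications and to periodically nudge members to review one another. This is purely hypothetical, platform-agnostic pseudologic for illustration; none of the function names or data structures correspond to the actual CO-oPS implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class MemberSnapshot:
    """Apps installed and permissions granted for one community member (hypothetical)."""
    name: str
    apps: set = field(default_factory=set)
    granted: dict = field(default_factory=dict)   # app -> set of granted permissions

def feed_events(old: MemberSnapshot, new: MemberSnapshot):
    """Turn one member's changes into lightweight community feed notifications."""
    events = []
    for app in new.apps - old.apps:
        events.append(f"{new.name} installed {app}")
    for app in old.apps - new.apps:
        events.append(f"{new.name} uninstalled {app}")
    for app, perms in new.granted.items():
        for p in perms - old.granted.get(app, set()):
            events.append(f"{new.name} granted {p} to {app}")
    return events

def weekly_review_nudge(members):
    """Pick a random other member for each person to review this week."""
    return {m.name: random.choice([o.name for o in members if o is not m])
            for m in members}

# Example usage with hypothetical data
alice_old = MemberSnapshot("Alice", {"MapsGo"}, {"MapsGo": {"LOCATION"}})
alice_new = MemberSnapshot("Alice", {"MapsGo", "ChatNow"},
                           {"MapsGo": {"LOCATION"}, "ChatNow": {"CONTACTS", "MICROPHONE"}})
bob = MemberSnapshot("Bob", {"ChatNow"}, {"ChatNow": {"CONTACTS"}})

print(feed_events(alice_old, alice_new))
print(weekly_review_nudge([alice_new, bob]))
```

A design like this keeps the oversight signal lightweight: members see what changed and whom to review, without requiring free-form messaging.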
Limitations and Future Work
We would like to highlight the limitations of our study that should be addressed in future work. First, our sample was skewed toward Asian adults, most of whom completed college and graduate-level education. Therefore, our results may not be generalizable to communities with different ethnicities, education levels, and age groups. Future work should explore communities with broader demographics, ethnicity, and socio-economic status <cit.>. Another limitation is that we asked our initial participants to form their communities with people they knew, which sometimes led to groups where not everyone had strong bonds with one another. This may have led them to evaluate our app differently than if we had studied communities of families or close friends only. However, this also provided important insights into the importance of community trust in fostering oversight. Future work should examine how factors of group structure and relationships, including group size and varying levels of expertise, impact community members' motivation to participate and their oversight activities.
Although our qualitative results suggested that the CO-oPS app supported all necessary components of community oversight, this does not imply that our participants perceived the app as useful and easy to use, or that they formed a behavioral intent to adopt it <cit.>. This is because they used the app, as we requested, to perform various tasks as part of the study. Therefore, in future studies, we would want to evaluate its usability to address users' experience issues and measure technology acceptance <cit.> to identify how to design for widescale adoption of an app to help people collaborate with their loved ones to manage mobile privacy and security. Lastly, the study design did not include a control condition, which means that any effects from the community oversight mechanism cannot be differentiated from changes that may have occurred through using the app, such as increased attention on app permissions and privacy and security. Therefore, the results cannot conclusively demonstrate a causal relationship between communities' usage of CO-oPS and the dependent variables we analyzed. However, our qualitative insights provide evidence that some of the positive effects could be attributed to using the CO-oPS app. Moreover, there might be a survivorship bias effect in our results, as those who dropped out did not perceive any benefits from the app. Future research should investigate whether the same findings would hold with a control group and take steps to prevent potential survivorship bias.
§ CONCLUSION
Managing mobile privacy and security as an individual is hard. We believe community oversight is one potential social mechanism that can allow community members to exchange help regarding their mobile privacy and security decisions. Our CO-oPS app was developed to evaluate this idea of community oversight in building community collective efficacy for groups managing their mobile privacy and security together. Our results provide empirical evidence that community oversight can potentially have an impact on individuals and communities alike. Given the continued proliferation and adoption of smartphones and mobile apps, we believe apps that facilitate community oversight are an essential tool for communities to help one another keep their personal information safe and secure. We will continue to build upon this work to examine how we can help people successfully co-manage mobile privacy and security within their communities.
§ ACKNOWLEDGMENTS
We acknowledge the contributions of Nikko Osaka, Anoosh Hari, and Ricardo
Mangandi, in the CO-oPS app development. We would also like to thank the individuals who participated in our study. This research was supported by the U.S. National Science Foundation under grants CNS-1814068, CNS-1814110, and CNS-2326901. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. National Science Foundation.
§ SURVEY SCALES
Community Oversight Model Constructs: (Derived from Chouhan et al.'s conceptual model of Community Oversight <cit.>)
Transparency
1. The app gave me a transparent view into the apps installed and permissions granted on my own mobile device.
2. The app gave me a transparent view of the apps installed and permissions granted on the mobile devices of others.
3. The app gave us all a transparent view of the apps installed and permissions granted on the mobile devices of our community.
Awareness
1. The app made me more aware of my own mobile privacy and security decisions.
2. The app made me more aware of the mobile privacy and security decisions of others.
3. The app increased overall awareness of the mobile privacy and security decisions of our community as a whole.
Trust
1. The app helped me foster trust in the mobile privacy and security decisions of others in my community.
2. The app helped others in my community foster trust in my mobile privacy and security decisions.
3. The app helped foster trust in the mobile privacy and security decisions of our community as a whole.
Individual Participation
1. The app helped me make privacy and security decisions for myself.
2. The app helped me be involved in others' privacy and security decisions.
3. The app helped individuals in the community participate in privacy and security decisions of our community.
Community Participation
1. The app enabled me to participate in a community that helps one another regarding our mobile privacy and security decisions.
2. The app enabled others to participate in a community that helps one another regarding our mobile privacy and security decisions.
3. The app enabled the community to help one another regarding our mobile privacy and security decisions.
Community Trust (Derived from Chouhan et al.'s conceptual model of Community Oversight <cit.>)
1. I trust others in my community to protect my private information.
2. I trust others in my community to give me advice about mobile privacy and security.
3. Others in my community trust me to protect their private information.
4. Others in my community trust me to give them advice about mobile privacy and security.
Community Belonging (Pre-validated by Carroll et al. <cit.> and Sarason et al. <cit.>)
1. I can get what I need in this community.
2. This community helps me fulfill my needs.
3. I feel like a member of this community.
4. I belong in this community.
5. I have a say about what goes on in this community.
6. People in this community are good at influencing one another.
7. I feel connected to this community.
8. I have a good bond with others in this community.
Self-Efficacy (Pre-validated by Kropczynski et al. <cit.> based on a modified version from Bandura <cit.>)
1. I know that if I worked hard to learn about mobile privacy and security, I could make good decisions.
2. Mobile privacy and security decision-making is not too complicated for me to understand.
3. I think I am the kind of person who would learn to use best practices for good mobile privacy and security decision-making.
4. I think I am capable of learning to help others make good mobile privacy and security decisions.
5. Given a little time and training, I know I could learn about best practices for good mobile privacy and security decision-making for myself and my community.
Community-Collective Efficacy (Pre-validated by Kropczynski et al. <cit.> based on a modified version from Carroll et al. <cit.>)
1. Our community can cooperate to improve the quality of our decisions about mobile privacy and security.
2. Despite other obligations, we can find time to discuss our decisions about mobile privacy and security.
3. As a community, we can handle the mistakes and setbacks resulting from our decisions about mobile privacy and security without getting discouraged.
4. I am confident that we can be united in the decisions we make about mobile privacy and security that we present to outsiders.
5. As a community, we provide care and help for one another regarding our mobile privacy and security decisions.
6. Our community can leverage outside resources and services for our members to ensure the quality of mobile privacy and security decisions.
7. Our community can provide information for people with different interests and needs when it comes to mobile privacy and security decision-making.
§ SAMPLE QUESTIONS OF FOLLOWUP INTERVIEW
* Prior to participating in this study, how did you decide which apps are safe or unsafe to install on your mobile devices?
* How did you decide whether to accept or deny a permission request for an app?
* Did you ever review the permission lists of the apps installed on your phone? Why or why not? How?
* How frequently did people in your community discuss mobile privacy and security issues with one another?
* During the study, how frequently did your community members discuss mobile privacy and security decisions with one another?
* During the study, how did you communicate with others who were part of your community?
* During the study, how did you manage your mobile privacy and security decisions? Did you see any changes compared to prior to the study? Why or why not?
* Can you explain how and why the app did or did not help provide transparency into the mobile privacy and security decisions of other people in your community?
* How and why did the app help or not help raise awareness in your community about mobile privacy and security?
* How and why did the app enable or not enable you and individuals in your community to provide feedback and guidance about others’ mobile privacy and security?
* How and why did the app help or not help you work together as a community on mobile privacy and security?
* Were there any problems or concerns you or others in your community encountered when using the app?
* If given the option, would you want to continue using the CO-oPS app after this study? Why or why not?
* Who do you think would benefit the least from using the app, and why? Who would benefit the most, and why?
* Is there anything else that you would have liked the app to do? Any changes you would have liked on how the app currently works?
§ CODEBOOK
Codes (with the percentage and number of interviewees who mentioned each, N) and illustrative quotations, grouped by theme.

Transparency
* Visibility to own S&P (security and privacy) (76%, N=39): “So actually using co-ops, like, for me, I got to see the list, like, what the apps, what the actual permission all the apps are using and like, what access they have. Like the list of all at the same place. For me, it was like, good to have this.” -C11P3
* Visibility to others' S&P (67%, N=34): “I think having this app actually made me more, see these things of others, because it made it easier now to check, not only yours, but also other people's security settings.” -C18P1
* Violation of own privacy when others view (38%, N=20): “As some of the community are not someone who not much close, I wasnt that much confident when it came to share my apps and show my things, you know.” -C08P2
* Violation of own privacy when forget to hide (27%, N=14): “I didn't want to show a few apps, to my community members, but, as CO-oPS crashed the first time and I had to reinstall it. Then, I forgot to hide those apps. And so I think that is a privacy issue, which most people won't like it.” -C11P1
* Violation of others’ privacy (25%, N=13): “While using the app, my friends and I discussed privacy more than security because we can see the apps on our friend phones and I think that's very not a good thing. I did not feel good.” -C11P2
* Defeats the purpose (25%, N=13): “But sometimes, so while people are using some apps and keeping it private to them, they dont share with anyone but yeah, then I think this app wont help much for anyone” -C06P2

Awareness
* Overall S&P awareness (94%, N=48): “Sometimes we allow some permissions without understanding what's been packed. So after exploring that CO-oPS, I usually get to think twice about my apps, which really cool, I am more concerned about whether to allow or not allow any permissions to secure your phones. I would say it's very helpful to change my mind. And it helped me to be more careful about my mobile security.” -C15P1
* Compare own S&P with others (86%, N=44): “I think this is a great feature. Because with this, you are able to see and compare like, if what you are using and what others are using, it is like comparable Or you can just know what you are doing others are not. I guess you can help yourself.” -C17P1
* Keep track of own S&P (82%, N=42): “Earlier I couldn't know about what is there and what is not because I thought I had few apps the apps I did install. Then here [on CO-oPS app] I see I have more apps that I did not see it before... I think it helps, it feels like gives you to see what do you have on the phone, and the stuff that are accessed by the apps.” -C08P1
* Aware of others' S&P (57%, N=29): “So like seeing the option of like, every single app, and then seeing like what's granted and denied, that definitely helped a lot to see what each member, what apps did they have, and also what like permissions they grant. So it helps me realize what they're granting or not granting, so that I need to I help them or not.” -C02P4
* Aware of community's S&P changes (39%, N=20): “One of the benefits of it is, on the community section, I can go through my friend's app changes, which permissions of which apps you changed. And I can go ahead and do that and change it and have fun. Okay.” -C11P1
* Increased awareness from pro tips (39%, N=20): “So on this pro tip section in where you can know the basic information, like basic knowledge that you can just learn from and become careful about the app settings... I think this section talked some senses in us.” -C06P2
* Doesn't inform community about app changes (18%, N=9): “So, you see in the community, we get to know about the changes for the permissions, but we do not get any community posts for the app installing or installing. I think this is also important. When someone gets rid of an app, everuyone should know, right.” -C15P1

Trust
* Trust others’ advice (51%, N=26): “[Name] let me reconsider what I am doing, because when he tells me warns me, you are more likely to take it seriously. It'll come to light in your mind for sure. Yeah, I did change some of the things, yeah I think he was right. I see the stuff he warns me about are all good.” -C11P2
* Less aware community members (49%, N=25): “I dont think they were much of aware. They do not care of all this, you know, privacy and security stuff, so I am not sure they used it much.” -C11P3
* Less tech-savvy community members (37%, N=19): "For example, my mom... whenever she goes to the Facebook or YouTube, she asks questions. So, she cant be able to understand these privacy and security, its just so beyond her capacity. So I doubt she would be someone to rely on." -C16P1

Individual Participation
* Made own S&P changes (67%, N=34): “I got rid of some of my permissions. I haven't really thought of that before. Right now it has come to my knowledge that yes, it is a big problem and even scary. But I have that control, if you know what permissions are problematic, and what are necessary, you can always try clean up. Now cleaning up my phone has become a bit of a priority to me.” -C02P1
* Provided feedback (41%, N=21): “I reviewed X’s mobile privacy, I saw he was giving a permission, don't remember which one, then I told him that, allowing that permission is not good. And then I gave some good reasons why this is important to change this or not.” -C17P1
* Less tech-savvy (41%, N=21): "I dont think anyone needed my advice. I know they are careful, much careful than I am because they all are very savvy.” -C18P2
* Others are not close (39%, N=20): “I dont think I did much… I would be interested to help someone when I care them, maybe my parents mostly.” -C14P2
* Fewer app users (29%, N=15): “I did not use it much. I'm a very minimalist in my apps. So at this point, the apps that I have, I know what I have. My advice to others is use minimal apps and make your life easy..” -C03P5

Community Participation
* Learned from community (78%, N=40): “We could review each other's permissions and we could Share, so we could be careful about our privacy and things. And having your community’s apps and permission in CO-oPS, you can just learn by yourself like maybe you don't really have to grant this permission.” -C18P2
* Increased discussion in community (71%, N=36): “We had frequent discussions when we had discussions about what kind of security and permissions we have or on each other's phone, or in general the security issues out there. And I think the other day when we met, we were giving away some information. I think we also mentioned some of our apps are taking unnecessary data. For those apps purpose, the permissions were not necessary. So we asked to turn it off. And I don't know if they did change that, but I did. But yeah, that kind of interaction truly happened among us. And we had we shared opinion and try to suggest each other that this is not right.” -C06P4
* Received feedback from community (53%, N=27): “One of the action items we had a task like look through permissions and tell them like, hey, like, maybe you shouldn't do it. I think I received a message from X like, Hey, you have Bose like music app has access to your GPS location for some reason. Oh, wow. Which I did not notice it before. This was like, I really thanked him.” -C09P2
* Less tech-savvy community (31%, N=16): "I think when your community is not tech savvy, they wont feel the importance of this security and privacy. I can see to be an effective community at least some people must be tech savvy so that they can educate everyone else." -C16P1
* Tech-savvy community (27%, N=14): “We didnt find it useful, not really, because my community is from computer science background. I think we are already aware of these things. So, we dont need others advice.” -C11P5

Community Trust and Belonging
* Good relationship with community (86%, N=44): “We live in a same community, so we have a very good relation with the other people like X and all the other four members because we almost live in very close to and very similar minded community. So and I have personally good relationship with X that also drives me to participate in this research. So yeah, we try to go outing and explore things together.” -C15P1
* Trusted others to keep S&P info private (47%, N=24): “I guess, like the thought that they are my close circles. Like I know sharing my apps with them is safe.” -C15P1
* Depend on the community for S&P (24%, N=12): “With X in my group at least I know that if he saw something he didn't thought wasn't proper, he will definitely let us know, let me and my husband know.. We have that kind of relation, he, Yeah, he would let us know and he would tell us this just to delete that, we have that kind of relationship so, we know we can trust him, We know that,” -C02P1
* Had security concern for sharing S&P (18%, N=9): “I have my Chase app, if someone on the family, like my sons, know I have this app and can somehow get my phone,... if the app is logged in already, they can just transfer the money immediately.” -C02P3

Self-Efficacy
* Gained confidence in S&P (80%, N=41): “Okay, so, I will say that what is the purpose of this app? Like if it is like Facebook or WhatsApp, then it will use my contacts, my contact information can use or my photos they can use. But why they should go to my phone call manage permission or there will track my other applications permission. That doesn't make sense. So it conflicts with the purpose of this application. See, this is new. I can now differentiate whats necessary or what not." -C10P1
* Now know how to change permissions (57%, N=29): “So I actually now can use the settings to go directly change the permissions. Its much easier now. It has become like I randomly go check some apps and do changes instantly if I feel like.” -C12P2
* Already confident in S&P (31%, N=16): “I would say that'd be me. I'm pretty knowledgeable regarding, you know, the whole privacy and phones, I try to be secure about my own apps. Yeah, I think I am very careful with permissions and such. I know how to change things.” -C22P1

Community Collective Efficacy
* Felt teamwork for S&P (88%, N=45): “I mean, offline, or virtually, we kind of worked together, we talked, we get each other’s knowledge. We could easily just start a discussion about any apps and permissions stuff... I will say it kind of, we work together in this.” -C17P1
* Reached out to community (67%, N=34): “I think one thing is that I’m a little more confident of it now. So, when I’m giving permissions, I now can tell that could be the things that needed for a discussion. I do go to [Name] to ask what he thinks would do. what he thinks if the permission is needed or not needed for the app. I do my permissions like this now.” -C03P5
§ DESCRIPTIVE STATISTICS OF COMMUNITY OVERSIGHT CONSTRUCT ITEMS
Finite Beam Depth Analysis for Large Arrays
Alva Kosasih, Emil Björnson, Fellow, IEEE
Parts of this paper have been submitted for potential publication at the 2023 IEEE Global Communications Conference (Globecom) <cit.>. A. Kosasih and E. Björnson are with the Division of Communication Systems, KTH Royal Institute of Technology, Stockholm, Sweden. E-mail: {kosasih,emilbjo}@kth.se. This paper was supported by the Grant 2019-05068 from the Swedish Research Council.
July 31, 2023
Most wireless communication systems operate in the far-field region of antennas and antenna arrays, where waves are planar and beams have infinite depth. When antenna arrays become electrically large, it is possible that the receiver is in the radiative near-field of the transmitter, and vice versa. Recent works have shown that near-field beamforming exhibits a finite depth, which enables a new depth-based spatial multiplexing paradigm. In this paper, we explore how the shape and size of an array determine the near-field beam behaviors. In particular, we investigate the 3 dB beam depth (BD), defined as the range of distances where the gain is greater than half of the peak gain. We derive analytical gain and BD expressions and prove how they depend on the aperture area and length. For non-broadside transmissions, we find that the BD increases as the transmitter approaches the end-fire direction of the array. Furthermore, it is sufficient to characterize the BD for a broadside transmitter, as the beam pattern with a non-broadside transmitter can be approximated by that of a smaller/projected array with a broadside transmitter. Our analysis demonstrates that the BD can be ordered from smallest to largest as ULA, circular, and square arrays.
Beam depth, beam width, radiative near-field, Fresnel region, finite-depth beamforming, rectangular arrays, circular arrays.
§ INTRODUCTION
Thanks to the successful implementation of massive multiple-input multiple-output (M-MIMO) in 5G systems in sub-6 GHz and mm-wave bands <cit.>, it is likely that even larger arrays (e.g., extremely large aperture arrays <cit.> and holographic MIMO <cit.>) and wider spectrum at higher frequencies <cit.> will be employed in the next generation of wireless systems <cit.>. A higher carrier frequency implies a smaller wavelength and, therefore, a smaller antenna size. As the array's aperture grows and the wavelength shrinks, the traditional far-field distance boundary, known as the Fraunhofer distance, will be very large since it is proportional to the squared aperture length and inversely proportional to the wavelength.
Hence, future wireless systems are unlikely to operate in the far field and the signal processing must be redesigned to not rely on the far-field assumption.
For instance, arrays configured under far-field assumptions suffer significant array gain degradation when used in the near-field <cit.>.
Three regions have been defined to classify the wavefront behavior with respect to the distance between the transmitter and receiver <cit.>: the reactive near-field, the radiative near-field, and the far-field <cit.>. The latter two are of interest for long-range communications, as in mobile networks.
At the beginning of the radiative near-field, there are discernible and unavoidable amplitude variations over the wavefront, but the main property is the spherical phase variations, which can be managed through refined signal processing. In the far-field, both the amplitude and phase variations are insignificant.
In line with this, <cit.> suggested that conventional modeling assumptions, which include the assumption of uniform plane wave impingement, are no longer applicable when considering a large array. In particular, their research focuses on the mathematical modeling and performance analysis of extra-large arrays. They have developed a closed-form expression for the resulting signal-to-noise ratio (SNR) using optimal single-user maximum-ratio combining/maximum-ratio transmission beamforming techniques. Similarly, <cit.> suggested that an exact near-field channel model is needed to evaluate spectral efficiency with the perspective of interference suppression capability.
§.§ Related Works
Previously negligible propagation phenomena become dominant in the near-field.
In conventional far-field beamforming, the signal in line-of-sight scenarios is focused on a point that is infinitely far away, resulting in a directive beam with a limited angular width but continuing to infinity.
In the near-field, we can aim the radiated signal at a specific point in space and achieve a radiation pattern that is focused around it, in both angle and depth.
This is known as beamforming in the distance domain or finite-depth beamforming <cit.>. In other words, near-field beamforming enables one to configure the beam not only in the angular domain (beam width) but also in the distance domain (beam depth). Focusing the signal at a specific location can be achieved by utilizing a matched filter that takes into account the phase shift of each individual element of the array.
The concept of finite-depth near-field beamforming was first introduced in <cit.>. The fundamental properties can be studied using continuous matched filtering, representing the case when the array consists of many small elements so that summations over the antennas can be expressed using integrals <cit.>. Since then, researchers have conducted numerous studies utilizing the limited-depth capability of near-field beamforming. In <cit.>, the focusing property was utilized to derive the closed-form array response, which is then used to develop an effective method for location and channel parameters estimation. Similarly, <cit.> proposed the focal scanning method for sensing the location of the receiver, based on the analytic expression of the reflection coefficient of a reflective intelligent surface. <cit.> proposed a near-field channel estimation using a polar domain representation to accommodate both the angular and distance domains. Similarly, <cit.> proposed location division multiple access (LDMA) to enhance spectrum efficiency compared to classical spatial division multiple access (SDMA). The LDMA exploited extra spatial resources in the distance domain to serve different users at different locations in the near field.
<cit.> further analyzed the near-field beam for massive wideband phased arrays. The focus of their work was to develop a low-complexity technique for constructing beams, which is particularly suitable for massive wideband phased arrays.
Despite the growing interest in near-field beamforming, there have been no investigations reported on the near-field beam radiation pattern in the distance domain, referred to as the beam depth (BD), except for the recent study <cit.>. Understanding the near-field beam radiation pattern is crucial because it enables us to optimize beamforming system performance and accuracy. For example, the behavior of the BD can provide insights into how to spatially multiplex users in the depth domain with minimal interference, and what array geometries to use.
<cit.> characterized the BD including the distance interval where the array gain is within 3 dB from the peak value. However, the analysis is restricted to a square array with a transmitter in the broadside direction. We note that the antenna array geometry can be optimized to make the most out of the finite BD property of near-field beamforming.
§.§ Contributions
In this paper, our objective is to extend the understanding of near-field beamforming by considering a rectangular array model with a tunable width-to-height proportion. In particular, we investigate the array gain and BD.
We analytically derive the array gain and BD using the Fresnel approximation, which is demonstrated to be an accurate and reliable method in our analysis. Using these derived expressions, we explore how various geometrical factors, such as the width-to-height proportion, area, and aperture length, affect the array gain and BD of rectangular arrays. We further consider rectangular arrays with a non-broadside transmitter and analyze how the array gain and BD are affected by the azimuth angle. Moreover, we extend the analysis by considering near-field beamforming with a circular array, and showcase the potential advantages and limitations compared with rectangular arrays. Finally, to better understand the beam behaviors in the distance domain, we analyze the nulls and sidelobes appearing as a function of the propagation distance.
There are four main contributions of this paper:
* We analyze the BD for broadside beamforming with respect to the rectangular array shape and size. The analysis is performed by deriving the normalized array gain, computing the BD analytically, and obtaining the upper finite BD limit. The analysis allows us to characterize the finite-depth beam behavior in the near-field and how the array geometry can be tuned to achieve preferred properties. This analysis can be used to define beam orthogonality in the spatial domain.
* We extend the analysis to consider non-broadside beamforming, in which case direct Fresnel approximations are highly inaccurate. We refine the approximation to obtain high accuracy and then derive the analytical array gain with respect to the azimuth angle. Furthermore, we analyze the BD with respect to the azimuth angle.
* We analyze the BD for a circular array. The analysis is performed by deriving the normalized array gain, computing the BD analytically, and obtaining the upper finite BD limit. Furthermore, we characterize the beam behaviors outside the mainlobe in the distance domain by analyzing the nulls and sidelobes as a function of the propagation distance. The results provide insights into how such sidelobes may interfere with the other users' signals.
* We investigate the possibility that a non-broadside transmitter array can be modeled as a smaller array with a broadside transmitter by projecting it onto a plane perpendicular to the axis of the non-broadside transmitter. Our results confirm that the approximation is sufficiently accurate, as it closely matches the exact gain obtained from the non-broadside transmitter array.
The paper is organized as follows. In Section <ref>, we provide preliminaries and definitions for an individual antenna and antenna array. In Section <ref>, we derive the gain and BD of rectangular arrays with a broadside transmitter and analyze its properties. In Section <ref>, we analyze the gain and BD of rectangular arrays with a non-broadside transmitter. In Section <ref>, we derive the gain and BD of a circular array. This paper is concluded in Section <ref>.
§ PRELIMINARIES
In this section, we discuss the behavior of electromagnetic radiation from a passive antenna and an antenna array.
Three regions classically define the electromagnetic radiation patterns with respect to the propagation distance: the reactive near-field, radiative near-field, and far-field.
§.§ Gain of a Passive Antenna
The reactive near-field of a large antenna with an aperture length of D spans from z=0 to z≈ 0.62 √(D^3/λ) <cit.>, where z is the distance from the antenna and λ is the wavelength. The radiative near-field, also known as the Fresnel region, starts roughly where the reactive near-field ends, i.e., from z=1.2D to z= d_F <cit.>, where
d_F = 2D^2/λ,
is the Fraunhofer distance <cit.>, obtained by using a Taylor approximation with an assumption that a phase difference of π/8 over the aperture is negligible when analyzing the gain.
These definitions are based on classical approximation, which might not be well-tuned for future systems. It is therefore essential to begin our new analysis from electric field expressions.
We consider a free-space propagation scenario, where there exists an isotropic transmitter located at (x_t,y_t,z ) and a planar receiver centered at the origin covering an area denoted as
𝒜 in the xy-plane. When the transmitter emits a signal with polarization in the y-dimension, the electric field at the point (x,y,0) of the receiver aperture becomes <cit.>
E (x,y) = E_0/√(4 π)√(z( x-x_t )^2+z^2)/ r ^5/4 e^-j2π/λ√(r),
where r = ( ( x-x_t)^2 + (y-y_t)^2 + z^2 ) is the squared Euclidean distance between the transmitter and the considered point, and E_0 is the electric intensity of the transmitted signal in the unit of Volt. The electric field expression in (<ref>) is valid in the Fresnel region since it takes into account the reduction in effective area due to the incident angle, polarization mismatch, and free-space geometric path loss. The complex-valued channel response to the receive antenna can then be calculated as in <cit.>
h = 1/√(A)E_0∫_𝒜 E(x,y)dx dy.
Hence, the received power is computed as E_0^2 |h|^2/η, where η is the impedance of free space. The receive antenna gain compared to an isotropic antenna is computed as <cit.>
G = |∫_𝒜 E(x,y) dx dy |^2 /λ^2/4π∫_𝒜 |E(x,y)|^2 dx dy ,
where λ^2/4π is the area of the isotropic antenna.
The antenna gain is normally analyzed in the far-field where the incident wave is plane so that G_ plane= 4π/λ^2 A, where A is the physical area of the antenna element. A different value is obtained in the Fresnel region where there can be phase and amplitude variations over the aperture.
In this paper, we consider the normalized antenna gain defined in <cit.> as
G_ antenna = G/G_ plane = | ∫_𝒜 E(x,y) dx dy |^2/A ∫_𝒜|E(x,y)|^2 dx dy ,
which compares the achieved antenna gain in the Fresnel region to the one obtained in the far field.
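To make these definitions concrete, the following Python sketch numerically evaluates the normalized antenna gain of a single square element by replacing the integrals above with Riemann sums. The element size, wavelength, and distances are illustrative choices rather than values from this paper, and the amplitude of the electric field model is read as √(z((x-x_t)^2+z^2))/r^(5/4).

```python
import numpy as np

def E_field(x, y, x_t, y_t, z, lam, E0=1.0):
    """Electric field of the model above: isotropic transmitter at (x_t, y_t, z),
    y-polarized signal, observation point (x, y, 0)."""
    r = (x - x_t)**2 + (y - y_t)**2 + z**2        # squared distance
    amp = E0 / np.sqrt(4 * np.pi) * np.sqrt(z * ((x - x_t)**2 + z**2)) / r**(5 / 4)
    return amp * np.exp(-1j * 2 * np.pi / lam * np.sqrt(r))

def normalized_antenna_gain(side, z, lam, K=400):
    """Normalized gain of one square element (side x side) centred at the origin,
    with the integrals approximated by a K x K Riemann sum over the aperture."""
    xs = (np.arange(K) + 0.5) / K * side - side / 2
    X, Y = np.meshgrid(xs, xs)
    dA = (side / K)**2
    E = E_field(X, Y, 0.0, 0.0, z, lam)
    num = np.abs(np.sum(E) * dA)**2               # |integral of E|^2
    den = side**2 * np.sum(np.abs(E)**2) * dA     # A * integral of |E|^2
    return num / den

lam = 0.1          # 3 GHz carrier (illustrative)
side = 2 * lam     # a large element, so near-field losses are visible (illustrative)
for z in [0.5, 2.0, 20.0]:
    print(f"z = {z:5.1f} m  ->  G_antenna = {normalized_antenna_gain(side, z, lam):.4f}")
```

As expected, the gain approaches one when the transmitter moves far from the element and drops slightly when phase and amplitude variations over the aperture become noticeable.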
§.§ Gain of an Antenna Array
In this paper, we consider a uniform planar array (UPA) with N antenna elements, where N can be a large number. The antenna elements are uniformly distributed across the array and we consider the same number of antenna elements in the x and y dimensions.
However, the elements can be rectangular, which will result in a rectangular aperture shape in the xy-plane.
Each antenna element is indexed by n ∈{1, …, √(N)} and m ∈{1, …, √(N)} which are the location indexes on the x and y axes, respectively. The Fraunhofer array distance depends on the antenna/array sizes and becomes d_FA = Nd_F.
We consider the same isotropic transmitter as before but assume that the receiver is equipped with the UPA. The complex-valued channel response to a receive antenna element can then be written similarly to (<ref>) as
h_n,m = 1/√(A)E_0∫_𝒜 E(x,y)dx dy,
where A denotes the physical area of each antenna.
If the transmitted signal has a power of E_0^2/η and matched filtering is utilized, the signal-to-noise ratio (SNR) becomes
SNR = E_0^2/ησ^2∑_n=1^√(N)∑_m=1^√(N) |h_n,m|^2,
where σ^2 denotes the receiver noise variance.
Similarly to (<ref>), we define the normalized array gain
G_ array = ∑_n=1^√(N)∑_m=1^√(N)| ∫_𝒜 E(x,y) dx dy |^2/N A ∫_𝒜|E(x,y)|^2 dx dy ,
which is the combined antenna and array gains achieved in the Fresnel region compared to what is achievable in the far-field.
§ GAIN AND BEAM DEPTH OF RECTANGULAR ARRAYS WITH A BROADSIDE TRANSMITTER
In this section, we consider a UPA with a rectangular shape. As illustrated in Fig. <ref>, the array is composed of antenna elements, each with a height of ℓ = D/√((1+c^2)) and width of w=c ℓ for some c ∈ℝ_0^+, where D denotes the diagonal.
We refer to this as a generalized rectangular array since different shapes are obtained depending on the value of c. If we fix the array's aperture area A_ array = NA, the diagonal of the array (i.e., the aperture length) becomes √(A_ array(1+c^2)/c) which varies with c. If we instead fix the array's aperture length D_ array =
√(N)D, the array's aperture area c (D_ array )^2/(1+c^2) will depend on c. We consider both scenarios in this paper to study how the shape of the beamforming depends on the array geometry. We take the viewpoint of a receiver array with an isotropic transmitter, for ease in presentation, but the same results apply in the reciprocal setup with a transmitting array.
The antenna element with index (n,m) is located at the point (x_n,y_m,0), where
x_n = ( n - √(N)+1/2) w , y_m = ( m - √(N)+1/2) ℓ,
and covers an area of
𝒜 = { (x,y,0): |x-x_n| ≤w/2, |y-y_m| ≤ℓ/2}.
Let us now consider the normalized array gain in (<ref>). We can either numerically evaluate it or use a Fresnel approximation that allows us to obtain an analytical expression. In the latter case, we approximate E(x,y) in (<ref>) as
E(x,y) ≈E_0/√(4 π) z e^-j 2 π/λ( z+x^2/2z + y^2/2z).
This approximation is tight in the part of the Fresnel region where the amplitude variations are negligible over the aperture. The term z+x^2/2z + y^2/2z determines the phase variations in (<ref>) and is obtained from a first-order Taylor approximation of the Euclidean distance between transmitter and array:
√(x^2+y^2+z^2) = z√(1+ (x^2+y^2/z^2))≈ z + x^2+y^2/2z.
The approximation error is small when x^2+y^2/z^2≤ 0.1745, which results in relative errors below 3.5 · 10^-3 <cit.>.
To give a concrete example, we consider a square array with c=1, N=10^4 antennas, and D=0.025 meter.
The relative error is smaller than 3.5 · 10^-3 if z > 1.2 D √(N).
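As a quick sanity check of this example, the short Python sketch below (illustrative, using the 3 GHz carrier considered later in this section) verifies that at z = 1.2 D√(N) the largest value of (x^2+y^2)/z^2 over the aperture is indeed close to the 0.1745 threshold, and prints the associated element and array Fraunhofer distances.

```python
import numpy as np

N = 10**4        # number of antennas
D = 0.025        # diagonal of one element [m]
lam = 0.1        # wavelength at f_c = 3 GHz [m]

D_array = np.sqrt(N) * D          # aperture length (diagonal) of the array
d_F = 2 * D**2 / lam              # Fraunhofer distance of one element
d_FA = N * d_F                    # Fraunhofer array distance
z_min = 1.2 * D * np.sqrt(N)      # distance beyond which the relative error is below 3.5e-3

# The largest (x^2 + y^2)/z^2 over the aperture occurs at a corner, i.e., at radius D_array/2
ratio = (D_array / 2)**2 / z_min**2
print(f"D_array = {D_array:.2f} m, d_F = {d_F:.4f} m, d_FA = {d_FA:.1f} m")
print(f"z_min = {z_min:.2f} m, max (x^2+y^2)/z^2 at z_min = {ratio:.4f} (threshold 0.1745)")
```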
Suppose the matched filtering in the array is focused on the point (0, 0, F), which might be different from the location (0,0,z) of the transmitter. We will now use the Fresnel approximation in (<ref>) to compute the normalized array gain by injecting the phase-shift e^+j 2π/λ(x^2/2F + y^2/2F) into the integrals in (<ref>) to represent a continuous matched filter <cit.>.
When the transmitter is located at (0,0,z) and the matched filtering is focused on (0, 0, F), the Fresnel approximation of the normalized array gain for the generalized rectangular array becomes
Ĝ_ rect(c) = ( C^2( c√( a ) )+S^2( c√( a )) ) ( C^2( √( a ) )+S^2( √( a ) ) ) /(c a )^2,
where C(· ) and S(· ) are the Fresnel integrals <cit.>, a = d_FA/4z_ eff(1+c^2), and z_ eff = Fz/F-z.
The proof is provided in Appendix <ref>
and is inspired by the derivation in <cit.>.
In Fig. <ref>, we evaluate the tightness of the analytical expression provided in Theorem <ref>. We can see that the Fresnel approximation is close to the exact normalized array gain, especially when z≥ d_B where d_B=2 D √(N) = 400 d_F denotes the Björnson distance <cit.> since then the amplitude variations are insignificant. Since D depends on λ, it also depends on the carrier frequency f_c. We consider f_c=3 GHz in this paper. The exact normalized antenna gain is computed numerically using (<ref>) and E(x,y) as stated in (<ref>) with the injected phase-shift e^+j 2π/λ (x^2/2F + y^2/2F) that represents the matched filtering.
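The expression in the theorem is also easy to evaluate numerically. The sketch below (an illustrative check, with an arbitrarily chosen focal distance) uses scipy's Fresnel integrals to compute Ĝ_rect(c) for the square-array example above, confirming that the gain approaches one at the focal point and decays on both sides of it. Taking the absolute value of a inside the square roots is an implementation choice that lets transmitter distances on both sides of the focal point be evaluated, in line with the even-symmetry noted in the corollaries below.

```python
import numpy as np
from scipy.special import fresnel   # fresnel(x) returns (S(x), C(x))

def gain_rect(c, z, F, d_FA):
    """Normalized array gain of the theorem for width/height ratio c,
    transmitter distance z, and focal distance F."""
    z_eff = F * z / (F - z)
    a = d_FA / (4 * z_eff * (1 + c**2))
    S1, C1 = fresnel(c * np.sqrt(np.abs(a)))
    S2, C2 = fresnel(np.sqrt(np.abs(a)))
    return (C1**2 + S1**2) * (C2**2 + S2**2) / (c * a)**2

N, D, lam = 10**4, 0.025, 0.1
d_FA = N * 2 * D**2 / lam            # = 125 m for the running example
F = 10.0                             # focal distance [m] (illustrative)
for z in [4.0, 7.0, 9.9, 10.1, 15.0, 50.0]:
    print(f"z = {z:5.1f} m  ->  G_hat = {gain_rect(1.0, z, F, d_FA):.3f}")
```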
The array gain is the same irrespective of how the rectangular array is rotated in the xy-plane, which is proved as follows.
The array gain expression in (<ref>) is a symmetric function in the sense that Ĝ_ rect(c) = Ĝ_ rect(c'), where c'= 1/c.
Let us define k ≜d_ FA/4 z_ eff and a' =d_ FA/4 z_ eff(1+(1/c)^2). One can express c√( a ) = √(kc^2/1+c^2). If we replace c by c', we obtain c'√( a' ) = √(k(1/c)^2/1+(1/c)^2) = √(k/1+c^2) = √( a ) and √( a' ) =√(k/1+(1/c)^2) = √(kc^2/1+c^2) = c√( a ). Therefore, the numerator of (<ref>) can also be written as ( C^2( c'√( a' ) )+S^2( c'√( a' ) ) ) ( C^2( √( a' ) )+S^2( √( a' ) ) ). We notice that the denominator of (<ref>) satisfies (c a )^2 = (c' a' )^2. Hence, Ĝ_ rect(c) = Ĝ_ rect(c').
The array gain in (<ref>) is an even function, i.e., Ĝ_ rect(c) = Ĝ_ rect(-c).
From <cit.>, we know that C(c√( a )) and S(c√( a )) are odd functions, that is, C(-c√( a )) = -C(c√( a )) and S(-c√( a )) = -S(c√( a )). The square of the Fresnel integrals C^2(c√( a )) and S^2(c√( a )) are both even functions since the multiplication of two odd functions gives us an even function. Therefore, C^2(c√( a )) + S^2(c√( a )) =C^2(-c√( a )) + S^2(-c√( a )) is an even function. The same applies to the functions
C^2(√( a )) + S^2(√( a )) =C^2(-√( a )) + S^2(-√( a )) and (cx)^2 = (-cx)^2. Putting them together, we obtain (C^2(c√( a )) + S^2(c√( a ))) (C^2(√( a )) + S^2(√( a )) )/(cx)^2=
(C^2(-c√( a )) + S^2(-c√( a ))) (C^2(-√( a )) + S^2(-√( a )) )/(-cx)^2, which proves that Ĝ_ rect(c) is an even function.
The even function property of Ĝ_ rect(c) implies that the array gain is symmetric with respect to the y-axis. We will use this property to derive an analytical expression of the BD of a rectangular array.
When applying matched filtering in the radiative near-field, the beamforming might have a limited BD; in fact, <cit.> advocates that the near-field ends at the point where the BD becomes infinite and not at the Fraunhofer distance. For a given focus point F, this implies that the array gain is only large for a limited range of transmitter distances z. This stands in contrast to far-field focusing where the array gain remains large in the range z ∈ [F,∞).
We consider the 3 dB BD, BD_3 dB, where the normalized array gain is higher than half of its peak gain. We will compute an analytical approximation of the BD using Theorem <ref>.
As the peak value of Ĝ_ rect(c) is 1, we are interested in the value of a for which Ĝ_ rect(c)=0.5, indicating the 3 dB loss. We further denote this value of a as a_3 dB, for which Ĝ_ rect(c) = Ĝ_ rect(-c)= 0.5 since the function is even according to Corollary <ref>. The precise value depends on c but can be easily computed numerically.
The 3 dB BD is computed as follows by using the Fresnel approximation.
The 3 dB BD of a rectangular array that is focused on F≥ d_B is approximately computed as
BD_3 dB^ rect =
8 d_FA F^2 a_3 dB (1+c^2) / d_FA^2 - ( 4 F a_3 dB (1+c^2) )^2 , F < d_FA/4 a_3 dB (1+c^2),
∞, F ≥d_FA/4 a_3 dB (1+c^2).
We can obtain a value of a from (<ref>), for which Ĝ_ rect(c) is approximately 0.5. The value of a is denoted as a_3 dB. From Theorem <ref>, we know that a_3 dB = d_FA/ 4z_ eff (1+c^2) = d_FA (F-z)/4 Fz (1+c^2), where the value of z indicates the 3 dB gain in the propagation distance. We can then write z = d_FAF/d_FA + 4 F a_3 dB (1+c^2). Since Ĝ_ rect(c) is an even function, there are two z-values resulting in the 3 dB gain. The difference between those two is the range of distances where the gain is greater than half of its peak gain, referred to as the 3 dB BD. Therefore, the 3 dB BD is computed as d_FA F /d_FA - 4 F a_3 dB (1+c^2) - d_FA F /d_FA + 4 F a_3 dB (1+c^2) which results in (<ref>). When F ≥d_FA/4 a_3 dB (1+c^2), one of the z-values is negative, which implies there is no upper limit on the beam depth.
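The closed-form BD is simple to evaluate once a_3dB is known. The illustrative sketch below uses a_3dB ≈ 1.25 for c = 1, which is the value implied by the square-array special case discussed below, and checks that the general expression then coincides with 20 d_FA F^2/(d_FA^2 - 100 F^2); the focal distances are illustrative choices.

```python
import numpy as np

def bd_3db(F, d_FA, c, a_3db):
    """3 dB beam depth of the theorem; returns np.inf beyond the finite-BD limit."""
    if F >= d_FA / (4 * a_3db * (1 + c**2)):
        return np.inf
    k = 4 * F * a_3db * (1 + c**2)
    return 8 * d_FA * F**2 * a_3db * (1 + c**2) / (d_FA**2 - k**2)

d_FA = 125.0            # Fraunhofer array distance of the running example [m]
c, a_3db = 1.0, 1.25    # square array; a_3dB implied by the c = 1 special case
for F in [5.0, 10.0, 12.0, 12.5, 20.0]:
    bd = bd_3db(F, d_FA, c, a_3db)
    check = 20 * d_FA * F**2 / (d_FA**2 - 100 * F**2) if F < d_FA / 10 else np.inf
    print(f"F = {F:5.1f} m  ->  BD_3dB = {bd:9.3f} m   (c = 1 closed form: {check:9.3f} m)")
```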
The fact that the BD is finite in the radiative near-field enables spatial multiplexing of closely spaced users, even in the same angular direction. It is essential to understand which array geometry makes this feature most prevalent.
BD_3 dB^ rect in Theorem <ref> is maximized when c=1.
The 3 dB BD depends on the value of a_3 dB for which Ĝ_ rect(c) = 0.5. Since it is difficult to get a closed-form solution of a_3 dB with respect to c, we simulate it as given in Fig. <ref>.
a_3 dB is a decreasing function of c while (1+c^2) is an increasing function, and their product is maximized when c=1. Since the numerator of BD_3 dB^ rect grows and its denominator shrinks as the product a_3 dB (1+c^2) increases, the largest value of BD_3 dB^ rect is obtained when c=1.
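Since a_3dB has no closed form, it can be obtained by numerically solving Ĝ_rect = 0.5 with respect to a, as in the illustrative sketch below (the grid-scan bracketing is an implementation choice, not taken from the paper). The output reflects the behavior used in the proof: a_3dB decreases with c, while the product a_3dB(1+c^2) that enters both the BD and the finite BD limit is largest at c = 1.

```python
import numpy as np
from scipy.special import fresnel
from scipy.optimize import brentq

def gain_of_a(a, c):
    """Gain of Theorem 1 as a function of a = d_FA / (4 z_eff (1+c^2))."""
    S1, C1 = fresnel(c * np.sqrt(a))
    S2, C2 = fresnel(np.sqrt(a))
    return (C1**2 + S1**2) * (C2**2 + S2**2) / (c * a)**2

def a_3db(c):
    """Smallest a > 0 with gain 0.5, found by a grid scan plus root bracketing."""
    grid = np.linspace(1e-4, 50.0, 20000)
    g = gain_of_a(grid, c)
    idx = np.argmax(g < 0.5)                   # first grid point below half the peak
    return brentq(lambda a: gain_of_a(a, c) - 0.5, grid[idx - 1], grid[idx])

for c in [0.1, 0.5, 1.0, 2.0, 10.0]:
    a3 = a_3db(c)
    print(f"c = {c:5.2f}:  a_3dB = {a3:8.4f},  a_3dB*(1+c^2) = {a3 * (1 + c**2):.4f}")
```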
We obtain a square array when c=1 and then BD_3 dB^ rect = 20 d_FA F^2 /d_FA^2 - 100 F^2, which is exactly the expression of the BD in <cit.>. Another property from the analytical 3dB BD in (<ref>) is that it goes to infinity when the focus distance F is greater than d_FA/4 a_3 dB (1+c^2). We refer to this focus boundary as the finite BD limit and stress that it separates the near-field and far-field propagation characteristics. To study this limit, we plot its value in Fig. <ref> as a function of c. When doing so, there are two options.
We can either fix the aperture length or the total aperture area to observe how the finite BD limit depends on c. If we fix the aperture area as A_ array =N λ^2/32, the aperture length varies with c as √(A_ array (1+c^2)/c). Since d_FA = N d_F, where d_F = 2D^2/λ, the finite BD limit d_FA/4 a_3 dB (1+c^2) depends strongly on the aperture length D_ array, as demonstrated in Fig. <ref>. The finite BD limit is minimized when c=1.
If we fix the aperture length D_ array = √(N)λ/4,
the limit now depends on a_3 dB (1+c^2). More specifically, they are inversely proportional. The finite BD limit is still minimized when c=1. We summarize these important results as follows.
The finite BD limit mostly depends on the aperture length D_ array. The larger D_ array is, the greater the distances at which finite-depth beamforming is achievable. The finite BD regime is largest for a uniform linear array (ULA).
In the following, we numerically compute the 3 dB BD of the rectangular array with respect to c. We consider both fixed area and length scenarios.
§.§ Fixed Array Aperture Area vs. Fixed Array Aperture Length
When varying the shape of the antenna elements through c, we can fix the array's aperture area A_ array = Nλ^2/32, while letting the array's aperture length D_ array = A_ array(1+c^2)/c vary. Alternatively, we can fix the aperture length D_ array = √(N)λ/4 so that the aperture area depends on c as c (D_ array)^2/(1+c^2). We will analyze both cases below.
Fig. <ref> depicts the 3 dB BD with respect to c for a fixed area.
The normalized array gain is computed numerically using (<ref>). The 3 dB BD is maximum when the array is square-shaped (c=1), i.e., around 400 d_F.
The BD pattern is symmetric for tall (c<1) and wide (c>1) arrays, which is aligned with Corollary <ref>. Note that as c becomes much smaller or much larger than one, the two-dimensional UPA gradually approaches the shape of a one-dimensional ULA. In fact, the ULA can be regarded as a special case of our generalized rectangular array model, achieved when the height approaches the aperture length: lim_c → 0D_ array/√(1+c^2) = D_ array. The smallest BD in Fig. <ref>, around 50 d_F, is obtained when the array approaches a ULA at c=0.1 and c=10.
Suppose we instead fix the array's aperture length so that the aperture area depends on c. The 3 dB BD for this case is also plotted in Fig. <ref> with respect to c.
Similarly to the results when we fix the aperture area, the 3 dB BD is maximum when the array is square-shaped (c=1), i.e., around 400 d_F; the BD pattern is symmetric for tall and wide arrays, and the smallest BD, around 250 d_F, is obtained when the array approaches a ULA at c=0.1 and c=10. It is worth noting that the gap between the largest and smallest 3 dB BD over c for a fixed aperture length is much smaller than in the fixed-area scenario. This implies that the array's aperture length has a more significant impact on the 3 dB BD than the array area.
The largest 3 dB BD is obtained with a square array (c=1). For tall (c<1) and wide (c>1) arrays, the BD patterns are symmetric. The smallest BD is obtained when the array approaches a ULA. The array's aperture length has a more significant impact on the 3 dB BD variation than the array's aperture area.
§ GAIN AND BEAM DEPTH OF RECTANGULAR ARRAYS WITH A NON-BROADSIDE TRANSMITTER
In this section, we consider rectangular UPAs with a transmitter located in a non-broadside direction. Consequently, there will be reductions in the effective area due to directivity, as well as polarization losses. The analysis is carried out by considering a non-zero azimuth angle, but the same methodology can also be applied to consider a non-zero elevation angle. When denoting the non-zero azimuth angle as φ, the location coordinate of the transmitter is (x_t,0,z), where x_t=d sin(φ), z = d cos(φ), and d represents the distance of the transmitter to the array's center, as shown in Fig. <ref>. We utilized the rectangular array design discussed in the preceding section.
We first evaluate the first-order Taylor approximation of the Euclidean distance between the transmitter and receiver when the transmitter is off the center of the array's broadside direction. The Euclidean distance determines the phase variations in each antenna element of the array. Therefore, an accurate approximation of the Euclidean distance results in an accurate approximation of the array gain. The Euclidean distance between the transmitter and an antenna element located at (x_n,y_m,0) is calculated as f_n,m(z) = √((x_n-x_t)^2 + y_m^2 +z^2) = z√(1+ (x_n-x_t)^2 + y_m^2/z^2). The first-order Taylor approximation of the Euclidean distance is f̃_n,m(z) = z (1 + ( x_n -x_t)^2 +y_m^2/2z). This approximation is tight when (x_n-x_t)^2 + y_m^2/z^2 is small (e.g., ≤ 0.1745) as mentioned in Section <ref>. However, since z= d cos(φ), the approximation will be significantly imprecise when φ approaches ±π/2. We can mathematically express the above case as lim_φ→±π/2(x_n-x_t)^2 + y_m^2/(d cos(φ))^2 = ∞. To evaluate this, we first calculate the mean absolute error of the antenna elements in the array as ϵ = 1/N·∑_n=1^√(N)∑_m=1^√(N)| f_n,m(z)- f̃_n,m(z)|, where the subscripts n,m ∈{1,…,√(N)} are the location indices of the antennas and z = d_B. Then, we plot ϵ with respect to φ demonstrated by the blue curve in Fig. <ref>, which we refer to as the direct approximation. The curve implies that the approximation is below the threshold 3.5 · 10^-3 for -π/16 ≤φ≤π/16. As φ approaches ±π/8, the approximation error exhibits a steep incline. We obtain a better approximation by first manipulating the expression of the Euclidean distance between the transmitter and receiver as
z√(1+ ((x-x_t)^2 + y^2)/z^2) = z √(1/cos^2(φ) + (x^2 + y^2)/( d cos(φ))^2 - 2x tan(φ)/(d cos(φ))) = d√(1 + (x^2+y^2-2xd sin(φ))/d^2) ≜ f_n,m(d),
where we substituted x_t = d sin(φ) and z = d cos(φ) and used 1 + tan^2(φ) = 1/cos^2(φ).
Then, we approximate it using the first-order Taylor approximation as
f̃_n,m(d) = d + x^2 + y^2 -2xd sin(φ)/2d.
Now, the approximation is tight when x^2+y^2-2xdsin(φ)/d^2 is small and there is no cos(φ)-term in the denominator in contrast to f̃_n,m(z). We refer to this approximation as the indirect approximation, since we manipulate the Euclidean distance expression before then applying the Taylor approximation. The corresponding mean absolute error is ϵ = 1/N·∑_n=1^√(N)∑_m=1^√(N)| f_n,m(d)- f̃_n,m(d)|. The new approximation is evaluated in Fig. <ref>, where it shows that f̃_n,m(d) (indirect approximation) is much more precise than f̃_n,m(z) (direct approximation). For the remainder of this section, we will use the indirect approximation to approximate the Euclidean distance between the transmitter and receiver.
We further evaluate the Euclidean distance approximation with respect to the distance d and width-to-height proportion c. Fig. <ref> shows that for a tall array (e.g., c=0.1), the mean absolute error ϵ is below 3.5 · 10^-3 even for relatively large values of φ, such as π/3, at distances smaller than 1000 d_F. As c increases, the approximation error also increases. On the other hand, as the propagation distance increases, the approximation error decreases.
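The two Taylor approximations are easy to compare directly. The sketch below (illustrative; it fixes z = d_B and considers the square array of the running example) computes the mean absolute error ϵ over all array elements for the direct approximation f̃_n,m(z) and the indirect approximation f̃_n,m(d) at several azimuth angles.

```python
import numpy as np

N, D, c = 10**4, 0.025, 1.0                     # square array of the running example
ell = D / np.sqrt(1 + c**2)                     # element height
w = c * ell                                     # element width
offs = np.arange(1, int(np.sqrt(N)) + 1) - (np.sqrt(N) + 1) / 2
x_n, y_m = np.meshgrid(offs * w, offs * ell)    # element positions on the array

z = 2 * D * np.sqrt(N)                          # Bjornson distance d_B = 5 m
for phi in [0.0, np.pi / 16, np.pi / 8, np.pi / 4, np.pi / 3]:
    d = z / np.cos(phi)                         # transmitter distance to the array centre
    x_t = d * np.sin(phi)
    exact = np.sqrt((x_n - x_t)**2 + y_m**2 + z**2)
    direct = z + ((x_n - x_t)**2 + y_m**2) / (2 * z)
    indirect = d + (x_n**2 + y_m**2 - 2 * x_n * d * np.sin(phi)) / (2 * d)
    print(f"phi = {phi:5.3f} rad:  eps_direct = {np.mean(np.abs(exact - direct)):.2e},"
          f"  eps_indirect = {np.mean(np.abs(exact - indirect)):.2e}")
```

At φ = 0 the two approximations coincide, while the direct approximation's error grows rapidly with the azimuth angle, in line with the discussion above.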
In the following, we will use the approximation f̃_n,m(d) to derive an analytical normalized array gain approximation. The Fresnel approximation for the rectangular array with a non-broadside transmitter is written as
E(x,y) ≈E_0/√(4 π) z e^-j 2 π/λ( d + x^2 + y^2 -2xd sin(φ)/2d).
We assume that the matched filtering of the array is directed towards the point (0, 0, F), which could potentially differ from the actual transmitter location (x_t,0,z). We utilize the Fresnel approximation in (<ref>) to calculate the normalized array gain by injecting the phase-shift e^+j 2π/λ(x^2/2F + y^2/2F) into the integrals in (<ref>).
When the transmitter is located at (x_t,0,z) and the matched filtering is focused on (0,0,F), the Fresnel approximation of the normalized array gain for the generalized
rectangular array becomes
G̃_rect(c,φ) = ( C^2(p) + S^2(p) )/( 2cp^2 )^2 ( ( C(cp+q) + C(cp-q) )^2 + ( S(cp+q) + S(cp-q) )^2 ),
where p=1/2√(d_FA/d_ eff(1+c^2)), q=x_t/d π√(2λd_ eff)=sin (φ)/π√(2λd_ eff), and d_ eff = F d/|F-d|.
The proof is provided in Appendix <ref>.
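For numerical evaluation, the gain in Theorem <ref> maps directly onto scipy.special.fresnel, which implements C(·) and S(·) with the cos(πt^2/2) convention. The sketch below assumes that convention; accordingly, q is taken in the dimensionless form sin(φ)√(2 d_eff/λ) implied by it, which may differ from the normalization printed above. The usage numbers are illustrative.

import numpy as np
from scipy.special import fresnel          # returns (S(x), C(x)) for the cos(pi t^2 / 2) convention

def gain_rect_offbroadside(c, d, F, phi, d_FA, lam):
    # Fresnel approximation of the normalized gain for a non-broadside transmitter.
    # F == d (perfect focus) needs a limiting treatment and is not handled here.
    d_eff = F * d / abs(F - d)
    p = 0.5 * np.sqrt(d_FA / (d_eff * (1 + c ** 2)))
    q = np.sin(phi) * np.sqrt(2 * d_eff / lam)   # dimensionless form (assumption, see lead-in)
    S_p, C_p = fresnel(p)
    S1, C1 = fresnel(c * p + q)
    S2, C2 = fresnel(c * p - q)
    return (C_p ** 2 + S_p ** 2) / (2 * c * p ** 2) ** 2 * ((C1 + C2) ** 2 + (S1 + S2) ** 2)

lam, D = 0.01, 0.25                            # illustrative wavelength and aperture length [m]
d_FA = 2 * D ** 2 / lam
print(gain_rect_offbroadside(c=1.0, d=0.6 * d_FA, F=0.5 * d_FA, phi=np.pi / 16, d_FA=d_FA, lam=lam))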
Fig. <ref> presents an evaluation of the accuracy of the analytical expression stated in Theorem <ref>. It is observed that the Fresnel approximation in Theorem <ref> closely approximates the normalized array gain. The normalized antenna gain is computed numerically using (<ref>), E(x,y) as in (<ref>), and an injected phase-shift.
Theorem <ref> provides a general statement about arrays with arbitrary transmitter orientations. This is in contrast to Theorem <ref>, in which the transmitter orientation is restricted to the broadside direction. Theorem <ref> can be derived as a special case of Theorem <ref>, proved as follows.
The normalized array gain approximation in (<ref>) reduces to the one in (<ref>) when φ=0, indicating a transmitter with a broadside direction to the receiver.
When φ = 0, it follows that sin(φ) =0 and cos(φ) =1. Thus, x_t =0 while z=d. Then, p=√(d_FA/4d_ eff(1+c^2)) is exactly the same expression as a. Moreover, since x_t=0, q=0. Substituting p = a and q=0 into (<ref>) yields (<ref>) which completes the proof.
The array gain does not depend on the azimuth angle φ when c → 0 (a vertical ULA). Therefore, the BD is independent of φ. We prove this statement as follows.
For c → 0, the normalized array gain approximation is independent of the azimuth angle φ. Thus, the BD is also independent of φ.
For c → 0, we can mathematically express G̃_ rect(c,φ) in (<ref>) as
lim_c→ 0( C^2( p )+S^2( p ) )/( 2cp^2)^2(( C( cp+q)+C( cp-q) )^2 + ( S( cp+q)+S( cp-q) )^2).
=lim_c→ 0( C^2( p )+S^2( p ) ) ×lim_c→ 0(( C( cp+q)+C( cp-q) )^2 + ( S( cp+q)+S( cp-q) )^2)/( 2cp^2)^2.
We obtain the result of the first limit as C^2(√(d_FA/4 d_ eff)) + S^2(√(d_FA/4 d_ eff)), which is independent of φ. To compute the second limit, we first consider lim_c→ 0 C(cp+q) - lim_c→ 0 C(-cp+q) = Δ, where Δ is a very small value approaching zero. The same idea applies to lim_c→ 0 S(cp+q) - lim_c→ 0 S(-cp+q)=Δ.
Due to the odd function property of the Fresnel integrals, we can rewrite the second limit in (<ref>) as
lim_c→ 0( C( cp+q)-C( -cp+q) )^2 + ( S( cp+q)-S( -cp+q) )^2/( 2cp^2)^2
=
(lim_c→ 0 C( cp+q)- lim_c→ 0 C( -cp+q) )^2 + ( lim_c→ 0 S( cp+q)- lim_c→ 0 S( -cp+q) )^2/(d_FA/2 d_ efflim_c→ 0c/(1+c^2))^2
≈Δ^2 + Δ^2/Δ^2 (2d_ eff/ d_FA)^2 = 8(d_ eff/d_FA)^2.
Note that (<ref>) is also independent of φ. Therefore, (<ref>) is independent of φ which completes the proof.
The reason for this result is that a vertical ULA with isotropic antennas has the same spatial resolution in all directions, as can also be observed from rotational invariance. A similar result can be proved for a horizontal ULA, which provides the same BD for any elevation angle if the azimuth angle is φ=0.
In what follows, we conduct a numerical analysis to compute the 3 dB BD of the rectangular array in the presence of a non-broadside transmitter with respect to its width/length proportion c. We consider both fixed area and length scenarios.
§.§ Fixed Array Aperture Area vs. Fixed Array Aperture Length
As mentioned in Section <ref>, the array's aperture area can be kept constant at A_ array = Nλ^2/32 while adjusting the shape of antenna elements through c.
The aperture length then varies with c as D_ array = √(A_ array(1+c^2)/c).
Alternatively, fixing D_ array = √(N)λ/4 makes the aperture area A_ array =c(D_ array)^2/(1+c^2) dependent on c. Both cases are analyzed below.
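The two parameterizations can be kept straight with two small helper functions; the antenna count and wavelength below are illustrative assumptions.

import numpy as np

def aperture_length_fixed_area(A_array, c):
    # D_array for a fixed aperture area: D = sqrt(A (1 + c^2) / c); symmetric in c <-> 1/c
    return np.sqrt(A_array * (1 + c ** 2) / c)

def aperture_area_fixed_length(D_array, c):
    # A_array for a fixed aperture length: A = c D^2 / (1 + c^2)
    return c * D_array ** 2 / (1 + c ** 2)

N, lam = 1024, 0.01                      # illustrative values
A_fixed = N * lam ** 2 / 32
D_fixed = np.sqrt(N) * lam / 4
for c in (0.1, 1.0, 10.0):
    print(c, aperture_length_fixed_area(A_fixed, c), aperture_area_fixed_length(D_fixed, c))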
In Fig. <ref>a, we plot the 3 dB BD for various rectangular array shapes with a fixed area and a transmitter located in some angular direction in the range -3 π/8 ≤φ≤ 3 π/8. Let us consider a BD variation, defined as the difference between the largest and smallest BD values with respect to φ, for a particular value of c. The BD variation becomes greater as c increases. When c → 0, the BD variation approaches zero, implying that the BD is unaffected by the azimuth angle φ. This finding agrees with Corollary <ref>. For a square array (c=1), the BD is around 550 d_F for φ= ± 3π/8. In the case of higher c (e.g., c=10), the BD becomes much larger than 600 d_F. The latter indicates that the BD approaches infinity when φ→±π/2. To illustrate this better, we plot Fig. <ref>, which shows the normalized antenna gains for a particular value of c with respect to propagation distances and azimuth angles. Fig. <ref>c shows a larger variation in normalized array gain compared to Figs. <ref>a and b. Furthermore, the BD for φ= ±14 π/32 in Fig. <ref>c is infinite when d ≥ d_B, where the normalized array gain is around 0.3, similar to its peak.
Let us now consider the scenario where the array's aperture length is fixed, but it depends on the value of c. The 3 dB BD for various rectangular array shapes is shown in Fig. <ref>b. Similar to the previous scenario with a fixed aperture area, we observe that the variation in BD increases as c increases, and the BD becomes independent of the azimuth angle φ as c approaches zero. Furthermore, we observe that the highest BD values for arrays with c=0.1 and c=1 differ by approximately a factor of two, as the former has a value of around 550 d_F while the latter has a value of around 260 d_F. This is in contrast to the scenario with a fixed array area, where the highest BD values for arrays with c=0.1 and c=1 differ by approximately a factor of eleven as the BD value of an array with c=0.1 is around 560 d_F while the one with c=1 is around 50 d_F. This indicates that the aperture length is a dominant factor determining the BD.
As c approaches zero, so that the array approaches a vertical ULA, the BD becomes independent of the azimuth angles. For larger values of c, the BD increases with increasing azimuth angle φ, and approaches infinity as φ approaches ±π/2. Finally, as the value of c increases, the BD variation also increases.
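Curves of this kind can be reproduced by sweeping the gain over the propagation distance and reading off the interval where it stays above 0.5. The sketch below does this for the broadside case for simplicity (the non-broadside function from the earlier sketch can be substituted); it assumes the scipy Fresnel convention, a single main lobe in the depth domain, and illustrative values for the wavelength, aperture, and focal distance.

import numpy as np
from scipy.special import fresnel              # returns (S(x), C(x))

def gain_rect_broadside(c, d, F, d_FA):
    z_eff = F * d / (np.abs(F - d) + 1e-12)    # small epsilon avoids the d = F singularity
    a = d_FA / (4 * z_eff * (1 + c ** 2))
    S1, C1 = fresnel(c * np.sqrt(a))
    S2, C2 = fresnel(np.sqrt(a))
    return (C1 ** 2 + S1 ** 2) * (C2 ** 2 + S2 ** 2) / (c * a) ** 2

lam, D, c = 0.01, 0.25, 1.0                    # illustrative values
d_FA = 2 * D ** 2 / lam
F = 0.05 * d_FA                                # assumed focal distance
d_grid = np.linspace(0.005 * d_FA, 2 * d_FA, 200001)
G = gain_rect_broadside(c, d_grid, F, d_FA)
inside = d_grid[G >= 0.5]
if inside.size and inside[-1] < d_grid[-1]:
    print("3 dB BD ≈", (inside[-1] - inside[0]) / d_FA, "d_FA")
else:
    print("BD is unbounded on this grid")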
§.§ Projected Rectangular Array
In this section, we investigate the feasibility of approximating the gain that a rectangular array achieves with a non-broadside transmitter using a smaller array with a broadside transmitter. The smaller array is obtained by projecting the original array at an angle of φ. The projected array is perpendicular to the transmitter's axis, as shown in Fig. <ref>. Notably, the width of the projected array appears shorter by a factor of cos(φ), while the array's height remains unchanged.
To evaluate the tightness of the approximation, we plot the normalized array gain versus the propagation distance in Fig. <ref>a. We can see that the approximation matches the exact gain of an array with a non-broadside transmitter when the propagation distance d> 200 d_F, while they are slightly different for 100 d_F≤ d< 200 d_F. As shown in Fig. <ref>b, when evaluating the tightness of the approximation at φ=π/4, a similar pattern is observed as in the evaluation at φ=π/8. Although there exists a minor discrepancy between the exact gain and the approximation, the two values are still closely matched.
We further evaluate the approximation with respect to the azimuth angle φ by plotting the absolute difference between the exact and approximated gains in Fig. <ref>c. The approximation error increases with φ. Nevertheless, the error is below 0.1, which is relatively small. This indicates that the approximation is accurate; thus, we can view an array with a non-broadside transmitter as a projected array with a broadside transmitter.
Another notable observation is that as the azimuth angle φ approaches π/2 (end-fire direction), the aperture length of the projected array, denoted as D_ array^ proj, approaches the height of the original array, which is ℓ. Mathematically, we can express it as lim_φ→π/2 √(ℓ^2 + (w·cos(φ))^2) = ℓ. If c →∞ (corresponding to a horizontal ULA), then ℓ→ 0. According to Theorem <ref>, in this scenario the 3 dB BD will tend towards infinity because the denominator shrinks much faster than the numerator, due to the dominant factor d_FA^2 = (2 (D_ array^ proj)^2/λ)^2. This observation is consistent with the findings illustrated in Fig. <ref>, which shows that the BD increases when φ→π/2, and tends towards infinity when φ approaches π/2 and c is much larger than 1.
We summarize the findings as follows.
We can approximate the gain of a rectangular array with a non-broadside transmitter using a smaller/projected array with a broadside transmitter. Moreover, as the transmitter approaches the end-fire direction of the array, the BD grows and approaches infinity as c becomes much greater than 1.
§ GAIN AND BEAM DEPTH OF A CIRCULAR ARRAY
In this section, we consider a UPA with a circular shape. The antenna elements are uniformly distributed across the array. Although there are numerous ways to distribute the antenna elements in a circular array evenly, we simplify things by assuming that each element is sufficiently small to ensure that the particular configuration of the elements has little impact on the array's overall gain. This presumption relies on the idea that the antenna elements in the array will radiate in a roughly symmetrical way. The Fresnel approximation of the normalized array gain for a circular array with a transmitter located at (0,0,z) and the focus point at (0,0,F) is computed using (<ref>) with E(x,y) in (<ref>) and injecting the phase-shift e^{+j 2π/λ(x^2/(2F) + y^2/(2F))}. Due to the circular geometry of the array, we adopt a polar coordinate system in the following analysis. In particular, the polar coordinates enable us to simplify the derivations of the Fresnel approximation.
§.§ Gain and 3 dB Beam Depth
We will now analyze the normalized array gain and BD of the circular array. We assume that the phase shifts are perfectly compensated by continuous matched filtering.
When we inject a phase shift to focus the array on a certain propagation distance, the normalized array gain can be approximated using the Fresnel approximation, given in the following theorem.
When the transmitter is located at (0, 0, z) and the matched filtering is focused on (0, 0, F), the normalized array gain for the circular array can be approximated using the Fresnel approximation as
Ĝ_circ=sinc^2 (c̃),
where c̃ = R^2/2λ z_ eff and
R = D/2 = √(N)λ/8
is the radius of the circular array, where the diameter D matches the aperture length of the rectangular array.
The proof is provided in Appendix <ref> and is inspired by <cit.>.
The 3 dB BD for the circular array when focusing on a point F≥ d_B can then be calculated based on (<ref>).
The 3 dB BD of a circular array with a matched filter focused on (0,0,F), where F ≥ d_B, is computed as
BD_3dB^circ = 1.7720 R^2 F^2 λ / ( R^4 - (0.886 λ F)^2 ),   if F < R^2/(0.886 λ),
BD_3dB^circ = ∞,   if F ≥ R^2/(0.886 λ).
We know that sinc^2(0)=1 and sinc^2(0.4430) ≈ 0.5. The 3 dB BD BD_3dB^circ is defined as the range of distances where the gain is greater than 0.5; therefore, it is the difference of the two possible values of z, i.e., R^2 F/(R^2 - 0.886 λ F) - R^2 F/(R^2 + 0.886 λ F), which results in (<ref>).
The finite BD limit for the circular array is R^2/0.886 λ, since the BD goes to infinity when the focus is on a point larger than the finite BD limit.
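The corollary translates into a one-line function; the radius, wavelength, and focal points below are illustrative assumptions.

import numpy as np

def bd_circular(R, F, lam):
    # 3 dB BD of the circular array; infinite beyond the finite BD limit R^2 / (0.886 lam)
    if F >= R ** 2 / (0.886 * lam):
        return np.inf
    return 1.7720 * R ** 2 * F ** 2 * lam / (R ** 4 - (0.886 * lam * F) ** 2)

lam, R = 0.01, 0.125                           # illustrative values [m]
print("finite BD limit:", R ** 2 / (0.886 * lam), "m")
for F in (0.5, 1.0, 1.5, 2.0):
    print(F, bd_circular(R, F, lam))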
We evaluate the normalized array gain of the circular array in Fig. <ref>.
We can see that we approach -3 dB as z →∞, and hence the finite BD limit R^2/(0.886 λ) is the largest focal distance for which the BD is finite.
If we adjust the focus to F=d_B=2(2R) and keep the array's aperture length the same as in the case of a rectangular array (i.e., R=D/2), then we obtain the BD for the circular array as BD_3 dB^ circ≈ 247 d_F, according to Theorem <ref>. The BD is slightly higher as compared to the BD_3 dB^ rect≈ 244 d_F in Theorem <ref> for tall c=0.1 or wide c=10 rectangular arrays.
§.§ Nulls and Side-lobes in the Depth Domain
When beams are analyzed in the angular domain, they are known to consist of a main beam and multiple weaker side-lobes, with nulls in between. The same behavior appears in the depth domain when considering finite-depth beamforming. We will now investigate the nulls and side-lobes for the circular array based on the approximate gain in (<ref>). The peak of the main beam in the depth domain appears when c̃=0 so that (sin(c̃)/c̃)^2=1. Since c̃ = π R^2/(2λ z_eff), it equals 0 when z_eff=∞, which implies that F=z. Therefore, the peak of the main beam is obtained at the distance F=z. The first null appears when c̃=π. Hence, z_eff = R^2/(2λ). The second and later nulls appear when c̃=2π, …, Kπ. Therefore, the nulls can be obtained by setting z_eff = R^2/(2 k_0 λ), where k_0 ∈{1,…,K} indicates the index of the nulls. The peak values of the side-lobes are given in Table <ref>. We plot Fig. <ref> to demonstrate the nulls and side-lobes of the circular array. The characterization of the nulls and beamdepths in Table <ref> matches the nulls and beamdepths in Fig. <ref>.
§ CONCLUSION
In this paper, we analyzed how the BD of a large planar rectangular array depends on its width-to-height proportion, area, and aperture length.
We derived a tight Fresnel approximation of the normalized array gain and used it to characterize the BD of the rectangular array.
We proved that the 3 dB BD is the largest for a square array. The BD pattern is symmetric for tall and wide arrays. The smallest BD is obtained when the array approaches a ULA. Therefore, an array of linear geometry is preferred since it offers the smallest BD, which enables better spatial multiplexing thanks to a higher degree of spatial orthogonality in the distance domain. The array's aperture length has a more significant impact on the BD and the finite BD limit than the aperture area. We also considered a projected square array, whose effective aperture length is shorter than that of the original square array due to the rotation.
We then considered a non-broadside transmitter, for which we first evaluated the Taylor approximation of the Euclidean distance between the transmitter and the antenna elements of the array. We refined the approximation so that the approximation error decreases as the propagation distance increases.
When the width-to-height proportion approaches zero, the BD becomes independent of the azimuth angles. For larger width-to-height proportions, the BD increases with increasing azimuth angle. In addition, as the value of width-to-height proportion c increases, the BD variation also increases. Furthermore, we demonstrated that the beam pattern of an array with a non-broadside transmitter can be approximated by that of a projected/smaller array with a broadside transmitter. The original array was projected onto a plane that is perpendicular to the axis of the non-broadside transmitter.
Finally, we considered a UPA with a circular shape. We derived the analytical Fresnel approximation of the normalized array gain and characterized the 3 dB BD. The BD of the circular array is slightly larger compared to that of the wide/tall rectangular array, but lower than the square array. Our analysis also involved characterizing the nulls and side lobes of the circular array that emerge at different propagation distances.
§ PROOF OF THEOREM <REF>
First, we substitute the Fresnel approximation in (<ref>) into the normalized array gain in (<ref>) and inject phase-shift e^+j 2π/λ(x^2/2F + y^2/2F), which gives us
Ĝ_ rect(c) =1/(N A)^2
| ∫_-D_ array/2√(1+c^2)^D_ array/2√(1+c^2)∫_-cD_ array/2√(1+c^2)^cD_ array/2√(1+c^2)
e^+j 2π/λ(x^2/2F + y^2/2F)e^-j2π/λ( z+x^2/2z+y^2/2z)dxdy |^2
=1/(N A)^2| ∫_-D_ array/2√(1+c^2)^D_ array/2√(1+c^2)∫_-cD_ array/2√(1+c^2)^cD_ array/2√(1+c^2)e^-j2π/λ( F-z )( x^2+y^2)/2zFdxdy |^2.
By defining z_ eff = Fz/F-z, we can rewrite the expression as
Ĝ_ rect(c) =1/(N A)^2| ∫_-D_ array/2√(1+c^2)^D_ array/2√(1+c^2)∫_-cD_ array/2√(1+c^2)^cD_ array/2√(1+c^2)e^-jπ/λx^2/z_ effe^-jπ/λy^2/z_ effdxdy |^2.
The evaluation of the anti-derivatives in (<ref>) yields <cit.>
Ĝ_rect(c) = ( 4 z_eff(1+c^2)/(c d_FA) )^2 ( C^2( c√(d_FA/(4z_eff(1+c^2))) ) + S^2( c√(d_FA/(4z_eff(1+c^2))) ) ) ( C^2( √(d_FA/(4z_eff(1+c^2))) ) + S^2( √(d_FA/(4z_eff(1+c^2))) ) ),
where C(·) and S(·) are the Fresnel integrals. By defining a = d_FA/(4z_eff(1+c^2)), we finally obtain
Ĝ_ rect(c) =
( C^2( c√( a ))+S^2( c√( a )) ) ( C^2( √( a ))+S^2( √( a )) )/( ca )^2.
This completes the proof.
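The closed-form expression can be sanity-checked against a direct numerical evaluation of the aperture integral; since the Fresnel-approximation integrand separates in x and y, two one-dimensional sums suffice. The parameters are illustrative, and C(·), S(·) are again assumed to follow the scipy convention.

import numpy as np
from scipy.special import fresnel

def gain_closed_form(c, z_eff, d_FA):
    a = d_FA / (4 * z_eff * (1 + c ** 2))
    S1, C1 = fresnel(c * np.sqrt(a))
    S2, C2 = fresnel(np.sqrt(a))
    return (C1 ** 2 + S1 ** 2) * (C2 ** 2 + S2 ** 2) / (c * a) ** 2

def gain_numeric(c, z_eff, D, lam, n=4001):
    # Direct Riemann-sum evaluation of the separable Fresnel-approximation integral
    half_h = D / (2 * np.sqrt(1 + c ** 2))     # half height of the array
    half_w = c * half_h                        # half width of the array
    x = np.linspace(-half_w, half_w, n)
    y = np.linspace(-half_h, half_h, n)
    Ix = np.sum(np.exp(-1j * np.pi * x ** 2 / (lam * z_eff))) * (x[1] - x[0])
    Iy = np.sum(np.exp(-1j * np.pi * y ** 2 / (lam * z_eff))) * (y[1] - y[0])
    return np.abs(Ix * Iy) ** 2 / (4 * half_w * half_h) ** 2

lam, D, c = 0.01, 0.25, 0.5                    # illustrative values
d_FA = 2 * D ** 2 / lam
for z_eff in (0.05 * d_FA, 0.2 * d_FA, d_FA):
    print(gain_closed_form(c, z_eff, d_FA), gain_numeric(c, z_eff, D, lam))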
§ PROOF OF THEOREM <REF>
For a transmitter located at (x_t,0,z), we use the Fresnel approximation based on (<ref>), as
E(x,y) ≈E_0/√(4 π) z e^-j 2 π/λ( d + x^2 + y^2 -2x x_t/2d).
Substituting (<ref>) into the normalized array gain in (<ref>) and inject phase-shift e^+j 2π/λ(x^2/2F + y^2/2F) yields
G̃_ rect(c,φ) =1/(N A)^2| ∫_-D_ array/2√(1+c^2)^D_ array/2√(1+c^2)∫_-cD_ array/2√(1+c^2)^cD_ array/2√(1+c^2)e^j2π/λ( x^2/2F+y^2/2F)
e^-j2π/λ( d + x^2 + y^2 -2x x_t/2d)dxdy |^2
=1/(N A)^2| ∫_-D_ array/2√(1+c^2)^D_ array/2√(1+c^2)∫_-cD_ array/2√(1+c^2)^cD_ array/2√(1+c^2)e^-jπ/λ( x^2/d_ eff+2x x_t/d)e^-jπ/λ( y^2/d_ eff)
dx dy |^2
=( λd_ eff( 1+c^2) /π cND^2)^2|∫_-√(π/λd_ eff)D_ array/2√(1+c^2)^√(π/λd_ eff)D_ array/2√(1+c^2)∫_-√(π/λd_ eff)cD_ array/2√(1+c^2)+x_t/d√(λd_ eff/π)^√(π/λd_ eff)cD_ array/2√(1+c^2)+x_t/d√(λd_ eff/π)e^-j u ^2e^-j v ^2du dv |^2,
where u=√(π/λd_ eff) x+x_t/d√(λd_ eff/π), v=√(π/λd_ eff) y, and d_ eff = Fd/|F-d|. By using the Euler formula, i.e., e^jx = cos(x)+j sin(x), we can compute (<ref>) and get
G̃_ rect(c,φ)= ( 1/2cp^2)^2| ( ( C( cp+q )-C( -cp+q ) )-
j( S( cp+q )-S( -cp+q ) ) )
( ( C( p )-C( -p ) )-j( S( p )-S( -p ) ) ) |^2,
where p=D√(N/2λd_ eff(1+c^2)) and q=x_t/dπ√(2λd_ eff) =sin(φ)/π√(2λd_ eff).
Since D = √(λ d_F/2) and C(-v)=-C(v) due to the odd function <cit.>, we can rewrite (<ref>) as
G̃_ rect(c,φ) =( 1/2cp^2)^2| ( C( cp+q )+C( cp-q )-
j( S( cp+q )+S( cp-q ) ) ) ( C( p )-jS( p ) ) |^2
=( C^2( p )+S^2( p ) )/( 2cp^2)^2( ( C( cp+q )+C( cp-q ) )^2
+( S( cp+q )+S( cp-q ) )^2),
where p=1/2√(d_FA/d_ eff(1+c^2)), which completes the proof.
§ PROOF OF THEOREM <REF>
We first substitute the Fresnel approximation in (<ref>) into the normalized array gain in (<ref>) and inject phase-shift e^+j 2π/λ(x^2/2F + y^2/2F). Then, we use a polar coordinate system to represent the multiple integrations in (<ref>), explained in <cit.>. We can write the Fresnel approximation of the normalized array gain for the circular array as
G̃_circ =1/( π R^2 )^2| ∫_0^2π∫_0^Rr e^j2π/λ( F+r^2/2F)e^-j2π/λ( z+r^2/2z)drdφ|^2
=1/( π R^2 )^2| 2π∫_0^Rr e^-jπ/λr^2/z_ eff dr |^2,
where z_ eff = Fz/|F-z|.
Substituting u=-jπr^2/λz_ eff, we can rewrite G̃_circ as
G̃_circ =( 1/πR^2)^2| jλz_ eff∫_0^-jπR^2/λz_ effe^udu|^2
=( 1/πR^2)^2| λz_ eff( sin( πR^2/λz_ eff)+( cos( πR^2/λz_ eff)-1 )j ) |^2
=( λz_ eff/πR^2)^2( 2-2cos( πR^2/λz_ eff) )
=( sin (c̃)/c̃)^2,
where c̃= πR^2/2λz_ eff. This completes the proof.
|
http://arxiv.org/abs/2306.06138v1
|
20230609055311
|
Extraction and Recovery of Spatio-Temporal Structure in Latent Dynamics Alignment with Diffusion Model
|
[
"Yule Wang",
"Zijing Wu",
"Chengrui Li",
"Anqi Wu"
] |
q-bio.NC
|
[
"q-bio.NC",
"cs.LG"
] |
Extraction and Recovery of Spatio-Temporal Structure in Latent Dynamics Alignment with Diffusion Model
Yule Wang
Georgia Institute of Technology
Zijing Wu
Georgia Institute of Technology
Chengrui Li
Georgia Institute of Technology
Anqi Wu
Georgia Institute of Technology
July 31, 2023
In the field of behavior-related brain computation, it is necessary to align raw neural population activities against the drastic distribution shift among them. However, the alignment is non-trivial since neural population activities are multivariate time series.
An instrumental framework within neuroscience research posits that trial-based neural population activities rely on low-dimensional latent dynamics. Focusing on such latent dynamics greatly facilitates the alignment procedure. Despite considerable progress, existing methods usually ignore the intrinsic spatio-temporal structure within latent dynamics. Thus, those solutions lead to poor quality of the dynamics structure and degraded overall performance after alignment.
To tackle this problem, we propose a method leveraging the expressiveness of diffusion model to relieve such issues. Specifically, the latent dynamics structures of the source domain are first extracted by the diffusion model. Then, such structures are well-recovered through a maximum likelihood alignment procedure on the target domain.
We first demonstrate the effectiveness of our proposed method on a synthetic dataset. Then, when applied to neural recordings from primate motor cortex, under both cross-day and inter-subject settings, our method consistently manifests its capability of preserving the spatio-temporal structure of latent dynamics and outperforms existing approaches in alignment quality. [Our code is available at: <https://github.com/alexwangNTL/ERDiff>.]
§ INTRODUCTION
A key challenge severely impeding the scalability of behavior-related neural computational applications is their lack of robustness to the distribution shift of neural recordings over time and subjects <cit.>. A behavior model trained on previous neural recordings (e.g., a velocity predictor for a human with paralysis <cit.>) usually suffers performance degradation when applied to new neural recordings due to the neural distribution shift <cit.>. Thus, for long-term usability and stable performance of the trained behavior model, high-quality alignment between the neural recordings used for training (i.e., source domain) and new recordings for testing (i.e., target domain) is of vital importance.
Distribution alignment is an important task at the heart of unsupervised transfer learning <cit.>. The goal is to align the target domain to the source domain so that the trained model in the source domain can be directly applied to the target domain after eliminating the distribution shift. However, due to issues such as instabilities and low signal-to-noise ratio <cit.>, raw neural activities are noisy and ambiguous <cit.>, causing difficulties in aligning the distributions of high-dimensional neural activities directly.
One promising research direction <cit.> points out that the trial-based neural activities can always be understood in terms of low-dimensional latent dynamics <cit.>. Such latent dynamics manifest coordinated patterns of evolution constrained to certain "neural manifolds" <cit.>. Hence, early studies focusing on the alignment of latent dynamics reach comparably satisfactory results <cit.>. Generally, most methods <cit.> are based on a pre-defined metric for optimization during latent dynamics alignment, i.e., minimizing the difference evaluated by the metric, between source and target domains in the low-dimensional latent space. However, those metrics are usually non-parametric and handcrafted, which are not guaranteed to suit specific neural recordings or problems well. Adversarial-based methods <cit.> thus have been introduced since they can implicitly find an adapted metric <cit.>. However, they suffer from mode collapse issues <cit.>.
Moreover, during the latent dynamics alignment process, the above-mentioned works lack the necessary awareness of the latent dynamics structure, especially when aligning long and complex trials. Through an empirical study on the motor cortex of a non-human primate <cit.> (shown in Fig. <ref>), we observe that a state-of-the-art alignment method, JSDM <cit.> (minimizing the symmetric Jensen–Shannon divergence between distributions), fails to recover the latent dynamics structure of the source domain since JSDM neglects that structure during alignment. From another perspective, in the alignment phase, existing methods like JSDM treat each time bin within latent dynamics and each latent dimension as mutually independent, failing to leverage the valuable spatio-temporal structure information in neural activities.
In this paper, we focus on preserving the temporal evolution of each individual latent dimension and the spatial covariation between latent dimensions of the source domain after alignment. The main idea is that we first extract the spatio-temporal structure of latent dynamics from the source domain; and then, we align the target domain by recovering the source domain's underlying structure. In this manner, the source-domain spatio-temporal structure of latent dynamics can be largely preserved in the target domain after alignment. However, such a workflow is non-trivial because the underlying spatio-temporal structure is both non-linear and complex.
Figure: Empirical Study. Latent dynamics (3D visualization) of the source domain and the aligned target domain by JSDM on a primary motor cortex dataset.
To tackle this problem, we propose a novel alignment method that is capable of Extracting and Recovering the latent dynamics structure with a Diffusion model (ERDiff).
Firstly, given the source-domain neural observations, we use a diffusion model (DM) to extract the spatio-temporal structure of latent dynamics. Then, in the alignment phase, we propose a maximum likelihood alignment procedure with the guidance of the DM, through which the spatio-temporal structure of source-domain latent dynamics can be recovered well in the target domain.
The proposed extract-and-recover method nicely encodes and preserves the spatio-temporal structure of latent dynamics, which are significant inductive biases for latent dynamics alignment. Furthermore, from the perspective of machine learning, ERDiff introduces a way of extracting structure knowledge from one distribution and imposing it as prior to constrain the alignment of another distribution. Note that although we have been emphasizing extraction and recovery of the source-domain structure, ERDiff is not performing a copy-and-paste of the source domain to the target domain. As ERDiff preserves the dynamics structure of the source domain, it also maintains the original characteristics of the target domain. We present experimental results to support this argument. Finally, we conduct extensive experiments to verify the effectiveness of ERDiff on a synthetic dataset and a real-world dataset of non-human primate motor cortex <cit.>. Compared to baseline methods, ERDiff can obtain alignment solutions that are much closer to the source domain. Visualization of latent dynamics also demonstrates that ERDiff is capable of preserving the spatio-temporal structure consistently during alignment.
§ PRELIMINARY
Distribution Alignment.
We denote the source-domain observations of single-trial neural population activities as 𝐗^(s)=[𝐱^(s)_1, …, 𝐱^(s)_l]^⊤∈ℝ^l × n, where l is the trial length (i.e., number of time bins), and n is the number of observed neurons. We denote its low-dimensional latent dynamics as 𝐙^(s)=[𝐳^(s)_1, …, 𝐳^(s)_l]^⊤∈ℝ^l × d, where d is the latent dimension size. Generally, we build a variational autoencoder (VAE) to estimate the latent 𝐙^(s) given the observations 𝐗^(s). VAE consists of an encoder q(𝐙^(s)|𝐗^(s);ϕ_s) and a decoder p(𝐗^(s)|𝐙^(s), ψ_s). ϕ_s and ψ_s are the parameters of the encoder and decoder. The encoder also serves as an approximated posterior distribution to the intractable true posterior p(𝐙^(s)|𝐗^(s)).
Then in the target domain, given the neural population activities 𝐗^(t)=[𝐱^(t)_1, …, 𝐱^(t)_l]^⊤∈ℝ^l × n, we perform distribution alignment by fine-tuning the probabilistic encoder q(𝐙|𝐗;ϕ).
This alignment phase is conducted by minimizing certain probability divergence 𝔻(·|·) between the two posterior distributions:
min_ϕ_t 𝔻( q(𝐙^(s)|𝐗^(s);ϕ_s) ‖ q(𝐙^(t)|𝐗^(t);ϕ_t) ).
Diffusion Model (DM). Given l × d-dimensional i.i.d. samples 𝐙 from an unknown data distribution p, a DM <cit.> aims to approximate such distribution by fitting the parameters of a neural network p_θ(𝐙). DM is composed of a forward process followed by a reverse process. In the forward process, isotropic Gaussian noise is added to diffuse the original data, which can be defined in a linear stochastic differential equation (SDE) form:
d𝐙=f(𝐙, t) d t+g(t) d𝐰,
where f(·): ℝ^l × d×ℝ↦ℝ^l × d is the drift coefficient, g(·): ℝ↦ℝ is the diffusion coefficient, and 𝐰 is the standard Wiener process. The solution of the SDE is a diffusion process {𝐙_t}_t ∈[0, T], in which [0, T] is a fixed time zone. In this paper, we implement them with VP-SDE <cit.>. {𝐙_t}_t ∈[0, T] approaches the standard normal prior distribution π(𝐙) when t=T. Under mild conditions on drift and diffusion coefficients <cit.>, the denoising reverse process can be solved in the following closed-form SDE:
d𝐙=[f(𝐙, t)-g(t)^2 ∇_𝐙log p_t(𝐙)] d t+g(t) d𝐰,
where ∇_𝐙log p_t(𝐙) is the score function, and 𝐰 is a reverse-time Wiener process. We train a parameterized network s(𝐙, t; θ) to fit the score function ∇_𝐙log p_t(𝐙). However, ∇_𝐙log p_t(𝐙) is not accessible and we resort to the denoising score matching (DSM) <cit.> for optimization:
ℒ_DSM(θ) = 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ p, p_0t(𝐙_t |𝐙_0)[ λ(t)^2 ‖∇_𝐙_tlog p_0 t(𝐙_t |𝐙_0) - s(𝐙_t, t; θ) ‖_2^2 ],
where 𝒰 represents the uniform distribution and λ(t) is the weighting function. Under VP-SDE, the transition probability p_0t(𝐙_t |𝐙_0) also follows a Gaussian distribution 𝒩 (μ_t 𝐙_0, Σ_t), where μ_t, Σ_t ∈ℝ^l × d. On the other hand, according to <cit.>, we can define a noise function with the score function as ϵ(𝐙_t, t; θ)=- K_t^-Ts(𝐙_t, t; θ), in which K_t K_t^T=Σ_t. Invoking both the above two expressions, we can thus formulate the form of DSM loss based on noise residual:
ℒ_DSM(θ) = 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ p, ϵ∼𝒩(0, I_l × d)[ w(t)^2 ‖ϵ - ϵ(𝐙_t, t; θ) ‖_2^2 ],
in which w(t) = K_t λ(t) and 𝐙_t = μ_t 𝐙_0 + K_t ϵ.
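As a concrete reference for this noise-residual objective, the following PyTorch sketch perturbs a batch of latent trials under a discretized VP-SDE and evaluates the DSM loss. The linear beta schedule, the constant weighting w(t)=1, and the tiny MLP denoiser are illustrative assumptions; ERDiff's actual denoiser (the STBlock stack) is described in the next section.

import torch, torch.nn as nn

T = 1000                                            # number of discretization steps (assumption)
betas = torch.linspace(1e-4, 0.02, T)               # illustrative linear schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)       # cumulative product for the VP-SDE discretization

def dsm_loss(denoiser, z0):
    # z0: (batch, l, d) latent dynamics; the denoiser predicts the injected noise.
    b = z0.shape[0]
    t = torch.randint(0, T, (b,))
    ab = alpha_bar[t].view(b, 1, 1)
    eps = torch.randn_like(z0)
    zt = ab.sqrt() * z0 + (1 - ab).sqrt() * eps      # perturbed latents z_t
    return ((eps - denoiser(zt, t)) ** 2).mean()     # noise residual with w(t) = 1

class ToyDenoiser(nn.Module):
    # Stand-in denoiser only to make the sketch runnable
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d + 1, 64), nn.ELU(), nn.Linear(64, d))
    def forward(self, zt, t):
        tt = t.float().view(-1, 1, 1).expand(-1, zt.shape[1], 1) / T
        return self.net(torch.cat([zt, tt], dim=-1))

loss = dsm_loss(ToyDenoiser(d=8), torch.randn(16, 39, 8))
print(loss.item())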
§ METHODOLOGY
In this section, we propose ERDiff, a novel latent dynamics alignment method with DM.
§.§ Maximum Likelihood Alignment
Given the source-domain neural activities 𝐗^(s), we infer their latent dynamics 𝐙^(s) by building a VAE. We use variational inference to find the probabilistic encoder q(𝐙^(s)|𝐗^(s);ϕ_s) and probabilistic decoder p(𝐗^(s)|𝐙^(s); ψ_s) through the optimization of the evidence lower bound (ELBO) <cit.>:
ϕ_s, ψ_s = argmax_ϕ, ψ[ 𝔼_q(𝐙^(s)|𝐗^(s);ϕ)[log p(𝐗^(s)|𝐙^(s); ψ)] - 𝔻_KL( q(𝐙^(s)|𝐗^(s);ϕ) ‖q̅(𝐙^(s)) ) ],
in which q̅(𝐙^(s)) is the normal prior distribution. Note that we introduce ERDiff with this basic VAE architecture. But ERDiff can be combined with many variants of VAE <cit.>. The essence of ERDiff is to tune the parameter set ϕ of the probabilistic encoder, regardless of the structure of the encoder and decoder.
Alignment methods that directly match the discrete samples from the source and target domains in a pair-wise fashion may lead to sub-optimal solutions <cit.>, especially when the collected samples from the target domain are limited.
Given the target-domain neural activity 𝐗^(t), we propose to perform distribution alignment via maximum likelihood estimation (MLE):
argmax_ϕ 𝔼_𝐗∼ p(𝐗^(t))[log p_s(h(𝐗;ϕ))] = argmax_ϕ 𝔼_𝐙∼ q(𝐙|𝐗^(t);ϕ)[log p_s(𝐙)],
in which p_s(𝐙) represents the ground-truth probability density of latent dynamics in the source domain and h(·) refers to the non-linear transformation from 𝐗 to 𝐙 underlying the probabilistic encoder q(𝐙|𝐗;ϕ). The objective in Eq. <ref> implies that, instead of minimizing a distance metric between source observations and target observations, we aim at maximizing the likelihood where the density comes from the source domain and the data comes from the target domain. The left-hand-side (LHS) is the MLE for observation 𝐗 and the right-hand-side (RHS) is the MLE for latent 𝐙. We will focus on the RHS in the following sections. Note that the RHS objective implies that we will optimize the encoder parameter ϕ during alignment so that the latent encoder will map 𝐗^(t) to a proper latent 𝐙^(t) that fits the source density p_s(𝐙) well.
§.§ Spatio-temporal Structure Extraction and Source Domain Learning
In order to calculate the objective function in Eq. <ref>, we will need to know two density functions: q(𝐙|𝐗;ϕ) is defined in the original VAE model with the learnable parameter ϕ; p_s(𝐙) is the density of latent 𝐙 for the source domain. The latter is inaccessible by building a VAE alone. Therefore, the first step is to learn p_s(𝐙) given only 𝐗^(s). We propose to learn p_s(𝐙) through training a DM.
To fully capture p_s(𝐙), the DM should consider the overall spatio-temporal structure of latent dynamics. To extract such a structure, the DM cannot treat each latent state or time bin in latent dynamics as mutually independent and feed them into the model sequentially. We thus take the entire trial of latent dynamics 𝐙^(s)_0 ∼ q(·|𝐗^(s);ϕ_s) as input to the DM for training. Specifically, the DM fits p_s(𝐙) through the training of a denoiser ϵ(𝐙, t; θ_s) : (ℝ^l × d×ℝ) →ℝ^l × d.
Next, we describe the architecture of ϵ(𝐙, t; θ_s), which is refined for extracting the global spatio-temporal structure of latent dynamics. Traditional architecture based on 2D-Convolution Layer <cit.> focuses on capturing the local features within latent dynamics, which can hardly extract its global spatio-temporal dependency or structure. Thus, we adopt an architecture mainly derived from Diffusion Transformer (DiT) <cit.>. Specifically, we propose to use Spatio-Temporal Transformer Block (STBlock), shown in Fig. <ref>(A). Each STBlock is composed of a Spatio Transformer layer followed by a Temporal Transformer layer, which are 1-layer encoders based on multi-head self-attention. The Spatio Transformer layer takes latent states of each time bin as inputs to extract spatial structure, whereas the Temporal Transformer layer takes the entire latent trajectory of each latent space dimension as inputs to extract temporal structure. (see Appendix A for details of the architecture of DM).
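A minimal PyTorch sketch of one STBlock is given below: self-attention is applied across the d latent dimensions within each time bin (spatial), and then across the l time bins within each latent dimension (temporal). The hidden width, head count, scalar-to-hidden embedding, and the omission of timestep conditioning are simplifying assumptions relative to the DiT-style block used in ERDiff.

import torch, torch.nn as nn

class STBlock(nn.Module):
    # Spatio Transformer (across latent dimensions) followed by a Temporal Transformer (across time bins)
    def __init__(self, hidden=32, n_heads=4):
        super().__init__()
        self.spatial = nn.TransformerEncoderLayer(hidden, n_heads, dim_feedforward=4 * hidden, batch_first=True)
        self.temporal = nn.TransformerEncoderLayer(hidden, n_heads, dim_feedforward=4 * hidden, batch_first=True)

    def forward(self, h):                         # h: (batch, l, d, hidden)
        b, l, d, c = h.shape
        h = self.spatial(h.reshape(b * l, d, c)).reshape(b, l, d, c)      # attend over latent dims
        h = h.transpose(1, 2)                                             # (b, d, l, hidden)
        h = self.temporal(h.reshape(b * d, l, c)).reshape(b, d, l, c)     # attend over time bins
        return h.transpose(1, 2)                                          # back to (b, l, d, hidden)

# Usage: scalar latent states are first lifted to the hidden width (an assumed design choice)
z = torch.randn(16, 39, 8)                        # (batch, time bins, latent dims)
embed = nn.Linear(1, 32)
h = embed(z.unsqueeze(-1))                        # (16, 39, 8, 32)
out = STBlock()(h)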
For the training objective of ϵ(·; θ_s), we sample noisy targets 𝐙^(s)_t and minimize the following DSM loss function:
θ_s = argmin_θ 𝔼_t ∼𝒰[0, T]𝔼_𝐙^(s)_0 ∼ q(·|𝐗^(s);ϕ_s), ϵ∼𝒩(0, 𝐈_l × d)[ w(t)^2 ‖ϵ - ϵ(𝐙^(s)_t, t; θ) ‖_2^2 ].
We note that 𝐙^(s)_0 here are actually latent dynamics inferred via VAE in Eq. <ref>. To enrich the input samples and adequately estimate the source density p_s(𝐙) as motivated earlier, we propose to learn the VAE objective (Eq. <ref>) and the diffusion objective (Eq. <ref>) simultaneously. In each training iteration, conditioning on the current value of ϕ_s and ψ_s, we obtain a set of 𝐙_0=h(𝐗^(s); ϕ_s) and use it as 𝐙^(s)_0 to optimize Eq. <ref>. We can also optimize VAE first, obtain an optimal ϕ_s, and use it to optimize Eq. <ref>. Experimental results show that the former approach achieves higher density estimation performance compared to the latter (see Appendix A for details).
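The cooperative learning loop can be sketched as follows, with stand-in modules in place of the sequential VAE (Poisson likelihood, behavior decoder) and of the STBlock denoiser; the sizes, the simple noise schedule, and the mean-squared reconstruction term are illustrative assumptions meant only to show the simultaneous optimization of the two objectives.

import torch, torch.nn as nn

l, n, d = 39, 100, 8
enc = nn.GRU(n, d, batch_first=True)              # stand-in probabilistic encoder (mean only)
dec = nn.Linear(d, n)                             # stand-in observation decoder
den = nn.Sequential(nn.Linear(d, 64), nn.ELU(), nn.Linear(64, d))   # stand-in denoiser

opt = torch.optim.Adam([*enc.parameters(), *dec.parameters(), *den.parameters()], lr=1e-3)
x_src = torch.randn(32, l, n)                     # placeholder source-domain activity
for step in range(3):
    z0, _ = enc(x_src)                            # (batch, l, d) inferred latents
    recon = ((dec(z0) - x_src) ** 2).mean()       # stand-in for the ELBO terms
    eps = torch.randn_like(z0)
    t = torch.rand(z0.shape[0], 1, 1)             # continuous noise scale in [0, 1]
    zt = (1 - t).sqrt() * z0.detach() + t.sqrt() * eps   # detach: the DSM loss trains only the denoiser
    dsm = ((eps - den(zt)) ** 2).mean()           # noise-residual DSM loss on the current latents
    (recon + dsm).backward(); opt.step(); opt.zero_grad()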
§.§ Spatio-temporal Structure Recovery and Distribution Alignment
Given the trained denoiser ϵ(𝐙, t; θ_s), we go through the reverse process from t=T to t=0 in Eq.(<ref>) and obtain the marginal distribution p_0(𝐙 ; θ_s). We use p_0(𝐙 ; θ_s) to approximate p_s(𝐙) in Eq. (<ref>). The maximum likelihood estimation can thus be written as
argmax_ϕ 𝔼_𝐙∼ q(𝐙|𝐗^(t);ϕ)[log p_0(𝐙;θ_s)].
We perform alignment by tuning the parameter set ϕ of the probabilistic encoder while keeping the DM p_0(𝐙;θ_s) fixed. Note that we already optimize the VAE objective to obtain an optimal ϕ_s using source data. During alignment, we first set ϕ as be ϕ_s and then only tune a small portion of ϕ (e.g., neural observation read-in layer). Consequently, we not only initialize the model with a good encoder but also make optimization during alignment much faster and more efficient.
In the reverse process, the computation of log p_0(𝐙;θ_s) is tractable through the probability flow ODE <cit.> whose marginal distribution at each time step t matches that of our VP-SDE. However, the direct computation of log p_0(𝐙;θ_s) will require invoking the ODE solver in each intermediate time step <cit.>. Such complexity is prohibitively costly for online neural applications.
To circumvent the issue, we can reform Eq. (<ref>) as follows:
- 𝔼_𝐙∼ q(𝐙|𝐗^(t);ϕ)[log p_0(𝐙;θ_s)] = 𝔻_KL( q(𝐙|𝐗^(t);ϕ) ‖ p_0(𝐙;θ_s) ) + ℍ( q(𝐙|𝐗^(t);ϕ) ),
where the first term is the KL divergence from the DM marginal distribution p_0(𝐙;θ_s) to the probabilistic encoder distribution q(𝐙|𝐗^(t);ϕ), and the second term ℍ(·) denotes the differential entropy. For the 𝔻_KL(·) term in Eq. (<ref>), via the Girsanov theorem <cit.>, we have
𝔻_KL( q(𝐙|𝐗^(t);ϕ) ‖ p_0(𝐙;θ_s) ) ⩽ℒ_DSM(ϕ,θ_s) + 𝔻_KL( p_T(𝐙;θ_s) ‖π(𝐙) ),
where ℒ_DSM is the denoising score matching loss in Eq. (<ref>), and p_T(·) is the distribution at final time step T of Eq. (<ref>). Consequently, we could obtain an upper bound of the maximum likelihood objective, as follows (we provide detailed derivation in Appendix B):
- 𝔼_𝐙∼ q(𝐙|𝐗^(t);ϕ)[log p_0(𝐙;θ_s)] ⩽𝔻_KL( p_T(𝐙;θ_s) ‖π(𝐙) )  (constant term)
+ 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ q(𝐙|𝐗^(t);ϕ), ϵ∼𝒩(0, I_l × d)[ w(t)^2 ‖ϵ - ϵ(𝐙_t, t; θ_s) ‖_2^2  (weighted noise residual)  - 2 ∇_𝐙·f(𝐙_t, t)  (divergence) ].
Since π(𝐙) is a fixed prior distribution, it does not depend on parameter ϕ. Thus, our optimization objective will include only the latter two terms, which are more computationally tractable. The first objective simplifies to a weighted noise residual for parameter set ϕ and the second divergence objective can be approximated using the Hutchinson-Skilling trace estimator <cit.>. We note that the recovery of spatio-temporal structure is primarily conducted by the weighted noise residual part, in which the probabilistic encoder obtains alignment guidance in awareness of spatio-temporal structure from ϵ(𝐙, t; θ_s). This procedure is illustrated in Fig. <ref>(B).
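The divergence term can be estimated with a Hutchinson-Skilling estimator as sketched below. Note that for the linear VP-SDE drift f(𝐙,t) = -β(t)𝐙/2 the trace is available in closed form, so the stochastic estimate (shown for generality) can be checked against it; the β value is illustrative.

import torch

def hutchinson_divergence(f, z, n_samples=1):
    # Hutchinson-Skilling estimate of the divergence (Jacobian trace) of the vector field f at z.
    z = z.requires_grad_(True)
    div = 0.0
    for _ in range(n_samples):
        v = (torch.rand_like(z) < 0.5).to(z.dtype) * 2 - 1          # Rademacher probe vector
        fz = f(z)
        (grad,) = torch.autograd.grad(fz, z, grad_outputs=v, create_graph=True)
        div = div + (grad * v).flatten(1).sum(dim=1)                 # v^T J v per batch element
    return div / n_samples

beta = 0.1                                                           # illustrative value
z = torch.randn(4, 39, 8)
est = hutchinson_divergence(lambda u: -0.5 * beta * u, z)
print(est.mean().item(), -0.5 * beta * 39 * 8)                       # estimate vs. exact trace of the linear drift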
In distribution alignment, it is a common practice to directly leverage the ground-truth data samples by introducing a regularizer term in the optimization function. To encourage the diversity of latent dynamics after alignment, here we further compute and penalize the Sinkhorn Divergence <cit.> between the latent dynamics samples of source domain 𝐙^(s)∼ q(·|𝐗^(s);ϕ_s) and that of target domain 𝐙^(t)∼ q(·|𝐗^(t);ϕ):
min_γ ⟨γ, 𝐂⟩_F + λℍ(γ),
where each value 𝐂[i][j] = 𝐙_i^(s)-𝐙_j^(t)_2^2 in cost matrix 𝐂 denotes the squared Euclidean cost to move a probability mass from 𝐙_i^(s) to 𝐙_j^(t), and ℍ(γ) computes the entropy of transport plan γ. The total loss for distribution alignment is composed of the term in (<ref>) and the latter two terms on the right side of (<ref>). We note that the total loss is minimized with respect to the encoder parameter ϕ. (see Appendix C for the total loss formula and the detailed algorithm of ERDiff.)
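A self-contained version of the regularizer can be written with plain Sinkhorn iterations as sketched below; the cost rescaling, the regularization weight, and the iteration count are illustrative stabilization and tuning choices, and an off-the-shelf optimal-transport library could be used instead.

import torch

def sinkhorn_plan(zs, zt, lam=0.1, n_iters=200):
    # Entropic-OT plan between source latents zs (Ns, l, d) and target latents zt (Nt, l, d);
    # squared Euclidean cost between flattened trials, uniform marginals.
    C = torch.cdist(zs.flatten(1), zt.flatten(1)) ** 2
    C = C / C.mean()                              # rescale the cost for numerical stability (assumption)
    K = torch.exp(-C / lam)
    a = torch.full((C.shape[0],), 1.0 / C.shape[0])
    b = torch.full((C.shape[1],), 1.0 / C.shape[1])
    u = torch.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.t() @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return P, (P * C).sum()                       # transport plan and (rescaled) transport cost

zs, zt = torch.randn(64, 39, 8), torch.randn(48, 39, 8)
P, cost = sinkhorn_plan(zs, zt)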
§ EXPERIMENTS
Datasets. We first train and evaluate ERDiff with a synthetic dataset.
Then we apply ERDiff to a non-human primate (NHP) dataset with neural recordings from
the primary motor cortex (M1), in which the primates are performing a center-out reaching task in 8 different directions. The NHP dataset contains rich cross-day and inter-subject settings that provide us with an ideal test bed.
Baselines for Comparison. We compare ERDiff against the following two strong baselines proposed for the neural distribution alignment task:
∙ JSDM <cit.>: a metric-based method that leverages discrete samples from both the source and target domains. The alignment is performed through the symmetric Jensen–Shannon divergence <cit.>.
∙ Cycle-GAN <cit.>: a state-of-the-art GAN-based method that uses cycle-consistent adversarial networks to align the distributions of latent dynamics.
Considering the neural observations and latent dynamics are in the format of multi-variate time series, we also compare ERDiff with the following deep learning-based methods aiming at distribution alignment for time series:
∙ SASA <cit.>: a metric-based distribution alignment method for time series data regression task through the extraction of domain-invariant representation.
∙ RDA-MMD <cit.>: a distribution alignment method via minimizing MMD Loss between the latent dynamics extracted from LSTM.
∙ DAF <cit.>: an adversarial learning framework that uses a transformer-based shared module with a domain discriminator. During the adaptation step, the domain-invariant features (𝐐, 𝐊 of self-attention) are kept fixed, while the domain-specific features (𝐕 of self-attention) continue to be tuned.
§.§ Synthetic Dataset
Data Synthesis and Evaluation Metrics. We first generate a simulated latent dynamics dataset to illustrate the effect of our ERDiff method on spatio-temporal structure preserving and distribution alignment performance. In this setting, we consider modeling the nonlinear latent dynamics to follow conditionally Continuous Bernoulli (CB) <cit.> distribution. For each single-trial latent dynamics, we generate 2-dimensional latent variables 𝐙 = {𝐳_1:L} and their 32-dimensional observations 𝐗 = {𝐱_1:L}, where L=32. We use the following synthesis process and parameter settings to generate samples for the source and target domains, respectively:
p(𝐳^(s)_l+1|𝐳^(s)_l)=∏_d 𝒞 ℬ(𝐳^(s)_l+1, d|𝐖^(s)tanh (𝐳^(s)_l, d) ), p(𝐱^(s)_l|𝐳^(s)_l)=𝒩(𝐱^(s)_l|𝐑^(s)𝐳_l^(s), 𝐊),
p(𝐳^(t)_l+1|𝐳^(t)_l)=∏_d 𝒞 ℬ(𝐳^(t)_l+1, d|𝐖^(t)tanh (𝐳^(t)_l, d) ), p(𝐱^(t)_l|𝐳^(t)_l)=𝒩(𝐱^(t)_l|𝐑^(t)𝐳_l^(t), 𝐊),
where l ∈{1, …, L}, and {𝐖^(s), 𝐑^(s)}, {𝐖^(t), 𝐑^(t)} are the specific parameter sets of the source and target domains. To compare and evaluate the latent dynamics alignment performance, we estimate the trial-average log density of the aligned latent dynamics evaluated at the optimal generation distribution: 1/L ∑_l=0^L-1log q^*(𝐳^(t)_l), and the trial-average KL Divergence to the optimal latent dynamics distribution: 1/L ∑_l=0^L-1𝔻_KL( P_ϕ^*(𝐳^(t)_l+1|𝐳^(t)_l) P_ϕ^(t)(𝐳^(t)_l+1|𝐳^(t)_l) ). (see Appendix D for model details).
Results on a Synthetic Dataset. We repeat the simulation experiment five times and report the mean and standard deviation of each method in the above two quantitative evaluation metrics, shown in Fig. <ref>(A). We observe that ERDiff achieves higher alignment performance on both two evaluation metrics compared to baseline methods.
For further analysis, we plot the phase portrait of the true source domain and those inferred by ERDiff and JSDM in Fig. <ref>(B). Compared to JSDM, ERDiff can extract and recover the spatio-temporal structure of the synthetic latent dynamics more precisely and be much closer to the ground truth. This result is because ERDiff obtains structure-aware alignment signals from the DM while JSDM neglects this structural information.
§.§ Neural Dataset
Motor Cortex Dataset Description. We conduct experiments with datasets collected from the primary motor cortex (M1) of two non-human primates (`C' & `M') <cit.>. The primates have been trained to reach one of eight targets at different angles (Fig. <ref>A). Neural recordings from these two primates have been widely studied
<cit.>. During such a process, their neural spike activities (signals) in the primary motor cortex (M1) along with the reaching behavior velocity were recorded. They performed the center-out reaching task multiple times in each direction and only successful trials were saved. For our experimental evaluation purpose, we select the trials from three recording sessions for each primate per day. In total, we have 3 days for each primate. We will perform cross-day (recordings of the same primate performing the task on different days) and inter-subject (recordings of different primates) experiments.
Data Processing and Evaluation Metrics. The neural recordings of each day and each primate consist of about 180-220 trials across 3 sessions. For each trial, about 200 neurons are recorded and the number of time bins is 39 with 20ms intervals. We also bin the velocity of the primate's behavior into 39 bins. Therefore, we have time-aligned neural data and behavioral data. When training with the source data, we optimize the VAE model together with the DM. One thing we need to emphasize here is that we also add a velocity-decoding loss to the VAE loss. The decoder, a regression model, maps the neural latents to velocity values. Therefore, the inferred latent contains a good amount of velocity information. During testing, we align the test neural data to the training neural data so that we can directly apply the velocity decoder to the latent in the test data without performance degradation. In the training session, the ratio of training and validation set is split as 80%:20% through 5-fold cross-validation. The post-processed dataset of primate `C' contains 586 trials in total while that of primate `M'
contains 632 trials. For the evaluation protocol, since we cannot access the ground-truth source domain of latent dynamics, we use the behavior decoding performance to evaluate the performance of latent dynamics alignment. Here we compare the true behavior velocity with the decoded one in the test data using R-squared values (R^2, in %) and root-mean-square error (RMSE). To verify the generalization of each method in latent dynamics alignment, we make full use of the dataset collected in chronological order. We perform 6 sets of cross-day experiments and 10 sets of inter-subject experiments, all repeated over 5 different random seeds. We report the mean and standard deviation of each method on all experimental conditions in Table <ref>.
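For reference, the two metrics can be computed as in the sketch below; the exact aggregation across trials, time bins, and the two velocity components may differ from the protocol used for the tables.

import numpy as np

def decoding_metrics(v_true, v_pred):
    # R^2 (in %) and RMSE between true and decoded velocities, shape (n_trials, n_bins, 2)
    vt = v_true.reshape(-1, v_true.shape[-1])
    vp = v_pred.reshape(-1, v_pred.shape[-1])
    ss_res = np.sum((vt - vp) ** 2)
    ss_tot = np.sum((vt - vt.mean(axis=0)) ** 2)
    r2 = 100 * (1 - ss_res / ss_tot)
    rmse = np.sqrt(np.mean((vt - vp) ** 2))
    return r2, rmse

v_true = np.random.randn(100, 39, 2)
v_pred = v_true + 0.1 * np.random.randn(*v_true.shape)
print(decoding_metrics(v_true, v_pred))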
Experimental Setup. The VAE is based on a sequential architecture <cit.>, in which recurrent units are applied in both the probabilistic encoder and decoder. We also add domain knowledge of our alignment task into the model structure: a behavior regression decoder is cooperatively trained from the latent dynamics so that the behavior semantics information is complementarily provided during the neural manifold learning. Poisson negative log-likelihood loss is used for firing rate reconstruction and mean squared error is used for behavior regression. We use the Adam Optimizer <cit.> for optimization and the learning rate is chosen between {0.005, 0.01}. The batch size is uniformly set as 64. Despite the varying size of the input dimension (due to the varying number of recorded neurons in different sessions), the latent space dimension size is set as 8 for all the methods for a fair comparison. We use the dropout technique <cit.> and the ELU activation function <cit.> between layers in our probabilistic encoder and decoder architecture. During latent dynamics alignment, we perform parameter tuning only on the read-in layer of the probabilistic encoder while keeping the remaining layers fixed. (further details on hyperparameters and training settings can be found in Appendix D).
Neural Manifold Analysis. Considering the interpretability <cit.> and strong latent semantics contained in neural manifold <cit.>, we conduct a case study based on it to verify the spatio-temporal structure preserving capability and alignment performance of ERDiff. In Fig. <ref>(B), we plot the averaged latent dynamics of each direction in the source domain, which is based on one recording on primate `C' using demixed Principle Component Analysis (dPCA) <cit.>. The parameters of dPCA are fit with the source-domain latent dynamics while being fixed when applied to perform the transformation in the target domain. In Fig. <ref>(C), we plot the averaged latent dynamics of each direction aligned by ERDiff and two representative baseline methods (DAF and JSDM) under both cross-day and inter-subject settings.
Under both experimental settings, the overall observation is that the spatio-temporal structures of the aligned results of ERDiff are much more coherent with that of the source domain. The results of DAF and JSDM roughly recover the direction of dynamics but fail to preserve the spatio-temporal structure tightly. That is because JSDM neglects the spatio-temporal structure during alignment and it is difficult for adversarial networks to capture such structure implicitly in DAF. Additionally, the averaged latent dynamics of each direction are much more clearly separated through ERDiff. We owe this outcome to the fact that providing extra guidance on the spatio-temporal structure would also facilitate the model to align directions properly. Additionally, without any mean offset alignment, the starting points (from the bottom center) and ending points of latent dynamics are also aligned well with the source domain, further verifying the structure recovering ability of ERDiff.
Decoding Performance Comparison
Table <ref> shows a comparison of the R-squared value (R^2) and average RMSE on both the cross-day and inter-subject settings, while Fig. <ref>(A) depicts the decoded velocity trajectories of a subset of trials in the cross-day setting for each method. We have the following observations: (1) Compared to traditional alignment methods in the computational neuroscience field, deep learning-based methods additionally model the sequential information of the latent dynamics, thus achieving better alignment results, which reflects the importance of spatio-temporal structure modeling. In most cases, ERDiff achieves the highest decoding accuracy and alignment performance among all methods.
(2) From the ablation study shown at the bottom of Table <ref>, we find that both the Spatial Transformer layer and Temporal Transformer layer are key components in ERDiff, verifying the effectiveness of spatio-temporal structure modeling.
(3) As shown in Fig. <ref>(A), the spatio-temporal structure of the latent dynamics is well-preserved in the result of ERDiff. Compared to baselines, the smoothness and continuity of the trajectory decoded by ERDiff are also more satisfying.
Impact of sampling density in the target domain.
We verify the consistent performance of ERDiff in few-shot target-sample circumstances. In Fig. <ref>(B), we analyze the impact of sampling density of the target domain on decoding performance. The setting is that we sample a portion of target-domain data to learn the alignment and apply the alignment to the entire target domain. Although the sampling density drops from 50% to 10%, our results demonstrate that ERDiff continues to produce fairly consistent decoding accuracy with a small drop. This result validates our argument that ERDiff both preserves the dynamics structure underlying neural activities and maintains the characteristics of the target domain. In comparison, the performance of baseline methods shrinks drastically because they lack prior knowledge of the spatio-temporal structure.
We can conclude that the DM in ERDiff is capable of extracting the spatio-temporal structure in the source domain latent dynamics, providing valuable inductive bias in recovering such structure during distribution alignment.
§ CONCLUSION
In this work, we propose a new method named ERDiff, for solving the distribution alignment issue in real-world neuroscience applications (e.g., brain-computer interfaces). Firstly, with the source domain, we propose to use a diffusion model to extract the spatio-temporal structures within the latent dynamics of trials. Next, in the alignment phase with the target domain, the spatio-temporal structure of latent dynamics is recovered through the maximum likelihood alignment based on the diffusion model. Experimental results on synthetic and real-world motor cortex datasets verify the effectiveness of ERDiff in the enhancement of long-term robustness and behavior decoding performance from neural latent dynamics.
§ METHODOLOGY DETAILS
§.§ DM Architecture Details
We adopt the architecture of DM mainly derived from Diffusion Transformer (DiT) <cit.>. The vanilla DiT architecture is based on techniques like patchify <cit.> and transformer layer for tokens <cit.>, which are well-suited for image feature extraction. This is because the above techniques focus on local feature extraction and the global feature can also be implicitly captured through the stacking of token-based Transformer layers. However, since the neural observations and latent dynamics are multivariate time series, patchify and local feature extraction lose their semantic meaning. There is no mathematical guarantee or neuroscience observation that adjacent latent dimensions of the data matrix have a stronger connection than far-apart latent dimensions. Thus, directly adopting the traditional DiT architecture into this setting may lead to sub-optimal solutions.
To fully utilize the domain knowledge of our task, we novelly propose to use the Spatio-Temporal Transformer Block (STBlock). Each STBlock is mainly composed of a Spatio Transformer layer followed by a Temporal Transformer layer, which are 1-layer encoders based on multi-head self-attention. Since there exists underlying dependency and structure between latent state dimensions, the Spatio Transformer layer takes latent states of each time bin as inputs to extract their spatial structure. Whereas the Temporal Transformer layer takes the entire latent trajectory of each latent space dimension as inputs to extract its underlying temporal structure. We note that we use the sinusoidal position embeddings <cit.> to encode the timestep t (i.e., noise scale) into the deep neural network of DM. In each STBlock, the input sequentially goes through:
∙ Spatio Transformer: Layer Normalization → Multi-head Self-attention Layer (along time bins) → Point-wise Feed-forward.
∙ Temporal Transformer: Layer Normalization → Multi-head Self-attention Layer (along latent space dimensions) → Point-wise Feed-forward.
We illustrate the main architecture of DM in Fig. <ref>(A), and implement the DM in Pytorch <cit.>.
§.§ VAE and DM Cooperative Source Domain Learning Details
In DM training, we note that 𝐙^(s)_0 here are actually latent dynamics inferred via VAE in Eq. <ref>. Considering the limited number of source-domain latent dynamics, we wish to perform data augmentation so that the DM can adequately estimate p_s(𝐙). Here we propose to enrich the input samples by learning the VAE objective (Eq. <ref>) and the diffusion objective (Eq. <ref>) cooperatively. Through the learning process of the VAE objective (i.e., ELBO), the optimization process with stochastic gradient descent (SGD) adds auxiliary perturbation to the original data samples 𝐙^(s)_0 rather than Gaussian noise. This technique further fills the sample space of 𝐙^(s)_0, leading to better density estimation. Specifically, in each training iteration, conditioning on the current value of ϕ_s, we infer a set of 𝐙_0=h(𝐗^(s); ϕ_s) and use it as the temporal 𝐙^(s)_0 to optimize Eq. <ref>. The traditionally sequential approach is that we fully optimize VAE first, obtain an optimal ϕ_s, and use it to optimize Eq. <ref>. Experimental results show that the former approach achieves higher density estimation and alignment performance. Fig. <ref> manifests the training loss curve in three source-target neural recording session pairs. We observe that despite the relatively under-fitting at the early stage, the cooperative source domain learning paradigm converges to solutions with lower losses and better fits. Table <ref> manifests that our cooperative source domain learning paradigm leads to higher distribution alignment and behavior decoding performance.
§ DETAILED DERIVATION OF MAXIMUM LIKELIHOOD ALIGNMENT
§.§ Relationship between KL-Divergence and DSM Loss
Under assumptions in Appendix A of <cit.>, the KL divergence between the ground truth density q(𝐙|𝐗^(t) ; ϕ) and the DM marginal distribution p_0(𝐙 ; θ_s) can be derived as:
𝔻_KL(q(𝐙|𝐗^(t) ; ϕ) p_0(𝐙 ; θ_s))
(i)⩽ 𝔼_t ∼𝒰[0, T]𝔼_𝐙∼ q(·|𝐗^(t);ϕ)[λ(t)(∇_𝐙log p_t(𝐙;θ_s)-s(𝐙, t;θ_s)) d𝐰] + 𝔻_KL(p_T(𝐙 ; θ_s) π(𝐙))
+ 1/2𝔼_t ∼𝒰[0, T]𝔼_𝐙∼ q(·|𝐗^(t);ϕ)[ λ(t)^2∇_𝐙log p_t(𝐙;θ_s)-s(𝐙, t;θ_s)_2^2 d t]
(i i)= 𝔼_t ∼𝒰[0, T]𝔼_𝐙∼ q(·|𝐗^(t);ϕ)[λ(t)^2∇_𝐙log p_t(𝐙;θ_s)-s(𝐙, t;θ_s)_2^2 d t] + 𝔻_KL(p_T(𝐙 ; θ_s) π(𝐙))
(i i i)= 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ q(·|𝐗^(t);ϕ), p_0 t(𝐙_t |𝐙_0)[λ(t)^2∇_𝐙_tlog p_0 t(𝐙_t |𝐙_0)-s(𝐙_t, t ; θ)_2^2]
+ 𝔻_KL(p_T(𝐙 ; θ_s) π(𝐙))
= ℒ_DSM(ϕ,θ_s)+𝔻_KL(p_T(𝐙;θ_s) π(𝐙)),
in which (i) is due to the Girsanov Theorem <cit.>, in (ii) we invoke the martingale property of Itô integrals <cit.>, and in (iii) we use the denoising score matching (DSM) technique <cit.>. Thus, we arrive at Eq. <ref>.
§.§ Upper Bound of Maximum Likelihood Alignment Objective
In Section <ref>, by substituting Eq. <ref> into Eq. <ref>, we have
- 𝔼_𝐙∼ q(𝐙|𝐗^(t);ϕ)[log p_0(𝐙;θ_s)] ⩽ℒ_DSM(ϕ,θ_s)+𝔻_KL(p_T(𝐙;θ_s) π(𝐙))
+ ℍ(q(𝐙|𝐗^(t);ϕ) ).
We note that the third term ℍ(·) depends on the parameter set ϕ of the probabilistic encoder. As q(𝐙|𝐗^(t);ϕ) ≈ p_0(𝐙;θ_s), we have
ℍ(q(𝐙|𝐗^(t);ϕ) ) -ℍ(p_T(𝐙;θ_s))≈∫_T^0 ∂/∂ tℍ(p_t(𝐙;θ_s)) d t
(i)= 𝔼_t ∼𝒰[0, T]𝔼_𝐙∼ q(𝐙|𝐗^(t);ϕ)[2 f(𝐙, t)^⊤∇_𝐙log p_t(𝐙;θ_s)-λ(t)^2∇_𝐙log p_t(𝐙;θ_s)_2^2] d t
(ii)= - 𝔼_t ∼𝒰[0, T]𝔼_𝐙∼ q(𝐙|𝐗^(t);ϕ)[2 ∇_𝐙·f(𝐙, t) + λ(t)^2∇_𝐙log p_t(𝐙;θ_s)_2^2] d t
(iii)= - 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ q(·|𝐗^(t);ϕ), p_0 t(𝐙_t |𝐙_0)[2 ∇_𝐙_t·f(𝐙_t, t) + λ(t)^2∇_𝐙_tlog p_0 t(𝐙_t |𝐙_0)_2^2] d t
where in both (i) and (ii) we use integration by parts, and in (iii) we use denoising score matching (DSM) <cit.>. Moving the second term on the LHS of Eq. <ref> to the RHS and then substituting the third term on the RHS of Eq. <ref>, we have
- 𝔼_𝐙∼ q(𝐙|𝐗^(t);ϕ)[log p_0(𝐙;θ_s)] ⩽𝔻_KL(p_T(𝐙;θ_s) π(𝐙))
- 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ q(·|𝐗^(t);ϕ), p_0 t(𝐙_t |𝐙_0)[
λ(t)^2∇_𝐙_tlog p_0 t(𝐙_t |𝐙_0)_2^2]
+ 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ q(·|𝐗^(t);ϕ), p_0 t(𝐙_t |𝐙_0)[
λ(t)^2∇_𝐙_tlog p_0 t(𝐙_t |𝐙_0) -s(𝐙_t, t ; θ)_2^2]
- 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ q(·|𝐗^(t);ϕ), p_0 t(𝐙_t |𝐙_0)[ - 2 ∇_𝐙_t·f(𝐙_t, t)].
Since the transition probability p_0 t(𝐙_t |𝐙_0) is a fixed Gaussian distribution that does not depend on the parameter set ϕ, we can eliminate the corresponding term in Eq. <ref> and rewrite the above equations as:
- 𝔼_𝐙∼ q(𝐙|𝐗^(t);ϕ)[log p_0(𝐙;θ_s)] ⩽𝔻_KL(p_T(𝐙;θ_s) π(𝐙))
+ 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ q(·|𝐗^(t);ϕ), p_0 t(𝐙_t |𝐙_0)[
λ(t)^2∇_𝐙_tlog p_0 t(𝐙_t |𝐙_0) -s(𝐙_t, t ; θ)_2^2]
- 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ q(·|𝐗^(t);ϕ), p_0 t(𝐙_t |𝐙_0)[ - 2 ∇_𝐙_t·f(𝐙_t, t)].
By substituting the denoiser function ϵ(𝐙_t, t; θ) for the score function s(𝐙_t, t; θ) in Eq. <ref>, we have:
- 𝔼_𝐙∼ q(𝐙|𝐗^(t);ϕ)[log p_0(𝐙;θ_s)] ⩽𝔻_KL(p_T(𝐙;θ_s) π(𝐙))
+ 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ q(𝐙|𝐗^(t);ϕ), ϵ∼𝒩(0, I_l × d)[
w(t)^2 ϵ-ϵ(𝐙_t, t; θ_s)_2^2 - 2 ∇_𝐙_t·f(𝐙_t, t)].
§ DETAILED ALGORITHM
§.§ Overall Alignment Loss Function in ERDiff
Extracting the latter two terms from Eq. <ref>, we have the following main Maximum Likelihood Alignment (MLA) loss:
ℒ_MLA(ϕ) = 𝔼_t ∼𝒰[0, T]𝔼_𝐙_0 ∼ q(𝐙|𝐗^(t);ϕ), ϵ∼𝒩(0, I_l × d)[
w(t)^2 ϵ-ϵ(𝐙_t, t; θ_s)_2^2 - 2 ∇_𝐙_t·f(𝐙_t, t)].
Then, considering the Sinkhorn Regularizer term in Eq. <ref>:
ℒ_SHD(ϕ) = min_γ ⟨γ, 𝐂⟩_F + λℍ(γ),
where each value 𝐂[i][j] = 𝐙_i^(s)-𝐙_j^(t)_2^2 in cost matrix 𝐂 denotes the squared Euclidean cost from 𝐙_i^(s) to 𝐙_j^(t), ℍ(γ) computes the entropy of transport plan γ, and λ refers to the weight of the entropy term.
Then, we can find the optimal ϕ_t via the minimization of the following total loss function:
ϕ_t = argmin_ϕ [(1 - α)ℒ_MLA + αℒ_SHD],
where α∈ [0,1] is a trade-off parameter that weights the importance of the Sinkhorn Regularizer term.
§.§ Algorithm for Source Domain Learning of ERDiff
[Algorithm: Source Domain Learning of ERDiff]
§.§ Algorithm for Maximum Likelihood Alignment of ERDiff
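In brief, maximum likelihood alignment repeatedly updates the probe encoder ϕ with the gradient of Eq. <ref> while keeping the source-trained denoiser θ_s frozen. A minimal PyTorch sketch of one such iteration is given below; the Sinkhorn solver is a plain Sinkhorn-Knopp iteration, and the encoder/denoiser interfaces and the q_sample forward-noising helper are assumptions for illustration.

```python
import torch

def sinkhorn_cost(zs, zt, reg=0.1, n_iter=50):
    """Entropic-regularized transport cost <gamma, C>_F between source and target latent samples."""
    C = torch.cdist(zs, zt) ** 2                       # squared Euclidean cost matrix
    K = torch.exp(-C / reg)
    a = torch.full((zs.shape[0],), 1.0 / zs.shape[0], device=zs.device)
    b = torch.full((zt.shape[0],), 1.0 / zt.shape[0], device=zt.device)
    u = torch.ones_like(a)
    for _ in range(n_iter):                            # Sinkhorn-Knopp scaling iterations
        v = b / (K.t() @ u)
        u = a / (K @ v)
    gamma = u[:, None] * K * v[None, :]                # approximate optimal transport plan
    return torch.sum(gamma * C)

def alignment_step(x_tgt, z_src, encoder, denoiser, q_sample, opt_phi, alpha=0.5, T=1000):
    """One update of the target-domain encoder phi; the source denoiser theta_s stays frozen."""
    opt_phi.zero_grad()
    z_tgt = encoder(x_tgt)                             # latent dynamics q(Z | X^(t); phi)
    t = torch.randint(0, T, (z_tgt.shape[0],), device=z_tgt.device)
    eps = torch.randn_like(z_tgt)
    z_t = q_sample(z_tgt, t, eps)                      # forward-noised latents under the fixed schedule
    # L_MLA: denoising term (the drift-divergence term is constant in phi for a VP-type drift)
    mla = torch.mean((eps - denoiser(z_t, t)) ** 2)
    # L_SHD: structure-aware Sinkhorn regularizer between source and target latents
    shd = sinkhorn_cost(z_src.flatten(1), z_tgt.flatten(1))
    loss = (1.0 - alpha) * mla + alpha * shd
    loss.backward()
    opt_phi.step()
    return loss.item()
```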
§ EXPERIMENTAL DETAILS
§.§ Model Architectures and Hyperparameters
We include model architectures and hyperparameters of both the synthetic dataset and the neural dataset in Table <ref>.
§.§ More Visualization Results
On the neural dataset, we plot the trial-based latent dynamics under both cross-day (Fig. <ref>) and inter-subject (Fig. <ref>) settings. We can first observe that ERDiff preserves the spatio-temporal structure of the latent dynamics much better than the baseline JSDM. Then, despite the fact that DAF also largely retains the spatio-temporal structure through adversarial training, ERDiff aligns the direction of the latent dynamics more accurately. These results demonstrate that providing inductive biases on the latent dynamics structure not only gives guidance for preserving the spatio-temporal structure but also makes it easier for the method to align the direction of the latent dynamics accurately.
http://arxiv.org/abs/2306.03750v1

Goal-Oriented Scheduling in Sensor Networks with Application Timing Awareness

Josefine Holm, Federico Chiariotti, Anders E. Kalør, Beatriz Soret, Torben Bach Pedersen, Petar Popovski

=================================
Taking inspiration from linguistics, the communication theory community has recently shown significant interest in pragmatic, or goal-oriented, communication. In this paper, we tackle the problem of pragmatic communication with multiple clients with different, and potentially conflicting, objectives. We capture the goal-oriented aspect through the metric of voi, which considers the estimation of the remote process as well as the timing constraints. However, the most common definition of voi is simply the mse of the whole system state, regardless of the relevance for a specific client. Our work aims to overcome this limitation by including different summary statistics, i.e., value functions of the state, for separate clients, and a diversified query process on the client side, expressed through the fact that different applications may request different functions of the process state at different times. A query-aware drl solution can outperform naive approaches based on statically defined voi by 15-20%.
§ INTRODUCTION
Semantic and goal-oriented communications <cit.> aim to go beyond the traditional domain of communication theory towards optimizing communication systems with respect to a specific task or goal. In <cit.>, Shannon and Weaver talk about the semantics and effectiveness levels of the communication problem: semantic communication corresponds to the transmission of the most meaningful information for the given context, while effective or goal-oriented communication aims at satisfying the specific requests of the receiver. Following the nomenclature in linguistics, we denote the latter as pragmatic communication. In this sense, pragmatic communication is about transmitting the most relevant information for the receiver in order to attain a certain goal, taking into account both timing constraints and the shared context, which acts as an implicit information channel between the transmitter and receiver.
This new perspective is crucial in the context of the Industry 4.0 and iiot paradigms, which aim at automating manufacturing and industrial processes in a flexible and easily reconfigurable fashion. A representative scenario is the remote estimation of the state of a system by a distributed network of low-power sensors, a classical iot problem <cit.>. The recently proposed voi metric <cit.> represents a theoretical tool to model the pragmatics of the monitoring application, as it can seamlessly integrate the timing performance of the communication network with the underlying estimation <cit.>. However, voi is often defined in a static manner: the value function is the mse of the system state, and there is an application constantly monitoring the process with a faster pace than the updates.
In many iot scenarios, this is not true, as there may be multiple clients monitoring the same process through an edge node or gateway. Each client may potentially be interested in a different function of the system state at different times <cit.>.
Different queries can then correspond to different functions of the state. A classic query is the sample average among the measured values, but non-linear functions and even order statistics can often be useful in industrial settings. For example, the number of state components that are within their normal operation parameters, or the maximum among the state components, can be helpful to trigger safety conditions and shut down machines or raise a warning to the operators. As another example, the sample variance can be useful when monitoring the strain on different components of a building or a structure, such as a bridge. In this case, the goal of the system is implicitly stated in the queries: instead of minimizing the estimation error on the state of the system as a whole, the objective is to reduce the error on the response to the specific query that a client makes. The dynamic nature of these queries, which may arrive at different times from different clients, is crucial in the optimization: a pragmatic transmitter needs to take into account not only the relevance of sensor information, but also which query might arrive next.
The general architecture we propose is shown in Fig. <ref>. The publish/subscribe model is already the standard for iot applications <cit.>, with the edge node acting as a message broker. In the system model in the figure, a mec-enabled base station polls a set of sensors, which respond with their latest measurements. The edge node uses the data to estimate the overall state of the process measured by the sensors, and receives queries from client applications, which might be different functions of the state, e.g., the highest value among all sensors, or the number of sensors measuring values in a certain range <cit.>. In this work, we will consider that the edge node has enough computational power to run the estimation in real-time, but the interplay between the complexity of the scenario and the allocation of computing resources is an interesting problem that could complement our analysis <cit.>.
This setup has a clear timing context for defining goal-oriented communication: the edge node should optimize its scheduling strategy to answer queries accurately, taking into account both the nature of the estimation process and the query process. The optimal voi scheduling will then consider not only the accuracy of the estimate, i.e., the semantic value of the updates, but also when the estimation of a particular function of the state will be needed, i.e., the pragmatic relevance of the information to the receiver. In other words, the goal is not static, as in mse minimization, but dynamic, as the edge node needs to anticipate upcoming queries and place more importance on updates that will help for those specific functions (e.g., if the edge node expects a client to ask for the highest temperature among all sensors, it should avoid polling sensors that have a high probability of having an extremely low value, even if doing so increases the overall estimation mse over the state).
Fig. <ref> shows a sketch of such an example. In the figure, we plot the instantaneous mse over two hypothetical queries. Fig. <ref> shows the effect of a voi maximization strategy: the errors of both queries are approximately constant across time (as shown by the dashed line), with some fluctuations due to the dynamics of the system and channel errors, independently of the query process. On the other hand, a pragmatic system, as the one shown in Fig. <ref>, can tolerate significant increases in the mse of a query as long as the query is not active. Instead, it will aim to reduce the mse of the next upcoming query, so that the error at the time of the query is minimized. While we expect some fluctuations due to the stochastic nature of these systems, the trend represented by the dashed lines is clear, and leads on average to better responses to specific queries, even if the error when measured at a random time instant is higher. Our previous work <cit.> devised a similar metric for aoi, which can be measured at the arrival of a query instead of at any time. While the example is only an illustration of the ideal behavior of a query-aware pragmatic system, our results show that this behavior can indeed be obtained in practice.
In this work, we formulate a pragmatic[In the remainder of this paper, we use the terms pragmatic and goal-oriented interchangeably.] scheduling problem, considering both the process and the queries from multiple client applications. We define the problem as a mdp, assuming that the edge node has the full statistical knowledge of the process, and solve it with drl, as the continuous state space prevents us from using simpler methods such as tabular Q-learning.
However, the main goal of this work is not just to optimize in the specific setting we consider, but rather to provide insights on the novel problem of pragmatic communication, proposing a dynamic definition of voi and using a relatively simple use case to understand it better. The contributions of the paper are the following:
* We rigorously model the pragmatic sensor scheduling problem with multiple clients, different query functions, and different query generation processes, resulting in a dynamic objective function;
* We find a closed-form solution in a simple illustrative example, showing that the optimal policies for different queries can be extremely different and that using the wrong one can lead to significant performance loss;
* We frame the problem as a pomdp, defining the state and observation space, the reward, and the actions available to the scheduler, and use drl to obtain a scheduling solution, which is executable in real time even on an embedded edge processor;
* We provide simulation results and comparisons with static voi policies and the traditional maf policy, which minimizes the average aoi of the system, for a scenario in which two query types are present, i.e., a maximum query asking the maximum value among the state components and a count range query asking how many state components have a value within a given interval. The performance of the system is strongly dependent on the queries that arrive to it, as the error on the queries depends on the exact function of the state that they request.
The rest of the paper is organized as follows: first, we discuss the state of the art on the topic in Sec. <ref>. We then define the scheduling problem in Sec. <ref>, and model it as an mdp in Sec. <ref>, which also describes the proposed drl solution. Then, Sec. <ref> describes the simulations comparing the proposed scheme to state-of-the-art policies and their results. Finally, Sec. <ref> concludes the paper and presents some possible avenues for future work.
§ RELATED WORK
The problem of designing communication systems that communicate not only reliably, but also effectively and in a timely fashion, has recently received a significant amount of attention. During the last decade, aoi <cit.> and the associated metrics have received a significant attention in the communication engineering community. Most works in the literature study the average aoi in queues and networks of queues, using basic queuing theory to compute analytical performance curves <cit.>.
However, the original definition of the aoi can result in suboptimal outcomes if some conditions are not met, and similar metrics that can extend the definition to more general scenarios and applications have been defined <cit.>.
Firstly, aoi does not need to be linear: some recent works generalized the definition to measure any non-decreasing function of the age <cit.>, leading to different considerations in aoi optimization, as higher ages are penalized more or less depending on whether the aging function is super- or sub-linear. This can be further generalized by taking into account the actual value of the process tracked by the updates: since aoi is a proxy metric for the evolution of the tracking error over time, considering that error directly leads to better performance. The aoii <cit.> mixes a linear timing penalty with a multiplier based on the error in a discrete system, while voi <cit.> directly measures the error (either real or expected) on the estimation process, whether over a continuous-valued variable or a Markov source <cit.>.
The adaptive scheduling of sensors in iot scenarios with the goal of minimizing the aoi or related metrics is a well-studied problem in the literature. The scheduling problem can be formulated both for multiple sources, in which case it involves balancing the ages of the different sources while avoiding interference <cit.>, or for a single source with resource constraints: usually, these constraints are in the form of limited energy availability <cit.> or enforced duty cycles <cit.>. The inclusion of constraints on energy is often combined with energy-harvesting capabilities <cit.>. The most interesting works in this sense consider the voi, measured as the expected tracking error of a Kalman filter <cit.>. In general, the constraint on the accuracy is due to a communication bottleneck, which occurs due to limited bandwidth and energy, such that sensors need to reduce their transmissions as much as possible <cit.>. The selection of the subset of sensors that will transmit has been studied in this context <cit.>, often considering the correlation between neighboring sensors' measurements <cit.>. Other scenarios in which voi is used are data muling applications <cit.>, in which drones, robots, or underwater vehicles need to physically move close to sensors to collect the information <cit.>, and sensor placement problems, in which the issue is not to schedule transmissions, but rather to design the network to maximize accuracy and minimize cost <cit.>. Our own previous work <cit.> extends the definition of voi from the mse of the state to arbitrary functions, presenting a one-step optimal scheduling procedure. Another interesting development involves the modeling of the state of each sensor as a Markov chain, posing the polling problem as a pomdp <cit.> to identify sensors reporting abnormal values with the minimum energy expenditure <cit.>. For a more thorough review of the recent literature on voi, we refer the reader to <cit.>. Semantic communication is a subject that is closely related to voi, and has seen significant developments in the past few years: instead of a pre-defined value, information is evaluated and encoded based on meaning, which can only be derived implicitly, either from performance at a given task <cit.> or by learning complex patterns in high-dimensional signals such as speech <cit.> or video <cit.>.
An assumption that is shared by most of the works we discussed above is that information is always relevant, and that the application that tracks the process is always active. We can relax this assumption by considering a query process, as we did in our previous work <cit.>, in which we defined the qaoi: this metric is a sub-sampling of the aoi, only considering the instants when a query arrives from the application to be relevant for the optimization. A similar approach was adopted in defining the eaoi <cit.>, with a slightly different set of assumptions, and our theoretical results were extended in <cit.>, which showed the different outcome of the aoi and qaoi minimization problems under an update-or-wait model. Another work also models requests in the optimization function <cit.>, but it only deals with memoryless request processes, which (as we will describe in the introduction) lead to a solution that is equivalent to standard aoi minimization. The extended version of that paper <cit.> considers more complex scenarios with partial battery knowledge, but still uses the same memoryless request model. Finally, a recent work by Xu et al. <cit.> also considers a memoryless request process, but considers a mix between traditional aoi and query-aware metrics.
By only tracking the aoi when a query arrives from the application, the communication system considers not only the freshness of the received information, but also when it is needed: if, for example, the application works over discrete time intervals, transmitting more data close to the next query can reduce the bandwidth and energy usage, while maintaining the same or better accuracy from the application's point of view. We note that query awareness, and query prediction in particular, has been considered in the database literature <cit.>, and is related to the problem of predicting tasks in edge computing scenarios <cit.>. However, contrary to our work, these do not consider the significance of the fetched data.
The problem of value-oriented scheduling has also been approached in distributed control: if the agents have communication capabilities, the most valuable piece of information is the one that will allow them to improve their performance in the task. The uoi metric <cit.> directly considers how much an update would affect a known linear controller. If we consider marl agents, the problem is more complex <cit.>, as the communication policy is implicitly learned by the agents while they converge to the optimal control policy, and this approach has only been successful in simple problems <cit.> or with only one supporting agent communicating to a primary one <cit.>. However, because of the centralized nature of our pull-based setting, we limit our focus to the single-agent case, and refer the reader to <cit.> for a review of the cooperative marl literature.
This work combines and extends some of the ideas on voi sensor scheduling and query awareness, as well as concepts from the semantic communications literature, by considering a system with multiple queries arriving at different times, each of which requires different information on the state of the tracked process, represented by a different voi function. To the best of our knowledge, this is the first work to consider this complete system model, and a significant step forward towards full-fledged goal-oriented communications.
§ SYSTEM MODEL
We consider a system in which an edge node receives information from a set of N sensors, indexed by n∈{1,2,…,N}, and has to respond to queries from a set 𝒞 of remote clients, with |𝒞|=C. Time is divided into slots, denoted as t=0,1,2,…, and in each slot, the edge node can send a request to one sensor, and respond to queries from any number of clients.
In turn, the sensors observe a linear dynamic system, whose state is denoted as 𝐱(t)∈ℝ^M×1. The dimensionality of the process state is M, which can be different from the number of sensors N in the general case. The system evolves according to a (potentially time-varying) transition matrix 𝐀, with an overlaid error perturbation modeled as a multivariate Gaussian noise:
𝐱(t)=𝐀𝐱(t-1)+𝐯(t),
where 𝐯(t)∼𝒩(0_M,Σ_v). The noise 𝐯(t) is zero-mean, and its covariance matrix is Σ_v∈ℝ^M× M. The sensors then measure a vector 𝐲(t)∈ℝ^N×1, which represents a linear observation of the state of the system with an added Gaussian measurement noise. We define an observation matrix 𝐇∈ℝ^N× M, which defines the linear function of the state that each function observes:
𝐲(t)=𝐇𝐱(t)+𝐰(t),
where 𝐰(t)∼𝒩(0_N,Σ_w). The observation noise 𝐰(t) is also zero-mean, with a covariance matrix Σ_w∈ℝ^N× N. The main symbols in the model are listed in Table <ref>, along with their dimensionality and meaning.
§.§ Remote Kalman Tracking
As the edge node does not know the real state 𝐱(t) of the monitored process, it needs to estimate it. In this work, we use the well-known Kalman filter <cit.>, which is the optimal solution for linear dynamic systems. We assume that the edge node knows the matrices 𝐀, 𝐇, Σ_v, and Σ_w, and define the vector 𝐱̂(t)∈ℝ^M×1 as the best estimate of the state available to the edge node. The Kalman filter also outputs a covariance matrix ψ(t), which corresponds to the expected value 𝔼[(𝐱(t)-𝐱̂(t))(𝐱(t)-𝐱̂(t))^T].
We can then provide a recursive formula for updating the a priori estimate:
𝐱̂_pri(t)= 𝐀𝐱̂(t-1)
ψ_pri(t)= 𝐀ψ(t-1)𝐀^T+Σ_v,
where the _pri subscript in the estimates indicates that they are a priori.
As we stated above, the edge node can request the current value y_n(t) from one sensor per timeslot, whose index is denoted by a(t). We also consider communication errors, modeling the channel between sensor n and the edge node as a pec with error probability ϵ_n. Consider the row vector 𝐡(t)∈ℝ^1× M:
𝐡(t)=1_a(t)𝐇,
where 1_a(t) is the row vector of length N whose elements are all 0, except for the one with index a(t), which is equal to 1. We can then update the observation function in (<ref>), getting the value y_a(t)(t), which is the observation transmitted by the polled sensor a(t):
y_a(t)(t)=𝐡(t)𝐱(t)+1_a(t)𝐰(t)=1_a(t)𝐲(t).
We indicate the outcome of the transmission at time t by the Bernoulli random variable λ(t), which is equal to 1 if the transmission is successful and 0 otherwise. In the former case, the edge node receives observation y_a(t)(t), while in the latter, it receives no observation for this time step.
We can then give the Kalman gain vector 𝐤(t)∈ℝ^M×1 as:
𝐤(t)=ψ_pri(t)𝐡(t)^T(𝐡(t)ψ_pri(t)𝐡(t)^T+σ_w^2(t))^-1,
where σ_w^2(t)=1_a(t)Σ_w1_a(t)^T.
The update from the a priori estimate of the state to the a posteriori one is then given by:
𝐱̂(t)= 𝐱̂_pri(t)+λ(t)𝐤(t)(y_a(t)(t)-𝐡(t)𝐱̂_pri(t))
ψ(t)= (𝐈_M-λ(t)𝐤(t)𝐡(t))ψ_pri(t),
where 𝐈_M is the M× M identity matrix. We highlight that the a priori and a posteriori estimates are the same if λ(t)=0, i.e., if no observation is received by the edge node <cit.>, as the a priori estimate is the best estimate that the edge node can obtain with the information it has received.
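For illustration, one slot of this remote tracking procedure can be sketched in NumPy as follows; the variable names are illustrative, and the packet-loss outcome λ(t) is passed in as a Boolean flag.

```python
import numpy as np

def kalman_step(x_hat, psi, y_obs, a, received, A, H, Sigma_v, Sigma_w):
    """One slot of remote tracking: a priori prediction, then (if the polled
    sensor's packet arrives) the a posteriori correction for that single sensor."""
    # A priori prediction of the state estimate and its covariance.
    x_pri = A @ x_hat
    psi_pri = A @ psi @ A.T + Sigma_v

    if not received:                       # lambda(t) = 0: keep the a priori estimate
        return x_pri, psi_pri

    h = H[a:a + 1, :]                      # row of H selected by the polled sensor a(t)
    sigma_w2 = Sigma_w[a, a]               # observation noise variance of that sensor
    k = psi_pri @ h.T / (h @ psi_pri @ h.T + sigma_w2)   # Kalman gain (column vector)
    x_post = x_pri + (k * (y_obs - h @ x_pri)).ravel()
    psi_post = (np.eye(len(x_hat)) - k @ h) @ psi_pri
    return x_post, psi_post
```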
§.§ The Query Process
We consider a query to be a request for either the state 𝐱(t) itself, or the value of a function z(𝐱(t)) of it. The edge node receives queries and responds with an estimate ẑ(𝐱̂(t),ψ(t)) based on the current state of the Kalman filter.
The temporal element of the query process can be modeled as a Markov chain. We assume that each client c follows an independent Markov chain, whose state at time t is q_c(t)∈𝒬_c, with a known transition matrix 𝐓_c. Each client c always requests the same function z_c anytime its Markov chain is in a subset of states, which we denote as 𝒬̃_c. Naturally, the state of each client is unknown to the edge node, which can only know which clients are currently subscribed and when did they send their last query. In some cases (e.g., periodic queries), this information is sufficient to predict the next query perfectly, as we will discuss below, but in the general case, the information about the query process available to the edge node entails some randomness and uncertainty.
§.§ Responding To Queries
The objective of the edge node is to respond to queries as accurately as possible, i.e., to minimize the error of its responses. The mse for client c is defined as:
MSE_z_c=𝔼[(ẑ(𝐱̂(t),ψ(t))-z(𝐱(t)))^2].
The edge node can act in two ways to minimize the mse: the first is to optimally use its knowledge of the state by using an mmse estimator to obtain ẑ(𝐱̂(t),ψ(t)), and the second is to poll sensors according to the expected voi of their readings, i.e., schedule the sensor that can help the most in reducing the mse of the next queries. The former problem is relatively simple, while the latter will be tackled in Sec. <ref>.
We can give the definitions and mmse estimators for some of the most intuitive queries as follows:
* State: in this case, the request is for the direct value of 𝐱(t);
* Sample mean: in this case, the function that the client requests to the edge node is the sample average, represented by:
z_avg(𝐱(t))=1/M∑_m=1^M x^(m)(t),
where x^(m)(t) is the m-th element of 𝐱(t);
* Sample variance: in this case, the sample variance is computed as:
z_var(𝐱(t))=1/M-1∑_m=1^M(x^(m)(t)-z_avg(𝐱(t)))^2;
* Maximum: this query returns the maximum component of the state:
z_max(𝐱(t))=max_m∈{1,…,M}x^(m)(t);
* Count range: this function counts how many components of the state are inside a given interval [a,b]. The count range query is defined as:
z_cnt(𝐱(t))=∑_m=1^M 1(x^(m)(t)∈[a,b]),
where 1(·) is the indicator function, equal to 1 if the condition inside the parentheses is verified and 0 otherwise.
Note that the definition of the function z_cnt(·) should include the interval [a,b], but here we omit it for the sake of readability. Furthermore, a and b are assumed to be fixed for the same query process.
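For reference, these query functions are straightforward to evaluate on a given state vector; a small NumPy sketch follows, with the interval [a,b] of the count range query passed explicitly.

```python
import numpy as np

def z_avg(x):            # sample mean of the state components
    return float(np.mean(x))

def z_var(x):            # sample variance (unbiased, as in the definition above)
    return float(np.var(x, ddof=1))

def z_max(x):            # maximum component of the state
    return float(np.max(x))

def z_cnt(x, a, b):      # number of components falling inside [a, b]
    return int(np.sum((x >= a) & (x <= b)))
```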
The one-step optimal scheduler for the first three queries can be computed analytically by finding the choice that minimizes the mse of the estimate at the next step, and we derive the one-step optimal schedulers for several queries in our previous work <cit.>. However, the optimal scheduler, and even the mmse estimator, for more complex queries like the latter two, with highly non-linear functions, is hard to compute analytically.
In the case of order statistics, a closed-form mmse estimator might not even be achievable <cit.>, as the extreme values of high-dimensional multivariate Gaussian variables are computed only as limiting distributions in the relevant literature. In order to compute the mse of a given response to the count range query, we have to define the region 𝒵(m):
𝒵(m)={𝐱∈ℝ^M×1: z_cnt(𝐱)=m}.
We can then define the probability that z_cnt(𝐱(t)) is equal to m, corresponding to the integral of the multivariate Gaussian random variable 𝐱(t)∼𝒩(𝐱̂_t(t),ψ(t)) in 𝒵(m):
𝒫(z_cnt(𝐱(t))=m)=∫_𝒵(m)exp(-1/2(𝐱-𝐱̂(t))^Tψ^-1(t)(𝐱-𝐱̂(t)))/√((2π)^M|ψ(t)|) d𝐱.
The mmse estimator for the count range query is then given by:
ẑ_cnt(t)=∑_m=0^M m 𝒫(z_cnt(𝐱(t))=m).
The corresponding mse is defined as follows:
MSE_cnt(t)=∑_m=0^M (m-ẑ_cnt(t))^2𝒫(z_cnt(𝐱(t))=m).
The integral in (<ref>) can only be computed numerically, and is extremely hard to tabulate.
We can then consider the voi, which is used to evaluate the quality of a scheduler. If a query of type z_c arrives at step t, we define the value of the information available to sensor n as the expected reduction in the mse for that query with respect to the a priori estimate. The voi θ_c,n(t) is then given by:
θ_c,n(t)=(1-ε_n)𝔼[(ẑ_c(𝐱̂_pri(t),ψ_pri(t))-z_c(𝐱(t)))^2]
-(1-ε_n)𝔼[(ẑ_c(𝐱̂(t),ψ(t))-z_c(𝐱(t)))^2|a_t=n].
The one-step optimal voi scheduler for any query function can be easily approximated using Monte Carlo methods <cit.> by drawing samples from the a priori distribution. The detailed algorithm for Monte Carlo-based scheduling is given in our previous work <cit.>. While Monte Carlo estimates are not mmse, they approach the optimal estimator as the number of samples grows to infinity, at the cost of computational complexity. If we consider S samples, the complexity of one Monte Carlo estimate is O(SM^2).
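As an illustration of the Monte Carlo approach for the count range query, the response and its expected squared error can be approximated by sampling from the current Gaussian belief, roughly as follows; the same sampling can then be repeated with the hypothetical post-update covariance of each candidate sensor to estimate its voi.

```python
import numpy as np

def mc_count_range(x_hat, psi, a, b, n_samples=1000, rng=None):
    """Monte Carlo estimate of the count range response and its expected squared error,
    drawing state samples from the current belief N(x_hat, psi)."""
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.multivariate_normal(x_hat, psi, size=n_samples)   # S x M state draws
    counts = np.sum((samples >= a) & (samples <= b), axis=1)        # z_cnt for each draw
    z_hat = counts.mean()                                           # approximate MMSE response
    mse = np.mean((counts - z_hat) ** 2)                            # expected squared error
    return z_hat, mse
```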
§ THE SCHEDULING PROBLEM
In the previous section, we defined the system model and determined the optimal estimator for common query functions, along with a Monte Carlo strategy for general functions. However, the most complex problem is not to reply directly to a query, but to consider future queries in a foresighted manner, scheduling sensor transmissions so as to minimize the mse on future responses. This requires to consider not only the monitored system, but also the query process and the interplay between different query functions. For example, two clients which request the maximum and minimum will need very different parts of the state to be estimated accurately, and balancing between their needs will be complex. The polling decisions made by the edge node also affect the future state of the Kalman filter, requiring a dynamic strategy.
We can model the scheduling problem for the edge node as a pomdp, in which the edge node must decide which sensor to poll at each time slot. The action space is then simply 𝒜={1,…,N}, while the state space is more complex. The state of the Kalman filter just before the update, described by 𝐱̂_pri(t) and ψ_pri(t), is included in the state, as are the states of all the clients. The state space for a system with C clients is then 𝒮=ℝ^M^2+M×∏_c=1^C𝒬_c, as the state is given by the tuple s(t)=(𝐱̂_pri(t),ψ_pri(t),q_1(t),…,q_C(t)). However, the edge node does not know the state q_c(t) of each client, but only the time that has passed since the last query, which we define as τ_c(t)∈ℕ. We then have an observation tuple o(t)=(𝐱̂_pri(t),ψ_pri(t),τ_1(t),…,τ_C(t)), belonging to the observation space 𝒪=ℝ^M^2+M×ℕ^C. The matrices 𝐀, 𝐇, Σ_v, and Σ_w, as well as the error probability vector ϵ=[ϵ_n] and the query functions z_c, should also be known a priori to the edge node, but are not part of the state.
Note that the problem reduces to a fully observable mdp if the time since the last transmission is sufficient to determine the next query, i.e., if the following condition is true:
𝒫(q_c(t+1)∈𝒬̃_c|q_c(t)=q)=𝒫(q_c(t+1)∈𝒬̃_c|τ_c(t)),
∀ q∈𝒬_c,τ_c(t)∈ℕ.
Two special cases of this are the memoryless process, in which the Markov chain only has two states (query and no query), and the deterministic chain with |𝒬̃_c|=1, which leads to a periodic query process. In the general case, the state of the query process depends on external factors (e.g., a human operator), and is not directly knowable by the edge node: if a stochastic transition can lead to a state in which (<ref>) is not verified, the problem is partially observable.
The transition probability P(s,s'|a) from one state to the next for a given action is then determined by the Markov chains of each client, along with the Kalman filter equations in (<ref>) and (<ref>). The final parameter to define the pomdp is then the reward function r(t):
r(t)=-∑_c∈𝒞α_cMSE_z_c(t)1(q_c(t)∈𝒬̃_c),
where α_c>0 is a weight parameter representing the relative importance of each client, whose value is given by the system designers, and is thus known a priori by the edge node. The reward is always negative, as the objective is to minimize the error on all queries.
We then define a policy π:𝒪→Φ(𝒜), where Φ(𝒜) is a probability distribution over the action space 𝒜. In other words, the policy maps observed states to the probability of selecting each sensor. We can then define the long-term reward function R(π):
R(π)=𝔼[∑_t=0^∞γ^t r(t)| s_0,π],
where γ∈[0,1) is an exponential discount factor. The objective of the scheduling problem is then to find the optimal policy π^*, which maximizes the long-term reward:
π^*=argmax_π:𝒪→Φ(𝒜) R(π).
The case for γ=0 is a special case, in which future steps are never counted, and only performance in the next step matters: this case was solved analytically in our previous work <cit.>.
§.§ A Simple Example: The Effect of Queries on the Optimal Policy
We can first consider a simple example, in which a system with N=2 sensors observes a process with M=2 and needs to reply to a single client (i.e., C=1). The communication is assumed to be error-free, and each sensor m observes an independent binary Markov chain with state 𝐱(t)∈{0,1}^M ∀ t and 𝐇=𝐈_2. At each time step, the state changes with probability p_m and remains the same with probability 1-p_m, so that the transition matrix 𝐓_m is given by:
𝐓_m=[ 1-p_m p_m; p_m 1-p_m ].
We know that the observation of the state is error-free, so after Δ_m steps from the last observation o_m, the a posteriori state probability distribution of Markov chain m is given by:
P_m(Δ_m,o_m)=𝐓_m^Δ_m[ 1-o_m o_m ]^T.
We assume that a query arrives at every step from the client, but define two types of clients with different query functions: the first client asks for a maximum query, while the second asks for a count query, i.e., how many sensors measure a value of 1.
If at least one of the sensors has a value of 1, the value of the other sensor is useless for the maximum query; on the other hand, it is still relevant for the count query. We can compute the mmse response to each query:
ẑ_max(Δ,𝐨)= 1-P_1,0(Δ_1,o_1)P_2,0(Δ_2,o_2);
ẑ_cnt(Δ,𝐨)= P_1,1(Δ_1,o_1)+P_2,1(Δ_2,o_2),
where P_m,i(Δ_m,o_m) is the a posteriori probability that chain m will be in state i, given the latest observation and its age, and Δ and 𝐨 are the vectors of ages and observed values, respectively. We highlight that, aside from a few special cases, the query responses are not integers: the response to the maximum query is the probability that the maximum equals 1, and the response to the count query is the expected count, as these are the mmse estimators for (sums of) Bernoulli random variables. We can also compute the mse for both queries:
MSE_max(Δ,𝐨)= P_1,0(Δ_1,o_1)P_2,0(Δ_2,o_2)
×(1-P_1,0(Δ_1,o_1)P_2,0(Δ_2,o_2));
MSE_cnt(Δ,𝐨)= P_1,0(Δ_1,o_1)+P_2,0(Δ_2,o_2)
-(P_1,0(Δ_1,o_1))^2-(P_2,0(Δ_2,o_2))^2.
We can note that, if we observe one of the two states and o=1, the response to the maximum query is always correct, as the probability of that component being equal to 0 is 0. In order to maximize the long-term reward from (<ref>), we can adopt the classical policy iteration method, as described in <cit.> after truncating the pomdp by setting a maximum age Δ_max. While policy iteration is not directly applicable to pomdp, we can recast the problem as a fully observable mdp whose states fully describe the history of observations <cit.>. In our case, the time since the last observation of each state variable and the values of those observations are enough to enjoy this property. The expanded state is then defined as s=(Δ,𝐨)∈𝒮, over which we can apply policy iteration. The transitions from one state to the other are extremely simple, and we can easily derive the transition probability 𝒫(s(t+1)|s(t),π(s(t))).
Policy iteration has two steps, called evaluation and improvement. The algorithm is initialized with an approximate value V_0(s) for each state and a policy π_0, which can be set as all zeros. It then repeats the two steps iteratively until the policy converges. In the first step at iteration t, the value function V_t(s) is updated as follows:
V_t+1(s)=-MSE(s,π_t(s))+γ∑_s'∈𝒮𝒫(s'|s,π_t(s))V_t(s').
Naturally, the definition of the mse depends on the type of query. After the value has been updated for all states, the policy is updated:
π_t+1(s)=argmax_a∈{1,2}[ -MSE(s,a)+γ∑_s'∈𝒮𝒫(s'|s,a)V_t+1(s')].
Policy iteration is guaranteed to converge to the optimal policy in finite-state mdp with finite reward <cit.>. We can also note that the formulation corresponds to maximizing the voi θ_c,m(t) for the selected query, as defined in (<ref>).
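A compact sketch of this procedure for the maximum query is given below, using the expanded state s = (Δ_1, Δ_2, o_1, o_2). The age truncation, the number of sweeps, and the assumption that the polled value is revealed before the query is answered in the same slot are simplifications for illustration.

```python
import numpy as np
from itertools import product

p = [0.1, 0.2]                 # state-change probabilities of the two binary chains
gamma, d_max = 0.9, 20         # discount factor and age truncation

# post[m][d][o] = P(x_m = 1 | last observation o of chain m was taken d steps ago)
post = [[[np.linalg.matrix_power(np.array([[1 - p[m], p[m]], [p[m], 1 - p[m]]]), d)[o, 1]
          for o in (0, 1)] for d in range(d_max + 1)] for m in (0, 1)]

def step(s, a):
    """Expected (probability, reward, next state) outcomes of polling sensor a in s = (d1, d2, o1, o2)."""
    d, o, other = (s[0], s[1]), (s[2], s[3]), 1 - a
    p_a1 = post[a][d[a]][o[a]]                     # prob. that the polled sensor currently reads 1
    q = post[other][d[other]][o[other]]            # belief on the other (unpolled) component
    outcomes = []
    for val, prob in ((1, p_a1), (0, 1.0 - p_a1)):
        mse = 0.0 if val == 1 else q * (1.0 - q)   # a polled 1 makes the maximum certain
        nxt = [1, 1, o[0], o[1]]                   # the polled observation is fresh next slot
        nxt[other] = min(d[other] + 1, d_max)      # the unpolled observation ages by one slot
        nxt[2 + a] = val
        outcomes.append((prob, -mse, tuple(nxt)))
    return outcomes

states = list(product(range(1, d_max + 1), range(1, d_max + 1), (0, 1), (0, 1)))
V, pi = {s: 0.0 for s in states}, {s: 0 for s in states}
for _ in range(200):                               # alternate one evaluation and one improvement sweep
    for s in states:
        V[s] = sum(pr * (r + gamma * V[sp]) for pr, r, sp in step(s, pi[s]))
    for s in states:
        pi[s] = max((0, 1), key=lambda a, s=s: sum(pr * (r + gamma * V[sp]) for pr, r, sp in step(s, a)))
```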
The results for p_1=0.1 and p_2=0.2 are given in Fig. <ref>. As Figs. <ref>-<ref> show, the policy is the same for any observation, and only depends on the age of the two measurements, since the mse of the count query is the same for any observation. The level of uncertainty determines the action: the second component of the state, which can vary more often due to the higher state change probability, is the one that is polled, unless the age of the latest observation of the other component is approximately double. The maximum query has a more complex policy: if the last observations of each component are the same, i.e., 𝐨=(0,0) or 𝐨=(1,1), the policy is the same as for the count range query. On the other hand, if one of the last observations is 1, while the other is 0, the component with value 1 is always polled. This is reasonable, as one observation from a sensor that contains a 1 gives perfect certainty on the overall maximum. Giving a higher priority to the component with the highest probability of being equal to 1 is then beneficial, even if the uncertainty on the other component becomes extremely high.
Naturally, this is only a simple example, and introducing a query process will complicate the system, but it highlights the strong dependence between the function that determines each query and the respective polling policy. While the optimal strategy to minimize the aoi would always poll the sensor with the highest age, and a strategy that minimizes the uncertainty of the count range query (which, in this simple system, is almost equivalent to minimizing the mse) weighs each sensor's age by the speed of the corresponding process, the strategy for the maximum query actually depends on the current value of each sensor, and is starkly different from the others. Mixing different types of query in the same system will then lead to non-trivial trade-offs, particularly when the functions are highly non-linear.
§.§ Reinforcement Learning Solution and Learning Architecture
While policy iteration has strong convergence guarantees, it is infeasible to use when the state space is large, which is the case for the considered scheduling problem. Instead, we resort to approximate solutions, and consider a rl approach to the scheduling problem. rl is a machine learning approach in which an agent learns from experience, updating its estimate of the value function by trial and error. The agent makes decisions and receives immediate rewards from the environment, without any prior knowledge of the reward function or the consequences of actions. For a more thorough introduction to reinforcement learning, we refer the reader to <cit.>. While rl is not directly applicable to partially observable scenarios, the observation space in the full problem is a sufficient statistic to respect the Markov property: as the Kalman filter is the optimal estimator for linear systems, and all the historic information of past observations is available in the state and covariance estimates, the system is equivalent to a fully observable mdp and we can directly apply standard solutions such as rl <cit.>.
We implement the dqn architecture <cit.>, which uses a deep neural network to approximate the value function. In order to avoid instability, we need to use a replay memory to store the agent experience and select batches of uncorrelated samples. Each batch contains B uncorrelated samples, and each experience sample is a tuple e=(s(t),a(t),r(t),s(t+1)). We maintain two neural networks for increased stability: a target network and an update network. In order to estimate the long-term reward R(π) from an experience sample, we use the target network's prediction Q_t(s,a):
Q(e)=r(t)+γmax_a∈𝒜Q_t(s(t+1),a).
The use of the long-term reward estimates to update future estimates follows the well-established bootstrap method, and the use of a greedy update policy follows the Q-learning model implemented by the dqn. The estimates Q(e) are then used as labels for the backpropagation operation on the update network, whose output predictions are used in the action policy to select the next action. The action policy we use implements the well-known softmax function:
π(s,a)=e^Q_u(s,a)/τ/∑_a'∈𝒜e^Q_u(s,a')/τ,
where the temperature parameter τ is to balance between exploration and exploitation. Lower values of τ make the outcome closer to the greedy policy, as the probability of selecting suboptimal actions decreases, while higher values of τ increase exploration. In any case, exploration with the softmax function is directed: actions that are assumed to be highly suboptimal will be picked less frequently, while the agent will prefer actions that have an estimated long-term reward just below the maximum.
The update network is updated at every step with a new batch of samples, while the target network is only updated every U steps by copying the update network's weights. As we stated above, the use of separate target and update networks allows the system to converge, avoiding numerical and stability issues. In the rest of this work, we implement a dqn with 3 layers, whose parameters are given in Table <ref>. The first two layers have a dropout probability p_d=0.1 during the training, and the network is relatively simple, as the input is highly redundant. The hyperparameters above were found after a grid search optimization process.
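A PyTorch sketch of the value network and the softmax action policy is given below; the hidden-layer widths, dropout, and temperature are placeholders rather than the tuned hyperparameters of Table <ref>.

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Three-layer value network mapping the observed state to one Q-value per sensor."""
    def __init__(self, obs_dim, n_sensors, hidden=(100, 50), p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden[0]), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden[1], n_sensors))

    def forward(self, obs):
        return self.net(obs)

def softmax_policy(q_values, temperature=0.5):
    """Sample the next sensor to poll according to the softmax of the Q-values."""
    probs = torch.softmax(q_values / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

# Observed state: (x_hat, flattened psi, time since the last query of each client).
M, C, N = 20, 2, 20
net = DQN(obs_dim=M * M + M + C, n_sensors=N)
obs = torch.zeros(M * M + M + C)
action = softmax_policy(net(obs))
```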
§.§ Computational Complexity
We can now discuss the computational complexity of the learning solution. The following refers to the complexity of a trained model, i.e., of a single decision on the next action: while training can be performed offline in a simulation environment or even passively on existing data, actions need to be real-time for the system to work, and the time to make decisions is critical.
If we consider a single layer with ℓ_i inputs and ℓ_o outputs, there are three operations that the network needs to perform to compute each output:
* Multiply each input ℓ_i by the appropriate weight (equivalent to ℓ_i multiplication operations);
* Sum all the results (equivalent to ℓ_i sums);
* Apply the non-linear activation function.
If we consider the activation function as the result of k basic operations, the total number of basic operations for a single layer is then ℓ_o(2ℓ_i+k). If we consider our architecture as a vector ℓ of layer sizes, where the first element is the cardinality of the input (i.e., the observed state) and the last element is N (corresponding to the N possible actions), the total complexity is:
𝒞_f(ℓ)=∑_i=1^|ℓ|-1ℓ_i+1(2ℓ_i+k).
We remark that 𝒞_f(ℓ) is the total number of basic operations over all layers, and as such, it corresponds to the cost of a single scheduling decision. If we consider our architecture for C=2 and N=20, which is given above, we have k=1, as the relu activation function is extremely simple, and the total number of operations for each step is then 𝒞_f=96 570.
The backpropagation algorithm required to train the neural network has the same complexity as the forward pass, but it must be run for each sample in a training batch <cit.>. For a training batch size of B, the total complexity for a single training step is then given by:
𝒞_b(ℓ)=B𝒞_f(ℓ).
In our architecture, we have B=128, and consequently, 𝒞_b=12 360 960. This number of operations should be entirely within the capabilities of an edge node, as even simple embedded processors can deal with much more complex architectures that require billions of operations in less than 100 ms <cit.>. As most of the required calculations in training and evaluation are vector operations, each layer might only require a single clock tick on modern processors, particularly when the processor is a GPU or designed for hardware-assisted learning.
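As a worked check of these counts, the helper below evaluates 𝒞_f for a layer-size vector ℓ. The hidden-layer sizes used in the example are an assumption (input width M^2+M+C=422 for M=N=20 and C=2, output width N=20), chosen because they reproduce the totals reported above.

```python
def forward_ops(layers, k=1):
    """Basic operations of one forward pass: sum over layers of l_out * (2 * l_in + k)."""
    return sum(l_out * (2 * l_in + k) for l_in, l_out in zip(layers[:-1], layers[1:]))

layers = [422, 100, 50, 20]        # assumed layer sizes: M*M + M + C inputs, N outputs
C_f = forward_ops(layers)          # -> 96570 basic operations per decision
C_b = 128 * C_f                    # -> 12360960 per training step with batch size B = 128
```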
§ SIMULATION SETTINGS AND RESULTS
The performance of the rl-based query-aware scheme is verified by Monte Carlo simulation, considering a specific scenario. Its performance is measured in terms of the mse on its query responses. The evaluation is performed over E_test=10 independent episodes, each of which consists of T_max=100 time steps. The parameters of the dqn agent are the same for all considered scenarios, and are given in Table <ref>.
§.§ Scenario and Benchmark Policies
We consider a system with N=20 sensors, each observing a different component of the state 𝐱(t), so that M=N=20 and 𝐇=𝐈.
The dynamic system that the edge node observes is defined as follows:
𝐀^(i,j)=3/4, if i=j;
-1/8, if i≠ j∧mod(i-2j,7)=6.
The edge node knows 𝐀, as well as the process and measurement noise covariance matrices, which are given by:
Σ_v^(i,j) =11+mod(i-1,10)/5, if i=j;
1, if i≠ j, mod(i-j,6)=0,
Σ_w =𝐈.
The error probability ε_n for each sensor is given by:
ε_n=0.02⌈n/10⌉.
We consider a case with C=2 clients with the same importance, i.e., α_1=α_2=1. Client 1 requests a count range query with interval [-5,0], while client 2 makes a maximum query. The two query types are described in more detail in Sec. <ref>, and we also refer the reader to our previous work <cit.> for a deeper discussion on the derivation of mmse estimators for specific queries.
It is possible to add more clients with other queries and varying importance. This adds one parameter (i.e., the time since the last query from that client) to the dqn, but the problem is not guaranteed to scale: the added complexity necessarily makes the training longer, requiring an adjustment to the exploration and learning profiles as well. In this work, we also include the results for a scenario with 4 clients, including an average and a state query as well.
We consider 5 different benchmarks for the query-aware policy:
* maf: The maf policy, which minimizes the average aoi of the system regardless of the value of sensors' readings. This legacy approach represents a value-neutral lower bound, as it aims at minimizing the aoi for all sensors regardless of the relevance of their data or their expected effect on the accuracy of the state estimate;
* Cnt: The one-step optimal policy for client 1, which follows the procedure from <cit.> to minimize the mse of the count range query in the current step;
* Max: The one-step optimal policy for client 2, which does the same for the maximum query;
* rl (Cnt): The foresighted policy learned by a rl agent with α_2=0, which only minimizes the mse of the response to the count range query;
* rl (Max): The foresighted policy with α_1=0, which does the same for the maximum query.
We also consider two different query processes, both of which are observable by the edge node, slightly simplifying the problem:
* Periodic queries: queries are generated every T_q=6 steps. In this case, the Markov chain is deterministic, going from state 0 (in which a query is generated) to state 1 with probability 1, then increasing until 5, after which the chain goes back to 0 with probability 1;
* Memoryless queries: in this case, the Markov chain only has 2 states, and the rows of the transition matrix are identical. The time between two subsequent queries is geometrically distributed, with an expected value 𝔼[T_q]=6 steps.
We consider three combinations of these query processes: the case in which both clients have periodic queries, the case in which they both have memoryless queries, and the mixed case in which client 1 has periodic queries, while client 2 follows a memoryless process.
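For reference, the two query processes can be generated as follows; both have a mean inter-query time of 6 slots, and the offset of the periodic process is illustrative.

```python
import numpy as np

def periodic_queries(horizon, period=6, offset=0):
    """Deterministic chain: a query is generated every `period` slots."""
    return [(t - offset) % period == 0 for t in range(horizon)]

def memoryless_queries(horizon, p=1 / 6, rng=None):
    """Two-state memoryless chain: a query arrives in each slot independently with prob. p."""
    rng = np.random.default_rng() if rng is None else rng
    return list(rng.random(horizon) < p)
```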
§.§ Periodic Query Scenario
In this subsection, we show results for the case where both queries arrive periodically, each with a period of 6 steps. The two queries are out of sync, with the maximum query starting at 0 and the count range query starting at time 2.
This is the easiest case for the edge node, as queries are generated deterministically and the optimal policy can act on deterministic knowledge of the query pattern. Fig. <ref> shows a boxplot of the mse for both types of queries, as well as the overall cost, which is defined as the opposite of the average reward as given in (<ref>). In other words, the overall cost is a weighted sum of the mse for all queries, using the weight vector α. In our simulations, queries all have the same period and weight, so the overall cost corresponds to the average of the mse over all queries. We can note that the rl policy considering both types of queries obtains a lower cost than all others (as shown in the group on the left of the figure), with a much lower average and only slightly worse performance at the 95th percentile than an aoi-oriented approach. In particular, the choice made by the rl policy is to privilege the maximum query, with results that end up being similar to only optimizing for it. All other approaches tend to reduce the mse of the count range query more, although they end up having a higher error on the maximum query. The count range query is penalized by the fact that it arrives only 2 slots after the maximum query: reducing its mse would require losing accuracy in the response to the maximum query, increasing the overall cost. On the other hand, the 4 slots between a count range query and the subsequent maximum query allow the rl policy to improve the accuracy significantly. The effect of the discount factor γ is also important: since a count range query arrives 2 slots after the maximum query before it, and γ=0.9, its mse only accounts for 81% of the reward for the steps before the maximum query. A higher value of γ, or a different weighting of the two query types by adapting α_1, would produce a more balanced outcome.
We can also note that, in this case, the other rl-based policies outperform their greedy versions on the metric that they optimize for, but no such pattern exists for the other type of query, which these policies completely disregard. As noted in previous works on voi, the aoi-based approach taken by the maf policy provides a middle ground for performance, never failing too badly by polling all sensors equally. The choices made by the various policies can be analyzed more in depth by considering the distribution of the sensors that are polled. Fig. <ref> shows two colormaps for each policy, in which the y-axis represents the step in each query period, i.e., the index of the slot modulus 6. As a reminder, the maximum query is generated at t_q=0 and the count range query is generated at t_q=2. The two colormaps differ by the value represented on the x-axis: in the first one, the x-axis represents the value x^(m)(t) measured by the chosen sensor, while in the second, the value is simply the index of the sensor. The color of each cell represents the empirical probability of each combination in our test episodes.
We can first look at the maf policy, in Fig. <ref>: the distribution of values is almost symmetrical, and values between -5 and 5 are polled with approximately the same frequency. On the other hand, the index colormap shows a checkerboard pattern, caused by the round robin-like pattern of updates (which is shifted by 2 steps at every cycle, as N is not a multiple of the query period). The rl policy has a different pattern: we can note that in even time slots, corresponding to the query instant, the distribution of values is bimodal: sensors whose value is close to 0 are polled very often, as are sensors whose value is very high, between 7 and 12. These two peaks correspond to the two queries: values close to 0 are at the edge of the interval that is relevant for the count range query, while very high values are obviously interesting for the maximum query. The indexes of the sensors that are polled are also much more concentrated: sensors 1, 6, 8, and 17 are polled extremely often, while other sensors are rarely polled: this is due to the nature of the problem, as some sensors are more valuable to answer the queries due to the evolution of the state.
The two single-query rl policies, whose choices are represented in Figs. <ref>-<ref>, can further shed light on the behavior of the joint policy: we can easily see that some of the nodes that are often polled by rl are also polled by its Max and Cnt versions, and that the two peaks in the distribution are close to a superposition of the two peaks of rl (Max) and rl (Cnt). Finally, we can note that the one-step greedy policies, shown in Figs. <ref>-<ref>, do not have any dependence on the time step, as they are unaware of the query process: the basic features, such as the Cnt policy choosing values clustered around 0 and the Max policy choosing values on the highest end of the range, are maintained, but the policies are inherently noisier than their rl-based versions, which can exploit their knowledge of the query process to improve performance at the right moment.
The two plots in Fig. <ref> can further clarify the difference between simple aoi minimization, voi minimization, and query-aware voi. Fig. <ref> shows the average aoi for each sensor for different policies. Naturally, the maf policy maintains the minimum aoi, with an average only slightly over 10 for all sensors. As the policy tries to minimize the age for all sensors, the average is very similar across all sensors, although not identical (sensors with a higher packet error rate will be polled slightly more often as the poll is repeated after each packet loss). All other policies have a higher aoi for some sensors, polling them less often as they have less useful information, and the rl-based ones have the highest difference, with some sensors being polled extremely often and others almost never: as information useful for the queries can be reconstructed from the correlation between different sensors and the model of the dynamic system, the rl policies rarely poll sensors whose values are less useful or informative for the specific queries they are trained for. The plot in Fig. <ref>, showing boxplots of the mse on the state estimation for each policy clearly shows this: the rl-based policies actually have a higher mse than simple maf, as parts of the state are disregarded, but make a significantly smaller error when replying to queries, as the relevant information for the clients is given more importance in the scheduling.
Finally, we consider a more complex scenario, with 4 clients asking periodic queries with a period of 12 steps. Along with the maximum and count range queries, the two additional clients make state and sample mean queries, as defined in Sec. <ref>. In this case, we also consider rl-based and one-step optimal strategies aimed at these queries, and the overall performance in terms of the reward and the specific queries (all of which have the same importance) is shown in Fig. <ref>. We note that the average query is easy to respond to, as errors in opposite directions over different components of the state tend to compensate: the rl solution outperforms all others in terms of the overall cost (i.e., the mse over all queries made by clients), as while each query-specific strategy performs best on its own objective function, the rl policy manages to balance different queries, achieving a low error on all of them. We also highlight that legacy voi optimization, even when foresighted (i.e., the rl (mse) strategy in the figure), cannot effectively deal with maximum or count range queries, which depend only on specific parts of the state, as it does not take into account this importance, but statically aims at minimizing the error over all state components indiscriminately.
§.§ Geometric Query Arrival
We can consider a second scenario, in which at each step a query of either type is generated with probability 1/6. The average frequency of queries is the same as for the previous scenario, but instead of a deterministic, periodic sequence, queries follow a memoryless random process with geometrically distributed inter-query times. We remark that queries of both types may arrive to the edge node at the same time, and that in this case, no knowledge is available at the edge node: as the query process is memoryless, knowing the arrival times of past queries provides no information on future query arrivals. In this case, the advantage of a query-aware system is naturally diminished. The time since the last query of each type is still maintained as part of the input to the rl algorithm, so as to maintain the same architecture for all cases, but in this case, the rl algorithm needs to learn that this information is useless. This case also required more training than the other scenarios we considered.
Fig. <ref> shows the performance boxplots for this scenario: in this case, performance is almost uniform, and all policies have a similar overall cost. The rl policy still has a small gain in terms of the overall cost, but it performs worse than the greedy Max policy on the maximum query. Performance on the count range query is almost uniformly good, and all differences between the policies are on the worst-case performance of the maximum query.
§.§ Mixed Query Arrival
Finally, we consider a third scenario: in this case, the maximum query follows a memoryless process with a probability 1/6 of generating a query at each time step, while count range queries are periodically generated every 6 slots. In this case, the policy needs to adapt to the possibility of a maximum query arriving, while also preparing for the foreseen count range queries.
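For concreteness, the mixed arrival process can be generated as in the following sketch (illustrative Python, not part of the evaluation code used here; function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_query_arrivals(num_steps, p_max=1/6, count_period=6):
    """Generate the mixed scenario: memoryless (geometric) maximum queries
    and periodic count range queries, as described in the text."""
    arrivals = []
    for t in range(num_steps):
        step = []
        if rng.random() < p_max:        # memoryless arrival of a maximum query
            step.append("max")
        if t % count_period == 0:       # deterministic count range query every 6 slots
            step.append("count_range")
        arrivals.append(step)
    return arrivals

# Example: arrivals over the first 12 steps
print(mixed_query_arrivals(12))
```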
The performance of the considered policies is shown in the form of boxplots in Fig. <ref>, as for the previous cases. The figure clearly shows that this case is much more complex, and the rl policy does not manage to outperform the strategies that are oriented exclusively toward the maximum query. Since the maximum query is entirely unpredictable, the full rl policy would need more training to deal with this scenario: the simpler strategy learned by the rl (Max) scheme turns out to be better on average, while rl (Cnt) performs about as well as rl. In most cases, the error on the count range queries tends to be higher for all policies. We note, however, that the rl strategy still outperforms all others in terms of overall worst-case performance, as the 95th percentile whisker is particularly low for the count query. A better strategy could be learned with more training, and we note that the complexity of the scenario has a significant impact on the amount of training required, with mixed scenarios with deterministic and stochastic query processes being the most difficult.
By knowing the instants in which count range queries will arrive, the rl strategies can limit the worst-case error, although this comes at the cost of a slightly higher worst-case error on the maximum query (which is hard to optimize for, as its arrival process is completely unpredictable). In this case, as in the geometric query arrival scenario, the one-step greedy policy for the maximum query is actually performing almost as well as the rl version, as there is no long-term information to be learned on the query process. As for qaoi, awareness of the query process is more useful if the latter is deterministic, or at least partially predictable.
§ CONCLUSIONS AND FUTURE WORK
In this work, we have presented a framework for query-aware sensor scheduling, in which an edge node needs to choose the most relevant information to respond to external user queries, which may be different functions of the system state. This type of scenario is closely linked to semantic and task-oriented communications in the iot, approaching the problem from a different angle: in our system, communications are pull-based, and the bottleneck of the system is medium access rather than rate, so that the solution is semantic, voi-based scheduling rather than encoding. Our work shows that query-aware scheduling can lead to profoundly different choices, depending on the specific functions that queries ask for and on the query arrival process for each client, and that rl-based strategies can provide a significant advantage in more predictable scenarios, while unpredictable query processes do not provide any useful information to improve scheduling past one-step greedy strategies.
There are several open avenues of research to extend this work, both on the scheduling itself and on the process estimation. Firstly, scheduling is currently limited to polling a single sensor at a time, and communication is entirely pull-based: a scenario in which multiple sensors can be polled at once, or sensors can transmit urgent information without being polled first, can make scheduling strategies more interesting. Furthermore, extending the problem from simple numeric values to richer types of information such as images or point clouds could prove useful to several applications, such as cooperative driving or robot swarm management, which require the integration of data-heavy information from multiple sources. This also leads to the second line of future work that we are exploring, i.e., the substitution of the Kalman filter with more complex estimators, such as deep networks, which can deal with much more complex functions and system models, and do not require prior knowledge of the system dynamics. Finally, the combination of a control system with the remote estimation would represent another step forward toward a fully task-oriented communication system.
Josefine Holm (S'19) received her B.Sc and M.Sc. degrees in mathematical engineering from Aalborg University in 2016 and 2018, respectively. She recently obtained her Ph.D. degree at the Connectivity Section at Aalborg University. Her research interests include wireless communication and IoT networks.
Federico Chiariotti (S'15–M'19) is currently an assistant professor at the Department of Information Engineering, University of Padova, Italy, where he also received his Ph.D. in 2019. Between 2020 and 2022, he worked as a post-doctoral researcher and as an assistant professor at the Department of Electronic Systems, Aalborg University, Denmark. He has authored over 60 published papers on semantic communication, Age of Information, Smart Cities, and transport layer protocols. He was a recipient of the Best Paper Award at several conferences, including the IEEE INFOCOM 2020 WCNEE Workshop. His current research interests include network applications of machine learning, transport layer protocols, Smart Cities, bike sharing system optimization, and adaptive video streaming. He is currently an Associate Editor of the IEEE Networking Letters.
Anders E. Kalør (S'17–M'22) received the B.Sc. and M.Sc. degrees in computer engineering in 2015 and 2017, respectively, and the Ph.D. degree in wireless communications in 2022, all from Aalborg University. He is currently a postdoctoral researcher at The University of Hong Kong, supported by an individual International Postdoc grant from the Independent Research Fund Denmark. Concurrently, he is affiliated with the Connectivity section at Aalborg University, Denmark. In 2017, he was a visiting researcher at Bosch, Germany, and in 2020 at King's College London, UK. He was awarded the Spar Nord Foundation Research Award for his Ph.D. project (2023). His current research interests include communication theory and the intersection between wireless communications, machine learning and data mining for IoT.
Beatriz Soret (M'11–SM'21) received her M.Sc. and Ph.D. degrees in Telecommunications from the University of Malaga, Spain, in 2002 and 2010, respectively. She is currently a Senior Research Fellow at the Telecommunications Research Institute, University of Malaga, and a part-time Associate Professor at Aalborg University. She has also held industrial positions in Nokia Bell Labs and GomSpace. She received a best paper award in IEEE Globecom 2013 and a Beatriz Galindo senior grant in Spain in 2020. Her current research interests include semantic communications and AoI, LEO satellite communications, and intelligent IoT environments.
Torben Bach Pedersen is a professor with the Center for Data-Intensive Systems (Daisy), Aalborg University, Denmark. His research concerns Predictive, Prescriptive, and Extreme-Scale Data Analytics with Digital Energy as the main application area. He is an ACM Distinguished Scientist, an IEEE Computer Society Distinguished Contributor, a member of the Danish Academy of Technical Sciences, and holds an honorary doctorate from TU Dresden.
Petar Popovski (S'97–A'98–M'04–SM'10–F'16) is a Professor at Aalborg University, where he heads the section on Connectivity and a Visiting Excellence Chair at the University of Bremen. He received his Dipl.-Ing and M. Sc. degrees in communication engineering from the University of Sts. Cyril and Methodius in Skopje and the Ph.D. degree from Aalborg University in 2005. He is a Fellow of the IEEE. He received an ERC Consolidator Grant (2015), the Danish Elite Researcher award (2016), IEEE Fred W. Ellersick prize (2016), IEEE Stephen O. Rice prize (2018), Technical Achievement Award from the IEEE Technical Committee on Smart Grid Communications (2019), the Danish Telecommunication Prize (2020) and Villum Investigator Grant (2021). He is a Member at Large at the Board of Governors in IEEE Communication Society, Vice-Chair of the IEEE Communication Theory Technical Committee and IEEE TRANSACTIONS ON GREEN COMMUNICATIONS AND NETWORKING. He is currently an Editor-in-Chief of the IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS. Prof. Popovski was the General Chair for IEEE SmartGridComm 2018 and IEEE Communication Theory Workshop 2019. His research interests are in the area of wireless communication and communication theory. He authored the book “Wireless Connectivity: An Intuitive and Fundamental Guide”, published by Wiley in 2020.
|
http://arxiv.org/abs/2306.09852v1
|
20230616140616
|
Actor-Critic Model Predictive Control
|
[
"Angel Romero",
"Yunlong Song",
"Davide Scaramuzza"
] |
cs.RO
|
[
"cs.RO"
] |
Actor-Critic Model Predictive Control
Angel Romero Yunlong Song Davide Scaramuzza
=====================================
Despite its success, Model Predictive Control (MPC) often requires intensive task-specific engineering and tuning.
On the other hand, Reinforcement Learning (RL) architectures minimize this effort, but need extensive data collection and lack interpretability and safety.
An open research question is how to combine the advantages of RL and MPC to exploit the best of both worlds.
This paper introduces a novel modular RL architecture that bridges these two approaches.
By placing a differentiable MPC in the heart of an actor-critic RL agent, the proposed system enables short-term predictions and optimization of actions based on system dynamics, while retaining the end-to-end training benefits and exploratory behavior of an RL agent.
The proposed approach effectively handles two different time-horizon scales: short-term decisions managed by the actor MPC and long-term ones managed by the critic network. This provides a promising direction for RL, which combines the advantages of model-based and end-to-end learning methods.
We validate the approach in simulated and real-world experiments on a quadcopter platform performing different high-level tasks, and show that the proposed method can learn complex behaviours end-to-end while retaining the properties of an MPC.
§ INTRODUCTION
The animal brain's exceptional ability to quickly learn and adjust to complex behaviors stands out as one of its most remarkable traits, which remains largely unattained by robotic systems.
This has often been attributed to the brain's ability to make both immediate and long-term predictions about the consequences of its actions, and plan accordingly <cit.>.
In the field of robotics and control theory, model-based control has demonstrated a wide array of tasks with commendable reliability <cit.>.
In particular, Model Predictive Control (MPC) has achieved notable success across various domains, including the operation of industrial chemical plants <cit.>, control of legged robots <cit.>, and agile flight with drones <cit.>. The efficacy of MPC can be attributed to its inherent ability to take actions that optimize for the future states of a system within a defined short time horizon.
However, when tasks become exceedingly complex, model-based approaches need considerable effort in hand-engineering the method for each task. This includes hand-crafting the cost function, tuning the hyperparameters, or designing a suitable planning strategy <cit.>. Furthermore, the modular nature of model-based approaches can lead to the accumulation of errors in a cascading fashion <cit.>.
More recently, end-to-end learning architectures – specifically, Reinforcement Learning (RL) methods – have gained considerable traction due to their ability to alleviate some of these issues <cit.>.
By condensing the engineering effort into reward design, these architectures reduce the per-task design complexity inherent in model-based approaches.
Actor-critic methods such as TRPO <cit.> and PPO <cit.> have particularly excelled in this domain <cit.>, whereby the agent simultaneously learns an actor policy - a mapping from observation to action - and a critic that aids the learning process by estimating the value function.
This value function serves as a measure of the expected rewards to be gathered in a given state, thereby representing the long-term potential of being in a predefined state.
However, these architectures are not without their own set of challenges <cit.>. Training end-to-end without incorporating and leveraging prior knowledge, such as physics or dynamic models, results in the need to learn everything from data.
While the end-to-end paradigm is attractive, it demands a substantial amount of data and often lacks in terms of interpretability and safety.
This has resulted in hesitancy in applying end-to-end learned architectures to safety-critical applications, particularly in cases involving robots that interact with or navigate around humans.
In this paper, we propose a novel approach that addresses these issues by introducing a hybrid actor-critic reinforcement learning architecture.
This architecture equips the agent with a differentiable MPC <cit.>, located at the last layer of the actor network, as shown in Fig. <ref>, that predicts and optimizes the short-term consequences of its actions.
The differentiable MPC module, which incorporates a model of the dynamics of the system, provides the agent with prior knowledge even before any training data is received.
Moreover, the direct command outputs of the MPC to the environment enhance the safety of the system.
The second component of our actor is the cost map – a deep neural network that encapsulates the dependencies between observations and the cost function of the MPC.
In other words, while the MPC captures temporal variations inside its horizon, the neural cost module attends to dependencies in relation to the observations.
This architecture thereby incorporates two different time horizon scales: the MPC drives the short-term actions while the critic network manages the long-term ones.
We demonstrate that the approach can learn complex behaviours – agile flight – for a highly non-linear system – a quadrotor, validated in both simulation and in real-world deployment.
§ RELATED WORK
Several methods have been developed to learn cost functions or dynamic models for MPC <cit.>. For example, in <cit.>, a policy search strategy is adopted that allows for finding cost function parameters for complex agile flight tasks.
However these approaches do not exploit the gradient through the optimization problem, thus not allowing the learned modules to take full advantage of the prior knowledge embedded in the MPC.
Recently several approaches that allow for the inclusion of optimization problems in learning pipelines have bloomed <cit.>.
In particular, the authors in <cit.> were able to recover the tuning parameters of an MPC via imitation learning for non-linear, low-dimensional dynamics – such as cartpole and inverted pendulum – by backpropagating through the MPC itself.
This was enabled by analytically differentiating through the fixed point of a nonlinear iLQR solver <cit.>.
Later, in <cit.>, the authors augment the cost function of a nominal MPC with a learned cost that uses the gradient through the optimizer for the task of navigating around humans.
To train this learned cost, they also learn from demonstrations.
At the same time, works that aim to make reinforcement learning architectures safer by using MPC have also been on the rise <cit.>.
In <cit.>, a predictive safety filter receives the proposed, end-to-end RL control input and decides if it can be safely applied to the real system, or if it has to be modified otherwise.
§ METHODOLOGY
§.§ Preliminaries
Consider the discrete-time dynamic system with continuous state and input spaces, x⃗_k ∈𝒳 and u⃗_k ∈𝒰 respectively.
Let us denote the time discretized evolution of the system such that x⃗_k + 1 = f(x⃗_k, u⃗_k), where the sub-index k is used to denote states and inputs at time t_k.
The general Optimal Control Problem considers the task of finding a control policy π(x⃗), a map from the current state to the optimal input, π : 𝒳↦𝒰, such that the cost function J: 𝒳↦ℝ^+ is minimized:
π(x⃗) = arg min_u⃗  J(x⃗)
subject to   x⃗_0 = x⃗,
             x⃗_k+1 = f(x⃗_k, u⃗_k),
             x⃗_k ∈ 𝒳,  u⃗_k ∈ 𝒰.
§.§ Trajectory Tracking Model Predictive Control
For standard Model Predictive Control approaches <cit.>, the objective is to minimize a quadratic penalty on the error between the predicted states and inputs, and a given dynamically feasible reference x⃗_k,ref and u⃗_k, ref.
Consequently, the cost function J(x⃗) in problem (<ref>) is substituted by:
J_MPC(x⃗) = ∑_k=0^N-1 ( ‖Δx⃗_k ‖_Q^2 + ‖Δu⃗_k ‖_R^2 ) + ‖Δx⃗_N ‖_P^2
where Δx⃗_k = x⃗_k - x⃗_k,ref, Δu⃗_k = u⃗_k - u⃗_k,ref, and where Q⃗≽ 0, R⃗≻ 0 and P⃗≽ 0 are the state, input and final state weighting matrices.
The norms of the form ‖·‖_A^2 represent the weighted Euclidean inner product ‖v⃗‖^2_A = v⃗^T A v⃗.
This formulation relies on the fact that a feasible reference x⃗_k,ref, u⃗_k,ref is accessible for the future time horizon N.
Searching for these is often referred to as planning, and for several applications, it becomes rather arduous and computationally expensive <cit.>.
§.§ General Quadratic MPC formulation
Most MPC approaches need an explicit manual selection of a cost function that properly encodes the end task.
For the standard tracking MPC presented in the previous section, this encoding is done through planning by finding a dynamically feasible reference trajectory that translates the task into suitable cost function coefficients for every time step.
However, this approach presents two drawbacks: i) finding a dense, differentiable cost function – a trajectory – can be difficult, and ii) even if this cost function is found, extra effort needs to be spent in fine tuning the parameters for real-world deployment.
More generally, all receding horizon architectures such as MPC need to run in real-time when a deployment in the real world is desired.
Because of this, the optimization problem is often approximated and converted from a non-linear optimization problem to a Quadratic Program (QP).
Therefore, a task-agnostic cost function can be written as in (<ref>).
J_Q(x⃗) = ∑_k=0^N [ x_k ; u_k ]^T Q_k [ x_k ; u_k ] + p_k [ x_k ; u_k ],
where [ x_k ; u_k ] denotes the stacked state and input vector at step k.
In our paper we propose to directly search for the matrix coefficients of (<ref>).
This way, by varying Q_k and p_k, we are able to capture a larger family of problems, without suffering from the dependency on a feasible trajectory.
§.§ Actor-Critic Model Predictive Control
In this paper we propose an Actor Critic MPC controller architecture where the MPC is differentiable and the cost function is learned end-to-end using RL. The MPC block is introduced as the final layer of the actor in an actor-critic PPO pipeline, as shown in Fig. <ref>.
Instead of resorting to task specific engineering of the cost function, we propose a neural cost map where the Q_k and p_k terms are the output of a neural network.
This allows to encode the end task directly as a reward function, which is then trainable end-to-end using the PPO training scheme.
The main benefit of this approach with respect to training a pure Multi Layer Perceptron (MLP) end-to-end is that the final layer of the actor is a model-based MPC controller, and therefore it retains its reliability and interpretability properties.
The model-based controller in the final layer ensures that the commands are always feasible for the dynamics at hand, and that they respect the system constraints.
To allow for exploration, during training the control outputs are sampled from a Gaussian distribution where the mean is the output of the MPC block, and the variance is controlled by the PPO algorithm.
However, during deployment the output from the MPC is used directly on the system, retaining all properties of a model-based controller.
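As an illustration of this data flow, the following PyTorch-style sketch shows one possible way to wire the actor; the cost map is treated as a generic module (see the next subsection), and the `DifferentiableMPC` interface as well as the exact handling of the exploration noise are our assumptions rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class ActorMPC(nn.Module):
    # Illustrative sketch: neural cost map followed by a differentiable MPC last layer.
    def __init__(self, cost_map, differentiable_mpc, action_dim=4):
        super().__init__()
        self.cost_map = cost_map           # observation -> cost coefficients Q_k, p_k
        self.mpc = differentiable_mpc      # model-based last layer of the actor
        self.log_std = nn.Parameter(torch.zeros(action_dim))  # PPO exploration noise

    def forward(self, obs, state, deterministic=False):
        cost_params = self.cost_map(obs)
        u0 = self.mpc(state, cost_params)  # first input of the MPC solution,
                                           # feasible for the dynamics by construction
        if deterministic:                  # deployment: apply the MPC output directly
            return u0
        dist = torch.distributions.Normal(u0, self.log_std.exp())
        return dist.sample()               # training: Gaussian sample around the MPC output
```

During PPO training, the log-probability of the sampled action can be evaluated from the same Gaussian distribution object.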
§.§ Neural Cost Map
The cost function for the model predictive control architecture presented in Section <ref> is learnt as a neural network, depicted in Fig. <ref> as Cost Map.
Several adaptations to the system are needed in order to properly interface the neural network architecture with the optimization problem.
First, we constrain the Q_k matrix to be diagonal.
Q_k = diag(Q_x_1,k, …, R_u_1,k, …),     p_k = [p_x_1,k, …, p_u_1,k, …],     ∀ k ∈ {0, …, T}
where x_1, … and u_1, … are the states and inputs to the system, respectively, and Q_x_1,k and p_x_1,k are the learnable parameters, i.e., the interface from the neural network to the optimization problem.
The purpose of the diagonalization of the Q matrix is to reduce the dimensionality of the learnable parameter space.
Therefore, the dimensionality of the output dimension of this Cost Map is 2T(n_state + n_input).
In order to ensure the positive semi-definiteness of the Q matrix and the positive definiteness of the R matrix, a lower bound on the values that these coefficients can take needs to be set.
To this end, the last layer of the neural cost map has been chosen to be a sigmoid which allows for upper and lower bounds on the output value.
These lower and upper limits are chosen to be the same for Q and p, namely 0.1 and 100000.0, respectively.
The upper bound is needed because otherwise a behaviour where the coefficients would grow to infinity is observed.
Therefore, the final neural cost map consists of two hidden layers of width 512 with ReLUs in between and a sigmoid non-linearity at the end.
The critic network also consists of two hidden layers of width 512 with ReLUs. The output of the critic network is a scalar.
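A possible rendering of the neural cost map and of the critic described above is sketched below (layer sizes are taken from the text; the observation dimension of 36, the example horizon T = 10, and the affine mapping of the sigmoid output onto the interval [0.1, 100000.0] are our assumptions):

```python
import torch.nn as nn

obs_dim, n_state, n_input, T = 36, 13, 4, 10     # example sizes (12 + 2x12 observations, horizon T)
out_dim = 2 * T * (n_state + n_input)            # diagonal Q_k plus p_k for every step k

cost_map = nn.Sequential(
    nn.Linear(obs_dim, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, out_dim), nn.Sigmoid(),       # bounded output in (0, 1)
)

critic = nn.Sequential(                          # scalar value estimate
    nn.Linear(obs_dim, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 1),
)

def to_cost_coefficients(sigmoid_out, lo=0.1, hi=1e5):
    # Map the sigmoid output to the allowed coefficient range [0.1, 100000.0]
    return lo + (hi - lo) * sigmoid_out
```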
§ EXPERIMENTS
In this section we present a set of experiments, both in simulation and in the real-world.
All experiments have been conducted using a quadrotor platform.
We train in a simple simulator in order to speed up the training, and evaluate in BEM, a high-fidelity simulator <cit.>, which is slower but has a higher level of similarity with the real-world.
The dynamics of the quadrotor platform are introduced in supplementary Section <ref>.
First, in Section <ref> we present a series of simulation experiments and ablation studies on two standard tracks: Horizontal and Vertical.
Then, in Section <ref> we showcase the ability of our approach to learn more complex agile flight behaviours.
Next, in Section <ref> we show that the proposed method is also able to learn perception aware flight.
Finally, we show that our approach transfers to the real world in Section <ref>.
For every different task the policies are retrained from scratch.
For these experiments, the observation space consists of linear velocity, rotation matrix and relative measurement of the target gate's corners.
The control input modality is collective thrust and body rates.
Even if the MPC block uses a model which limits the actuation at the single rotor thrust level, collective thrust and body rates are computed from these and applied to the system.
This ensures that the computed inputs are feasible for the model of the platform.
Finally, the reward function depends on the task. One common term that is shared among all tasks in our set of experiments is the progress reward, which incentivizes the platform to fly as fast as possible to a goal.
For more details about the rewards, observations and dynamics we refer the reader to supplementary Section <ref>.
§.§ Horizontal and Vertical tracks
We start with horizontal and vertical flight.
These tasks were chosen because they expose the fundamental capabilities of our system, are basic enough to allow for fast iteration cycles, and therefore are taken as base for the main ablation studies presented in this section.
For example, the vertical task can show if the approach is able to find a solution that lies directly in the singularity of the input space of the platform, since the platform can only generate thrust in its positive body Z direction.
When flying fast downwards, the fastest solution is to tilt the drone as soon as possible, direct the thrust downwards and only then command positive thrust <cit.>.
However, there is a local optimum where many approaches are prone to get stuck <cit.>, which is to command zero thrust and wait to be pulled down only by the force of gravity.
Fig. <ref> shows the simulation results of deploying the proposed approach, trained on the horizontal and vertical tracks (left and right side of Fig. <ref>, respectively).
In this figure we show velocity profiles and value function profiles.
The value function profiles have been computed by selecting a state of the platform in the trajectory and modifying only the position while keeping the rest of the states fixed.
For the horizontal track we sweep only the XY positions, and for the vertical track, the XZ positions.
Additionally, 10 MPC predictions are shown and marked with Xs.
In these value function plots, areas with high values (in yellow) indicate regions with high expected rewards.
This experiment shows that the proposed approach can not only successfully recover an effective policy for the task, but it is also able to exploit the benefits of both MPC and RL approaches.
The critic has indeed been able to learn long term predictions, while the model-predictive controller focuses on the short term ones, effectively incorporating two time scales.
In the attached supplementary video we also show how these high value areas evolve as the platform flies towards the end of the track.
§.§.§ Ablation study: sample efficiency and robustness to disturbances
In order to understand the properties of the proposed method, we perform two studies where the pure MLP architecture (labeled as MLP in the following) is compared to our approach (labeled as AC-MPC) in terms of sample efficiency and robustness to disturbances.
All approaches have been trained with exactly the same conditions (reward, environment, observation, etc), the only change being the policy itself.
As shown in Fig. <ref>B, in terms of sample efficiency, for both the horizontal and the vertical flight tasks our approach falls slightly behind.
This can be due to the fact that by using an MPC architecture we are imposing certain rigid dynamic structure when compared to using only an MLP.
This is supported by the fact that, as depicted in Fig. <ref>B, the sample efficiency gets worse with the horizon length, as the number of cost function parameters grows linearly with the horizon.
In terms of disturbance rejection, we conduct two extra ablations (Fig. <ref>A and Fig. <ref>C) where we test how both approaches react when presented with situations outside the training distribution, and therefore, test for generalisability and reliability.
In Fig. <ref>A, we simulate a strong wind gust that applies a constant external force of 11.5 N (equivalent to 1.5x the weight of the platform).
This force is applied when the drone flies from 10m to 8m in height.
We can see how the pure MLP policy is not able to recover and go back to the designated track.
On the other hand, AC-MPC is able to recover properly, while at the same time being more consistent.
This showcases that incorporating a model-based component enables the system to achieve improved robustness and reliability.
In the case of Fig. <ref>C, we simulate 10000 iterations for each controller for the horizontal task, where the starting point is uniformly sampled in a cube of 3m of side length where the nominal starting point is in the center.
During training, the initial position is only randomized in a cube of 1m of side length (shown in yellow in Fig. <ref>).
The successful trajectories are shown in blue, while the crashed ones are shown in red. We can observe that the AC-MPC presents a higher success rate, and that it is more consistent (indicated by the dark blue area in the center of the gates).
Both experiments support the claim that AC-MPC deals better with unforeseen situations and unknown disturbances, which makes it less fragile and more generalisable.
§.§ Complex tracks: Circle and SplitS
To showcase the capability of the proposed method being able to learn agile flight through complex environments, in this section we focus on two extra environments: Circle and SplitS. Fig. <ref> shows how the approach is able to learn agile flight through these two tracks, with maximum speeds of up to 16 m/s in simulation.
§.§ Perception Aware Flight
Additionally, we create a training environment where the task is to fly agilely in a circle while keeping a certain point in the center of the camera frame.
This approach is similar to the one in <cit.>.
However in this case the perception aware objective is in the form of a reward function instead of an optimization objective.
Given an interest point in the world frame (which is marked as an orange star in Fig. <ref>), we minimize the angle between the Z-axis of the camera and the line that joins the center point of the camera with the objective.
This reward term is then summed to the previously presented progress reward term, which incentivizes the drone to move through the gates.
In Fig. <ref> we show how both MLP and AC-MPC approaches are able to learn this behaviour.
In this case the black arrows represent the direction of the camera Z-axis.
Since yaw control effectiveness is the lowest for a quadrotor (a large amount of actuation is needed for a small change in yaw), this task poses a competing reward problem: if the drone moves faster, it will necessarily be at the expense of perception awareness.
This is the reason behind the unnatural shapes that emerge from the training, shown in Fig. <ref>.
§.§ Real-world transfer
In this section we show that the proposed approach is able to transfer zero shot to the real world.
To show this, we deploy the policy on the tracks introduced in Section <ref>: the Circle track and the SplitS track.
For the real-world deployment we use the Agilicious platform <cit.>.
The main physical parameters and components of this platform have been described in supplementary Section <ref>.
In Fig. <ref> we highlight the strong similarities between the simulated and the real-world experiments, which indicate that the approach presents very good zero-shot transfer properties.
These experiments are also shown in the supplementary video.
§ LIMITATIONS
In this paper, we introduce the AC-MPC approach, which can leverage the advantages of both MPC and RL.
However, there are some limitations to be mentioned and to be improved in the future.
First, training with the MPC in the loop takes significantly longer than its only MLP counterpart.
As shown in supplementary Section <ref>, while an MLP policy takes 21 minutes of training on a GPU, for the same number of timesteps our policy takes 11 hours for N=2, or 39 hours for N=10, on a CPU.
The main reason behind this is that for every differentiable MPC backward or forward pass, an optimization problem needs to be solved.
This limitation can be alleviated by implementing the controller and the solver in C++, since our entire pipeline is currently implemented in Python.
In fact, there are open-source libraries <cit.> that are recently evolving and implementing the differentiable MPC block in C++.
Another limitation is that the differentiable MPC controller does not support one important characteristic of model based control: state constraints.
In order to ensure safety we often need to limit the states that can be visited by our system (e.g., limitations in speeds or body rates).
This limitation could be addressed by exploring the possibilities of adding state constraints to the implementation.
§ CONCLUSION AND DISCUSSION
In this work we present a hybrid PPO actor-critic based agent that contains a differentiable model predictive controller at its core.
We show that not only is the method able to learn complex behaviours for a highly non-linear and high dimensional system such a quadrotor, but that it also improves over its MLP counterpart in terms of generalisability and reliability.
Additionally, our approach is able to transfer zero-shot to the real world. This capability is demonstrated by successfully piloting a quadrotor at velocities of up to 14 m/s, without prior specific training for the real-world setting.
We believe that the proposed method represents an important step in the direction of interpretability and safety in RL and demonstrates that modular solutions that combine the best of learning-centric and model-based approaches are becoming increasingly promising.
Our approach potentially paves the way for the development of more reliable, efficient, and safe RL-based systems, contributing positively towards the broader goal of advancing AI and robotics applications.
§.§ Acknowledgments
This work was supported by the Swiss National Science Foundation (SNSF) through the National Centre of Competence in Research (NCCR) Robotics (grant number
51NF40 185543), the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 871479 (AERIAL-CORE), and the European Research Council (ERC) under grant agreement No. 864042 (AGILEFLIGHT).
We would also like to thank Brandon Amos, for sharing his insights regarding the implementation of the differentiable MPC code, and Jiaxu Xing for the insightful discussions.
§ SUPPLEMENTARY MATERIAL
§.§ Drone dynamics
The quadrotor's state space is described from the inertial frame I to the body frame B, as x⃗ = [p⃗_IB, q⃗_IB, v⃗_IB, w⃗_B]^T where p⃗_IB∈ℝ^3 is the position, q⃗_IB∈𝕊𝕆(3) is the unit quaternion that describes the rotation of the platform, v⃗_IB∈ℝ^3 is the linear velocity vector, and ω⃗_B∈ℝ^3 are the bodyrates in the body frame.
The input of the system is given as the collective thrust f⃗_B = [0, 0, f_Bz]^T and the body torques τ⃗_B.
For readability, we drop the frame indices as they are consistent throughout the description.
The dynamic equations are
ṗ⃗ = v⃗ ,     q̇⃗ = 1/2 q⃗ ⊙ [0, ω⃗]^T ,     v̇⃗ = g⃗ + 1/m 𝐑(q⃗) f⃗_T ,     ω̇⃗ = 𝐉^-1 ( τ⃗ - ω⃗ × 𝐉 ω⃗ )
where ⊙ represents the Hamilton quaternion multiplication, 𝐑(q⃗) the quaternion rotation, m the quadrotor's mass, and 𝐉 the quadrotor's inertia.
Additionally, the input space given by f⃗ and τ⃗ is decomposed into single rotor thrusts f⃗ = [f_1, f_2, f_3, f_4] where f_i is the thrust at rotor i ∈{ 1, 2, 3, 4 }.
f⃗_T = [ 0,  0,  ∑_i f_i ]^T     and     τ⃗ = [  l/√(2) (f_1 + f_2 - f_3 - f_4) ,   l/√(2) (-f_1 + f_2 + f_3 - f_4) ,   c_τ (f_1 - f_2 + f_3 - f_4)  ]^T
with the quadrotor's arm length l and the rotor's torque constant c_τ.
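A minimal explicit-Euler sketch of these dynamics is given below (our own illustrative code, not the simulator used in the paper; the quaternion convention [w, x, y, z], the helper functions, and the integrator choice are assumptions):

```python
import numpy as np

def quat_mult(q, r):
    # Hamilton product of quaternions [w, x, y, z]
    w1, x1, y1, z1 = q; w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def step(p, q, v, w, f, dt, m, J, l, c_tau, g=np.array([0.0, 0.0, -9.81])):
    f_T = np.array([0.0, 0.0, f.sum()])                    # collective thrust (body frame)
    tau = np.array([l/np.sqrt(2)*( f[0]+f[1]-f[2]-f[3]),   # body torques from rotor thrusts
                    l/np.sqrt(2)*(-f[0]+f[1]+f[2]-f[3]),
                    c_tau      *( f[0]-f[1]+f[2]-f[3])])
    p_dot = v
    q_dot = 0.5 * quat_mult(q, np.concatenate(([0.0], w)))
    v_dot = g + quat_to_rot(q) @ f_T / m
    w_dot = np.linalg.solve(J, tau - np.cross(w, J @ w))
    q_new = q + dt * q_dot
    return p + dt*p_dot, q_new / np.linalg.norm(q_new), v + dt*v_dot, w + dt*w_dot
```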
§.§ Quadrotor parameters
In this section we introduce the quadrotor parameters that are used for the real world experiments. The details about components and physical parameters can be seen in Table <ref>.
§.§ Observations, control inputs and rewards
In this section we introduce the observation space, the control input space and the rewards used for each task.
§.§.§ Observations
For all tasks presented in our manuscript, the observation space does not change, and it consists of two main parts: the vehicle observation 𝐨_t^quad and the race track observation 𝐨_t^track.
We define the vehicle state as 𝐨_t^quad = [v⃗_t, 𝐑_t] ∈ℝ^12, which corresponds to the quadrotor's linear velocity and rotation matrix.
We define the track observation vector as 𝐨_t^track=[ δp⃗_1, ⋯, δp⃗_i , ⋯], i ∈ [1, ⋯, N],
where δp⃗_i ∈ℝ^12 denotes the relative position between the vehicle center and the four corners of the next target gate i or the relative difference in corner distance between two consecutive gates.
Here N∈ℤ^+ represents the total number of future gates.
This formulation of the track observation allows us to incorporate an arbitrary number of future gates into the observation.
We use N=2, meaning we observe the four corners of the next two target gates.
We normalize the observation by calculating the mean and standard deviation of the input observations at each training iteration.
§.§.§ Control inputs
The control inputs are expressed as a 4-dimensional vector 𝐚 = [c, ω_x, ω_y, ω_z ] ∈ℝ^4, representing mass-normalized collective thrust and bodyrates, in each axis separately.
Since the differentiable MPC block outputs single rotor thrusts, we obtain the mass-normalized collective thrust as ∑ f_i / m.
For the body rates, we take the first prediction of the differentiable MPC as input for our system.
This way, all inputs sent to the system are ensured to fulfill the single rotor thrust constraints enforced by the MPC.
§.§.§ Rewards
Throughout this work different reward terms have been used depending on the task.
For all the experiments there is one reward term in common, which is the gate progress reward.
This term encourages the platform to fly as fast as possible.
Additionally, for the perception aware task one more term has been added.
Both terms are explained in what follows.
§.§.§ Gate Progress term
The objective is to directly maximize progress toward the center of the next gate.
Once the current gate is passed, the target gate switches to the next one.
At each simulation time step k, the gate progress objective is defined as
r(k) = ‖ g_k - p_k-1‖ - ‖ g_k - p_k ‖ - b ω_k
where g_k represents the target gate center, and p_k and p_k-1 are the vehicle positions at the current and previous time steps, respectively. Here, b ω_k is a penalty on the bodyrate multiplied by a coefficient b=0.01.
To discourage collisions with the environment, a penalty (r(k) = -10.0) is imposed when the vehicle experiences a collision. The agent is rewarded with a positive reward (r(k) = +10.0) upon finishing the race.
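A sketch of this reward term follows (illustrative; whether the collision and finish terms replace or add to the progress term, and whether the bodyrate penalty acts on the norm of ω_k, are our reading of the text):

```python
import numpy as np

def gate_progress_reward(g, p_prev, p_curr, omega, collided, finished, b=0.01):
    if collided:
        return -10.0                      # collision penalty
    if finished:
        return 10.0                       # reward for finishing the race
    progress = np.linalg.norm(g - p_prev) - np.linalg.norm(g - p_curr)
    return progress - b * np.linalg.norm(omega)   # progress towards the gate center minus bodyrate penalty
```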
§.§.§ Perception Aware term
Let us assume that we mount a camera to the frame of the drone.
The perception aware task is defined as keeping the Z-axis of the camera always pointing towards the point of interest.
Let us define the unit vector associated to this Z-axis as z⃗_c.
Let us denote the position of the interest point as p⃗_i, and the position of the center of the camera frame as p⃗_c.
Therefore, the vector from the center of the camera frame and the point of interest is r_ci = p⃗_i - p⃗_c, and its normalized version is u⃗_ci.
Therefore, the angle between z⃗_c and u⃗_ci is α = acos(z⃗_c ·u⃗_ci).
We are interested in giving positive rewards when this angle is close to zero, for which we use r_pa(k) = exp(-5 α_k ^ 4), shown in Fig. <ref>.
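In code, this term can be sketched as follows (illustrative; z_cam is assumed to be a unit vector):

```python
import numpy as np

def perception_reward(z_cam, p_interest, p_cam):
    # Angle between the camera Z-axis and the unit vector towards the point of interest
    u_ci = (p_interest - p_cam) / np.linalg.norm(p_interest - p_cam)
    alpha = np.arccos(np.clip(np.dot(z_cam, u_ci), -1.0, 1.0))
    return np.exp(-5.0 * alpha**4)        # r_pa(k) = exp(-5 * alpha_k^4)
```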
§.§ Training time and forward pass time
In Table <ref> we show the training times and the forward pass times for the MLP approach, as well as for our proposed AC-MPC approach for different horizon lengths.
|
http://arxiv.org/abs/2306.02404v1
|
20230604164921
|
Analysis of the heat transfer fluctuations in the Rayleigh-Bénard convection of concentrated emulsions with finite-size droplets
|
[
"Francesca Pelusi",
"Stefano Ascione",
"Mauro Sbragaglia",
"Massimo Bernaschi"
] |
physics.flu-dyn
|
[
"physics.flu-dyn",
"physics.comp-ph"
] |
[email protected]
Istituto per le Applicazioni del Calcolo, CNR - Via dei Taurini 19, 00185 Rome, Italy
Department of Physics, Tor Vergata University of Rome - Via della Ricerca Scientifica 1, 00133 Rome, Italy
Department of Physics & INFN, Tor Vergata University of Rome, Via della Ricerca Scientifica 1, 00133 Rome, Italy
Istituto per le Applicazioni del Calcolo, CNR - Via dei Taurini 19, 00185 Rome, Italy
Employing numerical simulations, we provide an accurate insight into the heat transfer mechanisms in the Rayleigh-Bénard convection of concentrated emulsions with finite-size droplets. We focus on the unsteady dynamics characterizing the thermal convection of these complex fluids close to the transition from conductive to convective states, where the heat transfer phenomenon, expressed in terms of the Nusselt number Nu, is characterized by pronounced fluctuations triggered by collective droplet motion [Pelusi et al., Soft Matter 17(13), 3709 - 3721 (2021)]. By systematically increasing the droplet concentration, we show how these fluctuations emerge along with the segregation of “extreme events" in the boundary layers, causing intermittent bursts in the heat flux fluctuations. Furthermore, we quantify the extension S and the duration 𝒯 of the coherent droplet motion accompanying these extreme events via a suitable statistical analysis involving the droplet displacements. We show how the increase in droplet concentration results in a power-law behaviour of the probability distribution function of S and 𝒯 and how this outcome is robust to a change of the analysis protocol. Our work offers a comprehensive picture, linking macroscopic heat transfer fluctuations with the statistics of droplets at the mesoscale.
Analysis of the heat transfer fluctuations in the Rayleigh-Bénard convection of concentrated emulsions with finite-size droplets
Massimo Bernaschi
July 31, 2023
================================================================================================================================
§ INTRODUCTION
Rayleigh-Bénard Convection (RBC) is one of the most paradigmatic buoyancy-driven flows in fluid dynamics <cit.>. It is observed whenever a fluid is placed between two plates under the influence of buoyancy forces, while heated from below and cooled from above. The heat transfer properties are generally quantified in terms of the dimensionless Nusselt number Nu, expressing the importance of the convective transport in comparison to the conductive one <cit.>. RBC plays an important role in a variety of fields, ranging from the atmosphere dynamics <cit.> to the design of indoor environments <cit.>, from the geophysical context <cit.> to the metallurgic industry <cit.>. From the theoretical side, RBC represents a fascinating problem that leads to the study of the instabilities and the transition from conductive to convective states, with the associated heat transfer properties, from the large scales down to the small ones. Different reviews have been written on the topic, covering experimental, numerical, and theoretical aspects <cit.>.
RBC has been traditionally addressed in the context of single-phase fluids, but studies in recent years also investigated the importance of the multi-phase and/or multi-component nature of the convective fluids <cit.>, since it can impact technological applications <cit.>. For example, RBC between two fluid layers has been studied, and numerical simulations helped in understanding the relationship between the heat transport efficiency and the properties of the two fluid layers, e.g., viscosity contrast, density contrast, and layers thickness <cit.>. In the presence of multi-phase fluids, some studies also revealed enhanced heat transfer in comparison to the single-phase case, especially in proximity of the critical point, due to an increased occurrence of droplet condensation, or in the presence of the melting of a solid above a liquid melt <cit.>. RBC laden with bubbles/droplets has also been numerically studied in recent works <cit.>, showing how the heat transport properties can be affected by the presence of the dispersed phase <cit.>, the surface wettability <cit.>, the condensation condition <cit.>, and the presence of non-trivial correlations between distant droplets <cit.>. Furthermore, experiments show that the introduction of a small percentage of a second component in a pure water solution is sufficient to affect the overall heat transfer <cit.>.
In the present paper, we focus on those situations where RBC laden with droplets is studied at increasing droplet concentration – i.e., concentrated emulsions – resulting in a non-Newtonian response of the fluid <cit.>. The impact of non-Newtonian rheology itself has been studied in the framework of RBC for a while <cit.>, while in a recent study <cit.> some of the authors have highlighted the richness brought by the finite-size of the droplets, i.e., the situation where RBC takes place in a confined environment and the actual extension of the droplets cannot be neglected in comparison to the characteristic size of the confining channel (cfr. Fig. <ref>(a)). Numerical simulations allowed to tune the droplet concentration, ranging from diluted Newtonian to highly packed non-Newtonian emulsions, revealing that the increase in droplet concentration results in enhanced fluctuations in the heat transfer <cit.>. Here we take a step forward and provide a comprehensive characterization of the fluctuations. We first delve deeper into the connection between fluctuations at large scales and fluctuations at the droplet scales by systematically increasing the droplet concentration. The increase in droplet concentration reveals the emergence of “extreme” fluctuations that we characterize in terms of their spatial localization. The extreme fluctuations materialize in terms of collective droplet motion with a characteristic extension S and duration 𝒯. We therefore investigate the statistical properties of both S and 𝒯 following a well-established protocol based on the tracking of droplets displacement <cit.>. This result is robust at changing analysis protocol.
The rest of the paper is organized as follows: the numerical methodology along with the necessary theoretical tools of analysis are recalled in Section <ref>; a detailed characterization of the heat flux fluctuations, from “extreme events" localization to their statistical characterization, is provided in Section <ref>; conclusions will be drawn in Section <ref>.
§ METHODS
Numerical simulations have been conducted with the open-source code TLBfind <cit.> developed by some of the authors. TLBfind is a GPU code that implements a lattice-Boltzmann model <cit.> for non-ideal multi-component systems to simulate the thermal convection of a multi-component fluid made of immiscible non-coalescing droplets in a two-dimensional system. The multi-component fluid comprises two components with densities ρ_1 and ρ_2, respectively [Note that hereafter we use the convention of considering the first component as the one associated with the dispersed phase.]. Non-ideal interactions between the two components allow for phase separation and the formation of diffuse interfaces between regions with the majority of one of the two components. In TLBfind, the dynamics of the temperature field T(x,y,t) obeys the advection-diffusion equation <cit.>, driven by the hydrodynamic fluid velocity. In turn, the two fluid components evolve following the Navier-Stokes equation with an additional buoyancy term introduced via the Boussinesq approximation <cit.> implying a force term F =ρα g T, where ρ=ρ_1+ρ_2 is the total density, α the thermal expansion coefficient and g the gravity acceleration. Furthermore, TLBfind allows the choice of the system size L and H along the x- and y- direction, respectively, and the number of emulsion droplets N_droplets (cfr. Fig. <ref>(a)). The resulting droplet concentration Φ_0 is defined as the fraction of domain size occupied by the dispersed phase, i.e., Φ_0 = {∫∫Θ(ρ_1(x,y)-ρ^*) dx dy }/L H, where Θ is the Heaviside step function and ρ^* is a reference density value (cfr. Ref. <cit.>). In this work, we fix the system sizes L and H and we explore emulsions with different Φ_0 by varying N_droplets, i.e., from diluted (Φ_0 = 0.27, N_droplets = 352) to highly concentrated emulsions (Φ_0 = 0.73, N_droplets = 800).
A detailed rheological characterization of the simulated emulsions is provided in Section 1 of the Supplementary Material.
We prepare emulsions with a desired droplet concentration and we place them between two parallel plates located at y=± H/2. In the proximity of these plates, both components feel the effect of no-slip velocity boundary conditions, while we linearly initialize the temperature field between the two walls by prescribing T(x,y=± H/2,t)=∓Δ T/2, resulting in a system heated from below and cooled from above [The value of Δ T = T_hot - T_cold (cfr. Fig. <ref>(a)) is fixed in all simulations.].
Intending to investigate the heat transfer properties of the emulsions in the framework described above, we introduce a macroscopic observable indicating the importance of convective transport, i.e., the dimensionless Nusselt number. This quantity is defined as <cit.>
Nu(t) = ( ⟨ u_y(x,y,t) T(x,y,t) ⟩_x,y - κ ⟨ ∂_y T(x,y,t) ⟩_x,y ) / ( κ Δ T / H ) ,
where u_y is the y-component of the hydrodynamical velocity field, κ is the thermal diffusivity, and the angular brackets indicate a spatial average over the total domain size. We remark that a value of Nu equal to unity implies a conductive state, whereas a value larger than unity implies convective transport. For a homogeneous (single-phase) fluid, it is well established that the destabilization of a conductive state results in a convective state characterized by both a steady flow and a value of Nu that is time-independent. Contrariwise, for heterogeneous fluids, particularly in the case of concentrated emulsions, Nu exhibits fluctuations in time around a time-averaged value ⟨Nu⟩_t <cit.> (see Fig. <ref>(b)), due to the presence of finite-size droplets. To characterize these fluctuations, for each emulsion, we have carefully chosen the model parameters to keep the time-averaged value ⟨Nu⟩_t ≈ 2, i.e., just above the transition from conduction to convection. As a matter of fact, we aim to compare emulsion systems whose heat transfer is, on average, the same and to highlight how a systematic variation in the emulsion concentration influences the fluctuations of Nu. This analysis will also be conducted at scales comparable to the size of the droplets. Specifically, we introduce the single droplet Nusselt number Nu_i(t), i.e., a value of Nu associated with the i-th droplet, which can be obtained following Ref. <cit.> as a result of the decomposition of Eq. (<ref>) into the contributions of each single droplet
Nu_i^(drop)(t) = ( u^(i)_y(t) T^(i)(t) - κ (∂_y T)^(i)(t) ) / ( κ Δ T / H ).
For the sake of simplicity, hereafter we will refer to Nu^*,(drop)_i as the fluctuations of Nu_i^(drop) with respect to its averaged value ⟨Nu^(drop)⟩, normalized with respect to its standard deviation σ_Nu:
Nu_i^*,(drop)(t) = ( Nu_i^(drop)(t) - ⟨Nu^(drop)⟩ ) / σ_Nu .
⟨Nu^(drop)⟩ as well as σ_Nu result from considering all droplets i at all times. Besides the droplet heat flux fluctuations in Eq. (<ref>),
another core observable for the purpose of this work is the droplet displacement: by exploiting the Lagrangian tool embedded in TLBfind, which can individually track all the droplets via the identification of their centers of mass X_i= X_ix̂ + Y_iŷ, we compute the vectorial displacement d_i(t). Then, starting from d_i(t), at any simulation time step, the corresponding Eulerian field d(x,y,t) is extracted <cit.>. Finally, we compute the averaged-in-time displacement field ⟨ d(x,y,t) ⟩_t and consider the fluctuations of d(x,y,t) with respect to it
δ d(x,y,t)= d(x,y,t)-⟨ d(x,y,t) ⟩_t .
Fig. <ref>(c) shows a sketch of the above-mentioned fields. Note that the displacement fluctuation in Eq. (<ref>) is computed in a time range that is large enough to collect sufficient statistics but limited to an interval where the thermal plume does not move too much in the x direction.
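For reference, the two observables can be computed from the simulation output as in the following sketch (illustrative post-processing only; array names, shapes, and the choice of the wall-normal axis are ours):

```python
import numpy as np

def nusselt(u_y, T, kappa, delta_T, H, dy):
    # u_y, T: 2D arrays over the (x, y) domain at a given time; y is taken along axis 1
    conv = np.mean(u_y * T)
    cond = kappa * np.mean(np.gradient(T, dy, axis=1))
    return (conv - cond) / (kappa * delta_T / H)

def droplet_nu_fluctuations(nu_drop):
    # nu_drop: array of shape (n_times, n_droplets) with Nu_i^(drop)(t)
    mean, std = nu_drop.mean(), nu_drop.std()   # over all droplets and all times
    return (nu_drop - mean) / std
```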
Simulations have been performed on a domain of size H ∼ 40 d and L ∼ 2H, where d is the average droplet diameter. This choice of domain size allowed us to maintain consistency between simulations at different droplet concentrations without significant finite-size effects.
All the simulations have been conducted on GPUs (Tesla K80 and Quadro RTX 8000 GPUs), gathering data about tens of millions of droplets for each emulsion. In order to collect the same amount of data for all the concentrations, diluted emulsions require longer simulations due to the smaller number of droplets for each time step. In all cases, we exclude from the statistical analysis the initial, transient, period necessary to the development of the convective state.
§ RESULTS AND DISCUSSION
We first report on the statistical analysis of the droplet heat flux fluctuations _i^*,(drop) defined in Eq. (<ref>). The aim is to extend and improve the preliminary results reported in Ref. <cit.>, which allowed to establish a link between the fluctuations on the macroscopic observable Nu shown in Fig. <ref>(b) and the statistics of heat transfer at the droplet scale. In Ref. <cit.>, only the phenomenology of two “categories” of emulsions have been compared, i.e., a diluted Newtonian and a concentrated non-Newtonian emulsion. Here a systematic characterization at increasing droplet concentration Φ_0 is provided in order to delve deeper into the way heat flux fluctuations emerge, i.e., whether there is a sharp transition or a more continuous change. In Fig. <ref>, we show the probability distribution function (PDF) of the mesoscopic observable _i^*,(drop) for different droplet concentrations Φ_0 (different symbols/colors). It is apparent that the larger Φ_0, the more pronounced PDF tails. We also observe that these tails grow continuously upon increasing the droplet concentration. This trend is also confirmed by the inset of Fig. <ref>, reporting the values of σ_Nu used to normalize the droplet Nusselt number fluctuations (crf. Eq. (<ref>)). It shows an almost doubled standard deviation in the highly packed case as compared to the diluted one. Moreover, for a highly packed emulsion, the authors of Ref. <cit.> showed that large fluctuations in the PDF of ^*,(drop) are due to droplets localized in the boundary layers. Here we take a step forward in that starting from the PDF reported in Fig. <ref> we report the spatial localization of the events that contribute to the PDF tails. Specifically, we compute the PDF of the y-coordinate of the droplet center-of-mass positions Y, by conditioning the statistics to a specific range of ^*,(drop) in Fig. <ref>: from the entire positive (^*,(drop) > 0) and negative (^*,(drop) < 0) tails, to smaller portions of them (e.g., ^*,(drop) > 4 and ^*,(drop) < -2). Results are shown in Fig. <ref>, where an increasing Φ_0 results in an increasing solid line color intensity and each panel refers to a different range of ^*,(drop). Left panels ((a)-(c)-(e)-(g)) refer to data contributing to the negative tails, whereas right panels ((b)-(d)-(f)-(h)) refer to portions of positive tails. PDFs are close to being flat when considering all collected data (panels (a) and (b)), whereas the scenario changes when approaching the extreme tails. Any emulsion shows a common trend: droplets moving in the proximity of the center of the rolls, do not experience large velocities and follow the average-in-time flow, thus not contributing to large heat transfer fluctuations whereas only droplets which are close to the walls, and receive a boost from the upward or downward thermal plume, instantly experience a significant velocity, resulting in a shift of the peaks of PDF(Y) from the center of the Rayleigh-Bénard cell towards the boundary layers. This trend is exacerbated in the most concentrated cases, where droplets are highly packed and the free volume of each droplet, as well as their mobility, is, in general, very limited.
In other words, the extreme heat transfer fluctuations observed especially in the case of highly packed emulsions are localized close to the walls of the Rayleigh-Bénard cell. Finally, we can state that Fig. <ref> is a further confirmation of the continuous transition from diluted emulsions, exempted from heat fluctuations, to more concentrated emulsions, that accommodate macroscopic and mesoscopic anomalous heat flux fluctuations. Note that, in the case of Φ_0 = 0.48, the PDF(Y) is slightly asymmetric for ^*,(drop)>4 because of the limited statistics. Moreover, since PDFs(^*,(drop)) in Fig. <ref> are asymmetric, then the shape of PDF(Y) for the case ^*,(drop)< -2 is qualitatively similar to the one for ^*,(drop)> 5 (data not shown). To enrich this result with information about the dynamics, we monitored the time evolution of _i^*,(drop) and the corresponding Y_i for a single droplet i randomly chosen among droplets moving closer to the walls (see Fig. <ref>). For the sake of readability, we show data for only three values of Φ_0 (panel (a): Φ_0 = 0.27, panel (b): Φ_0 = 0.48, and panel (c): Φ_0 = 0.73). From Fig. <ref> it is clear that _i^*,(drop) exhibits an intermittent behavior, more pronounced as the droplet concentration increases. It reveals a non-trivial correlation between “bursts" in the droplet heat transfer fluctuation and the spatial approach-to/departure-from a wall. Contrariwise, when the droplet “slips" close to the wall, i.e., in the periods where the signal of Y_i is almost flat, the droplet does not contribute to an extreme event and the heat it “carries” is damped. In addition, Fig. <ref> reveals that the oscillation period of Y_i is dependent on the droplet concentration and it is not constant as expected for a homogeneous material. We interpret this finding as due to a droplet layer change by the drop under consideration, whose oscillation period around the center of the convective roll decreases as the droplet approaches it (see dotted lines outlining droplets layers). The droplet layer change is triggered by one or more collisions with the surrounding droplets, especially in the concentrated case where the droplet mobility is further reduced. This picture can be caught by the eye by watching the density map animations we include in the Supplementary Material (Phi027.mp4, Phi048.mp4, and Phi073.mp4). Notice that in the diluted case the selected droplet rarely moves in the boundary layers, as already observed in Fig. <ref>.
The observation of an intermittent behavior in the heat transfer and its fluctuations in RBC of concentrated emulsions naturally leads to the question of whether these materials exhibit collective droplet motion in such a situation. To answer this question, we inspected the statistical properties of coherent droplet motion in terms of its spatial extension S and its temporal duration 𝒯, following Ref. <cit.>. In the latter reference, the system was driven by coarsening dynamics, characterized by an averaged-in-time flow that is almost zero, and the authors employed a protocol based on the absolute value of the vectorial droplet displacement d_i(t) (cfr. Fig. <ref>(c), black arrows). However, in an RBC the system dynamics exhibits an averaged-in-time flow characterized by the presence of convective rolls (cfr. Fig. <ref>(c), colorbar), meaning that, to apply the same protocol, we need to subtract the mean flow and consider the vectorial droplet displacement fluctuation δ d as the key observable (cfr. Eq. (<ref>) and Fig. <ref>(c), brown arrows). As highlighted by Fig. <ref>(a), this observable exhibits a weak intensity in the diluted emulsion with no manifest coherence, while it is substantial in the highly packed case, highlighting large regions of coherent droplet motion. Thus, in order to focus on extreme events, we select at any time step only the droplet with the largest absolute value of δ d, obtaining a signal δ_sup that is intermittent in time (cfr. Fig. <ref>(b)). As the threshold value of δ d used to identify coherent droplet motion, we take the knee value of the PDF of that observable (cfr. Fig. <ref>(c)): whenever the signal δ_sup exceeds the threshold, we record t_start as the beginning of the spatial coherence. Then, at the first time t > t_start at which δ_sup returns below the threshold, we record its end (t_end). We measure the spatial extension S of the detected coherent motion by summing up the area of the droplets whose absolute value of δ d is larger than the threshold during the temporal duration of the event, defined as 𝒯 = t_end - t_start. The statistics of S at varying droplet concentration Φ_0 is reported in Fig. <ref>(a): it is evident that the extent of the PDF of S increases as the droplet concentration Φ_0 increases, meaning that the spatial coherence develops in originally small regions which grow in extension with Φ_0. However, only the most concentrated emulsion presents a robust power-law behavior covering two decades, echoing observations on other systems presenting avalanche-like behaviour <cit.>. This result is further confirmed by the statistics of the temporal duration 𝒯 (cfr. Fig. <ref>(b)), where, once again, a power-law behavior is better visible in the case of the most concentrated emulsion. Both panels of Fig. <ref> further confirm that the phenomenology changes in a continuous way from diluted to concentrated emulsions. In addition, since δ d is not the only mesoscopic observable we can measure in our simulations, it is interesting to check whether the mesoscopic heat transfer fluctuation Nu^*,(drop) is equally effective at capturing the coherent droplet motion of emulsions under RBC. Fig. <ref> provides the answer, showing the comparison of the PDFs of the spatial coherence S and duration 𝒯 for Φ_0 = 0.73 between the protocol based on δ d and the one based on Nu_i^*,(drop), where the latter follows the same procedure described for δ d.
The good qualitative overlap of the two PDFs conveys two pieces of information: on the one hand, it confirms the strong correlation between bursts of heat transfer and displacement at the droplet scale; on the other hand, it supports the idea of the emergence of “thermal avalanches” in the most packed systems, very close to the transition point between conduction and convection.
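A minimal Python sketch of the thresholding protocol used to identify coherent events is given below, restricted for brevity to the temporal duration 𝒯 (the spatial extension S would additionally require summing the areas of the droplets whose |δ d| exceeds the threshold during each event). The array names, the synthetic signal, and the percentile-based stand-in for the knee of the PDF are our own illustrative choices.

import numpy as np

def detect_coherent_events(delta_sup, threshold, dt=1.0):
    # Identify coherent-motion events from the time series of the maximum
    # displacement fluctuation delta_sup(t), following the thresholding
    # protocol described above. Returns a list of (t_start, t_end, duration).
    above = delta_sup > threshold
    crossings = np.diff(above.astype(int))      # +1: upward crossing, -1: downward crossing
    starts = np.where(crossings == 1)[0] + 1
    ends = np.where(crossings == -1)[0] + 1
    if above[0]:                                # signal starts above threshold
        starts = np.insert(starts, 0, 0)
    if above[-1]:                               # signal ends above threshold
        ends = np.append(ends, len(delta_sup))
    return [(s * dt, e * dt, (e - s) * dt) for s, e in zip(starts, ends)]

# Illustrative threshold choice: here the 95th percentile stands in for the knee of the PDF
rng = np.random.default_rng(1)
signal = np.abs(rng.standard_normal(10_000))    # stand-in for delta_sup(t)
thr = np.percentile(signal, 95.0)
events = detect_coherent_events(signal, thr)
durations = np.array([d for _, _, d in events]) # statistics of the duration T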
§ CONCLUSIONS
We have used numerical simulations with the open-source code TLBfind <cit.> to study the statistical properties of heat flux fluctuations in concentrated emulsions under thermal convection, just above the transition from conductive to convective states. The simulations enable a systematic analysis at varying droplet concentration, ranging from diluted emulsions (showing small heat transfer fluctuations) to packed emulsions (showing enhanced heat flux fluctuations). By systematically increasing the droplet concentration, we have observed evidence of a continuous transition that goes together with a continuous rise of the tails of the PDF of the mesoscopic observable Nu^*,(drop) used to quantify the droplet heat transfer fluctuations. We have analyzed these extreme fluctuations, observing their strong correlation with the droplet spatial localization: the droplets contributing to the extreme fluctuations accumulate in the boundary-layer region. Furthermore, extreme heat transfer fluctuations take place via coherent droplet motion, with a spatial extension S and a characteristic temporal duration 𝒯. A statistical analysis of both S and 𝒯 has been conducted, showing a clear power-law behavior when the emulsion is highly packed. This result is found to be robust with respect to the choice of mesoscopic observable used to identify coherent droplet motion.
Our findings open new questions concerning other aspects that may affect the heat transfer properties of emulsion systems with finite-size droplets. For instance, it is not yet known whether the heat flux fluctuations explored in this work become more pronounced when moving from a two-dimensional to a three-dimensional system, or when varying the number of convective rolls in the Rayleigh-Bénard cell. The role played by these fluctuations may also change as the Rayleigh number Ra = ρα g Δ T H^3/μκ increases, probing the regime that, for Newtonian fluids, is no longer laminar. Furthermore, an investigation into the effect of confinement may be useful to shed light on the characteristic domain size limit beyond which the observed extreme heat flux fluctuations disappear.
§ ACKNOWLEDGEMENTS
We wish to acknowledge Fabio Bonaccorso for his support. MS and MB gratefully acknowledge CN1 – Centro Nazionale di Ricerca in High-Performance Computing Big Data and Quantum Computing – for support. This work received funding from the European Research Council
(ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No 882340).
|
http://arxiv.org/abs/2306.10648v1
|
20230618232956
|
Bidder Selection Problem in Position Auctions via Poisson Approximation
|
[
"Nick Gravin",
"Yixuan Even Xu",
"Renfei Zhou"
] |
cs.GT
|
[
"cs.GT",
"68W25"
] |
|
http://arxiv.org/abs/2306.05520v1
|
20230608193218
|
A 3D Drizzle Algorithm for JWST and Practical Application to the MIRI Medium Resolution Spectrometer
|
[
"David R. Law",
"Jane E. Morrison",
"Ioannis Argyriou",
"Polychronis Patapis",
"J. Alvarez-Marquez",
"Alvaro Labiano",
"Bart Vandenbussche"
] |
astro-ph.IM
|
[
"astro-ph.IM"
] |
David R. Law (ORCID 0000-0002-9402-186X), Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
Jane E. Morrison (ORCID 0000-0002-9288-9235), Steward Observatory, University of Arizona, Tucson, AZ 85721, USA
Ioannis Argyriou (ORCID 0000-0003-2820-1077), Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium
Polychronis Patapis (ORCID 0000-0001-8718-3732), Institute for Particle Physics and Astrophysics, ETH Zurich, Wolfgang-Pauli-Str 27, 8093 Zurich, Switzerland
J. Álvarez-Márquez (ORCID 0000-0002-7093-1877), Centro de Astrobiología (CAB), CSIC-INTA, Ctra. de Ajalvir km 4, Torrejón de Ardoz, E-28850, Madrid, Spain
Alvaro Labiano (ORCID 0000-0002-0690-8824), Telespazio UK for the European Space Agency, ESAC, Camino Bajo del Castillo s/n, 28692 Villanueva de la Cañada, Spain; and Centro de Astrobiología (CAB), CSIC-INTA, ESAC, Carretera de Ajalvir km 4, 28850 Torrejón de Ardoz, Madrid, Spain
Bart Vandenbussche (ORCID 0000-0002-1368-3109), Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium
We describe an algorithm for application of the classic `drizzle' technique to produce
3d spectral cubes using
data obtained from the slicer-type integral field unit (IFU) spectrometers
on board the James Webb Space Telescope. This algorithm relies upon the computation
of overlapping volume elements (composed of two spatial dimensions and one spectral dimension) between
the 2d detector pixels and the 3d data cube voxels, and is greatly simplified
by treating the spatial and spectral overlaps separately at the cost of just 0.03% in
spectrophotometric fidelity.
We provide a matrix-based formalism for the computation of spectral radiance, variance,
and covariance from arbitrarily dithered data and comment on the performance of this algorithm for
the Mid-Infrared
Instrument's Medium Resolution IFU Spectrometer (MIRI MRS). We derive a series
of simplified scaling relations to account for covariance between cube spaxels in spectra
extracted from such cubes, finding multiplicative factors ranging from 1.5 to 3 depending on the
wavelength range and kind of data cubes produced.
Finally, we discuss how undersampling produces
periodic amplitude modulations in the extracted spectra in addition to those naturally produced by fringing within the instrument; reducing such undersampling artifacts below 1%
requires a 4-point dithering strategy and spectral extraction radii of 1.5 times the PSF FWHM or greater.
§ INTRODUCTION
The James Webb Space Telescope (JWST) contains two integral-field unit (IFU) spectrographs
operating at near-infrared and mid-infrared wavelengths.
The Near Infrared Spectrograph <cit.> provides IFU spectroscopy throughout
a 3 × 3 arcsec field from λλ 0.6 - 5.0 μm, while the Mid Infrared Instrument
Medium Resolution Spectrometer <cit.> provides IFU spectroscopy
throughout a 3.2 × 3.7 - 6.6 × 7.7 arcsec field from λλ 4.9 - 27.9 μm <cit.>.
Both IFUs are of slicer-type design, meaning that they sample the field of
view with an optical image slicer that disperses spectra on a detector in a manner
akin to multiple adjacent slit-type apertures.
A key part of the data processing pipeline for both MIRI and NIRSpec is the reformatting of calibrated two-dimensional (2d) spectral data from multiple dithered observations
into three-dimensional (3d) rectified data cubes on a regularly-sampled grid. Such data cubes are a convenience that greatly simplifies later scientific analyses as they combine arbitrarily dithered observations into a single data product and obviate the need for a given user to know about the complex distortion solution of the instrument.
However, one-dimensional (1d) spectra extracted from such cubes can also contain artifacts produced by the resampling of the spectral data, and some science programs (e.g., observations of unresolved point sources) may
achieve higher SNR, better spectral resolution, or lower covariance
by extracting 1d spectra directly from the 2d data.
Fundamentally, a cube building algorithm converts between detector pixels (i.e., 2d elements of the detector focal plane array) and rectified data cube voxels (i.e., 3d volume elements with two spatial
dimensions and one spectral dimension).
As typically implemented within the FITS image data format,
such voxels have constant spatial size throughout a given data cube but can have varying spectral width (e.g., in the case of non-linear wavelength solutions).
It is likewise common to refer to individual spaxels within such data cubes, where
a spaxel corresponds to a given 2d spatial element and consists of many individual
voxels stretching away in the spectral domain.
In the present contribution, we describe our implementation of a 3d drizzle algorithm
for the specific use case of the JWST MIRI and NIRSpec IFUs. We note that
this is not the first time that such algorithms have been developed;
<cit.>, <cit.>, <cit.>, and <cit.> have all discussed its application
to Spitzer/IRS, Herschel/PACS, AAO/SAMI, and VLT/MUSE respectively and similar schemes using other weighting functions have been presented previously for many other ground-based IFUs <cit.>. However, the specifics of the implementation often differ based on the characteristic
properties of each instrument; we therefore present the JWST-specific algorithm here
along with an analysis of its implications for data quality.
While we focus primarily on the JWST MIRI MRS IFU, most of our discussion
and conclusions are relevant for the JWST NIRSpec IFU as well (see <ref>).
We organize this manuscript as follows.
In <ref> we review the basic design characteristics
of the MIRI MRS IFU. In <ref> we describe
the algorithm used by the STScI JWST data reduction pipeline, along with some
practical notes for implementation of the algorithm in a pipeline-style environment.
We assess the covariance properties of the resulting data cubes for the MIRI MRS
in <ref>, providing a set of scale factors that can be applied to the
pipeline-generated data cubes to ensure proper variance characteristics of
extracted spectra. Finally, we discuss the impact of the severe spatial and spectral undersampling
of the MRS on the composite data cubes in <ref>, along with
recommendations for observing and data analysis strategies to mitigate the impact
of these effects on scientific data.
We summarize our conclusions in <ref>.
§ OVERVIEW OF THE MIRI IFU AND DATA PIPELINE
The MIRI MRS <cit.> consists of four cospatial IFUs that jointly cover the wavelength range 4.9 - 27.9 μm using a series of dichroic
beam splitters and gratings to disperse light across a pair of arsenic-doped silicon (Si:As) impurity band conduction detectors with 1032 x 1024
pixels each <cit.>. This is achieved using a pair of dichroic and grating assembly (DGA) wheels to select between
three grating settings (A/B/C[Here we use the A/B/C naming scheme; elsewhere some references instead refer to these as the SHORT/MEDIUM/LONG settings respectively.]); each observation obtains data from all four IFUs simultaneously at a single setting.
There are thus twelve sub-bands (A/B/C for each of Channels 1-4) that make up the full MRS wavelength range. Longer wavelength
IFUs have progressively larger footprints ranging from 3.2 × 3.7 arcsec in Channel 1 to
6.6 × 7.7 arcsec in Channel 4. As illustrated in Figure <ref>, it is the task of the IFU cube building algorithm
to create a rectified data cube (strictly more of a step-pyramid) from the dispersed spectra of each of the individual fields of view.
The on-orbit performance characteristics of the MRS have been presented by <cit.> and references therein; we summarize here those aspects
relevant to the construction and evaluation of composite data cubes.
In order to maximize the spectral resolution and field of view
for a fixed number of detector pixels, the MRS is significantly spatially
undersampled[The MRS is, to a lesser extent, spectrally undersampled as well.]
by design and does not reach the Nyquist-sampling minimum of at least two
samples per spatial resolution element (Figure <ref>) required to
faithfully reconstruct the original signal.
In the IFU image slicer along-slice direction (α) the sampling
is set by the detector pixel size, which undersamples the PSF for
λ < 15 μm, critically samples the PSF for 15 < λ < 20 μm, and oversamples
the PSF for λ > 20 μm (dashed blue lines). In the across-slice direction
(β) the sampling is set by the width of the IFU slicer optics which undersample the PSF at all wavelengths (orange dashed lines).
These slice widths have thus been carefully chosen <cit.> such that a single across-slice dither
can simultaneously achieve half-integer offsets with respect to the slicers in each
of the four MRS channels.[I.e., 0.97” is 5.5, 3.5, 2.5, and 1.5 times the slice
width in Channels 1-4 respectively.]
Such dithering is essential in order to be able to recover spatial and spectral resolution lost due to the convolution
of the native PSF delivered by the JWST focal optics with the MRS slicer and pixel response functions.
Nonetheless, we note that the measured widths of bright point sources observed during
JWST commissioning do not reach the theoretical diffraction limit, as illustrated by
Figure <ref> (blue and orange points).
As discussed at length by <cit.> and <cit.>, this is due not just to undersampling but also to internal scattering within the MRS detectors, which broadens the effective PSF by multiple internal reflections of the wavefront.
For the purposes of the present contribution, we neglect the differences between the along-slice and across-slice PSF and
describe the average PSF FWHM of the MIRI MRS data as a linear function of wavelength λ
(solid black line in Figure <ref>) given by:
θ = 0.033 (λ/micron) + 0.106 arcsec
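In code form, this relation and the extraction radii used later in this work (in units of the PSF FWHM) can be evaluated as in the short Python sketch below; the particular wavelengths sampled are arbitrary and purely illustrative.

import numpy as np

def mrs_psf_fwhm(wavelength_um):
    # Average MIRI MRS PSF FWHM in arcsec, following the relation above
    return 0.033 * wavelength_um + 0.106

for lam in np.linspace(4.9, 27.9, 5):
    fwhm = mrs_psf_fwhm(lam)
    # aperture radii discussed later, in units of the PSF FWHM
    print(f"{lam:5.1f} um: FWHM = {fwhm:.3f} arcsec, "
          f"r(1.5 FWHM) = {1.5 * fwhm:.3f} arcsec, r(2 FWHM) = {2.0 * fwhm:.3f} arcsec")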
The JWST data pipeline <cit.> consists of three primary stages. In the first stage calwebb_detector1 <cit.>
the raw detector ramps are corrected for cosmic rays and a variety of electronic artifacts to produce a series of uncalibrated slope images
giving the count rate in each detector pixel in units of DN/s. In the second stage calwebb_spec2 these uncalibrated slopes
have the MIRI MRS wavelength calibration <cit.> and distortion model <cit.> attached, are corrected for the MIRI
cruciform artifact <cit.>, flatfielded, flux calibrated, and corrected for fringes produced by
gain modulations resulting from internal reflections within the MRS detectors <cit.>. These calibrated slope
images are provided in units of surface brightness (MJy/sr), and have science (SCI), uncertainty (ERR), and data quality (DQ) extensions. The task of IFU cube building is thus to take these flux-calibrated detector images and combine multiple dithered images into composite 3d science, uncertainty, and data quality cubes.
This task occurs within the third stage of the JWST pipeline calwebb_spec3, along with other steps that can perform background subtraction and outlier detection based on the multiple dithered exposures.[Strictly, cube building also occurs within the
calwebb_spec2 stage to create small preview cubes based on individual exposures.]
1d spectra (e.g., Figure <ref>) can then easily be extracted from these cubes by applying standard 2d aperture photometry techniques to each wavelength plane, with suitable corrections for aperture losses given the reconstructed MRS PSF.
At the present time the JWST pipeline includes two methods for building IFU data cubes: the 3d drizzle approach that is the subject of the present contribution and an alternative based on an exponential Modified-Shepard method (EMSM) weighting function. While we concentrate our discussion on 3d drizzle, we compare briefly against the EMSM approach in <ref>.
§ THE 3D DRIZZLE ALGORITHM
§.§ Base algorithm
As outlined in the original case for 2d imaging by <cit.>, the drizzle algorithm is based around projecting the natively-sampled detector pixels onto a common output grid and allowing the pixel
intensities to `drizzle' down onto the output grid using weights given by the
fractional area overlap between the input and output pixels. Similarly, in the 3d case we need to project the 2d
detector pixels to their corresponding 3d volume elements and allocate
their intensities to the individual voxels of the final data cube according
to their volumetric overlap.
The volumetric overlap of two arbitrarily-oriented hexahedra is a
non-trivial calculation, but can be simplified first by the assumption that
our coordinate frame is aligned with the cube voxels. In this case, our
task reduces to computing the overlap between the irregular projected volumes
of the detector pixels and the regular grid of cube voxels which for simplicity
we assume corresponds to the world coordinates (RA, DEC, λ).[This represents the `skyalign' orientation provided by the JWST pipeline, but cubes can also be produced with arbitrary rotation angles since the actual rectilinear grid is a cartesian plane tangent to the spherical world coordinate system.]
Unlike in the imaging case, the detector pixels illuminated by our slicer-type
IFUs contain a mixture
of degenerate spatial and spectral information.
The spatial extent in the along-slice direction (α) and
the spectral extent in the dispersion direction (λ) both vary continuously within the dispersed image of a given slice
in a manner akin to a traditional slit spectrograph and are sampled by the detector pixels (x,y).
In contrast, the spatial
extent in the across-slice direction (β) is set by the IFU
image slicer width and changes discretely between slices.
The four corners of a detector pixel thus define a tilted hexahedron in (α, λ) space with the front and back faces of the polyhedron defined
by the lines of constant β created by the IFU slicer. (α, β) is itself rotated (and incorporates some degree of optical distortion)
with respect to world coordinates (RA, DEC) and thus the volume element defined by a detector pixel is rotated in a complex manner with respect to the cube voxels
(Figure <ref>).
The iso-α and iso-λ directions are not perfectly orthogonal to each other, and are similarly tilted with
respect to the detector pixel grid. However,
since iso-α is nearly aligned with the detector Y axis (mean tilt of 1± 0.8^∘ across all channels;
see Figure <ref>) and iso-λ nearly aligned with the detector X axis (mean tilt of 4± 3^∘),
we make the additional
simplifying assumption to ignore this small tilt when computing the projected volume of the detector pixels
(see <ref>). Effectively, this means that the surfaces
of the volume element are flat in the α, β, and λ planes[In making such an assumption, we are also implicitly ignoring the
fact that the contents of the 3d volume represented by a given detector pixel are not uniformly scrambled.
As for ordinary slit spectroscopy, spatial substructure within a given
slice in the across-slice (β) direction will be degenerate with
substructure in the spectral dispersion direction.]
and the spatial and spectral overlaps can be computed independently
(see Figure <ref>).
In the spectral domain, the overlap in λ between detector
pixels and cube voxels is a simple 1d problem with a well-known
solution.
If the minimum and maximum wavelengths covered by the ith detector pixel are
given by λ_(i,1) and λ_(i,2) respectively, and the
minimum and maximum wavelengths covered by the jth cube voxel by λ_(j,1) and λ_(j,2), then the total length of overlap δλ[i,j] in the spectral dimension between pixel i and voxel j is:
δλ[i,j] = max(0,max((λ_(j,2)-λ_(i,1)), 0)
- max((λ_(j,2)-λ_(i,2)), 0)
- max((λ_(j,1)-λ_(i,1)), 0))
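A direct Python transcription of this expression is shown below, together with the equivalent min/max form of the 1d interval overlap, used here only as a cross-check; the variable names are illustrative.

def spectral_overlap(pix_lo, pix_hi, vox_lo, vox_hi):
    # Length of wavelength overlap between a detector pixel [pix_lo, pix_hi]
    # and a cube voxel [vox_lo, vox_hi], following the equation above
    return max(0.0,
               max(vox_hi - pix_lo, 0.0)
               - max(vox_hi - pix_hi, 0.0)
               - max(vox_lo - pix_lo, 0.0))

def spectral_overlap_minmax(pix_lo, pix_hi, vox_lo, vox_hi):
    # Equivalent, perhaps more familiar, form of the 1d interval overlap
    return max(0.0, min(pix_hi, vox_hi) - max(pix_lo, vox_lo))

# cross-check on a representative pair of intervals (microns)
assert abs(spectral_overlap(5.10, 5.11, 5.105, 5.115)
           - spectral_overlap_minmax(5.10, 5.11, 5.105, 5.115)) < 1e-12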
In the spatial domain, we use the IFU spatial distortion transform
(RA, DEC) = 𝒟(α,β) mapping from local to world coordinates to
define the four corner coordinates 𝒟(α_1,β_1), 𝒟(α_2,β_1), 𝒟(α_2,β_2), 𝒟(α_1,β_2), where α_1/α_2 are the minimum/maximum along-slice coordinates associated with a given pixel and
β_1/β_2 are the minimum/maximum across-slice coordinates associated
with the corresponding slice.[Note that these coordinate pairs should wrap the footprint rather than crossing it on a
diagonal since most standard area-overlap computations implicitly assume such
an ordering.]
Our exercise is thus to compute the common area δΩ[i,j] between the projected detector pixels defined by
Ω_i = [(RA_(i,1), DEC_(i,1)), (RA_(i,2), DEC_(i,2)),
(RA_(i,3), DEC_(i,3)), (RA_(i,4), DEC_(i,4))]
and the cube spaxels defined by
Ω_j = [(RA_(j,1), DEC_(j,1)), (RA_(j,2), DEC_(j,2)),
(RA_(j,3), DEC_(j,3)), (RA_(j,4), DEC_(j,4))]
This problem can be solved using the well-studied Sutherland-Hodgman <cit.> polygon
clipping algorithm.
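For illustration, a self-contained Python sketch of Sutherland-Hodgman clipping followed by a shoelace-formula area computation is given below. This is not the pipeline implementation; it assumes convex, counter-clockwise-ordered quadrilaterals, which is sufficient for the projected pixel and spaxel footprints considered here.

def clip_polygon(subject, clip):
    # Sutherland-Hodgman clipping of a convex `subject` polygon against a convex
    # `clip` polygon; vertices are (x, y) tuples ordered counter-clockwise.
    def inside(p, a, b):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0.0

    def intersect(p1, p2, a, b):
        # intersection of segment p1-p2 with the infinite line through edge a-b
        dx1, dy1 = p2[0] - p1[0], p2[1] - p1[1]
        dx2, dy2 = b[0] - a[0], b[1] - a[1]
        t = ((a[0] - p1[0]) * dy2 - (a[1] - p1[1]) * dx2) / (dx1 * dy2 - dy1 * dx2)
        return (p1[0] + t * dx1, p1[1] + t * dy1)

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        input_list, output = output, []
        for j in range(len(input_list)):
            p_prev, p_curr = input_list[j - 1], input_list[j]
            if inside(p_curr, a, b):
                if not inside(p_prev, a, b):
                    output.append(intersect(p_prev, p_curr, a, b))
                output.append(p_curr)
            elif inside(p_prev, a, b):
                output.append(intersect(p_prev, p_curr, a, b))
        if not output:
            break
    return output

def polygon_area(poly):
    # shoelace formula
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                         - poly[(i + 1) % len(poly)][0] * poly[i][1]
                         for i in range(len(poly))))

# overlap area between an illustrative projected pixel footprint and a cube spaxel
pixel = [(0.0, 0.0), (1.1, 0.1), (1.0, 1.2), (-0.1, 1.0)]
spaxel = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
d_omega = polygon_area(clip_polygon(pixel, spaxel))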
As in the imaging case, we can choose to shrink our effective pixel borders
if desired by a fractional amount p
prior to computation of the common overlap area. Doing so can improve the effective PSF FWHM of the drizzled
data cubes by about 5 - 10%
<cit.>,
but at the cost of making spectral resampling artifacts (<ref>) substantially worse.
Adapting the formalism outlined by <cit.> for the SDSS-IV/MaNGA IFU,
we describe our input data as vectors of specific intensity
f[i] (given for JWST in units of MJy sr^-1) and variance g[i] for all
i corresponding to valid detector pixels across all exposures to be
combined together. Likewise, we define F[j] and G[j] to represent the
specific intensity and variance of the j voxels in the final 3d data cube.
We can thus write F as the matrix product of W and f
F = W × f
where the normalized weights W are given by
W[i,j] = δλ[i,j] δΩ[i,j]/∑_i δλ[i,j] δΩ[i,j]
The covariance matrix C is given by
C = W × (g' × W^⊤)
where g' is the variance matrix whose diagonal elements are the g[i] and
all non-diagonal elements are zero.
Equivalently, the final voxel specific intensities and variances can also be
written in index notation as
F[j] = ∑_i f[i] W[i,j]
and
G[j] = ∑_i g[i] (W[i,j])^2
respectively.
Once computed, these vectors F and G can be trivially rearranged
into 3d arrays corresponding to the output data cube dimensions.
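In practice the weights W are extremely sparse, and the sums above can be accumulated efficiently from a flattened list of non-zero pixel/voxel overlaps. The following Python sketch illustrates the computation of F and G from such a list using scipy.sparse; the function and variable names are ours and not those of the JWST pipeline.

import numpy as np
from scipy import sparse

def build_cube(flux, var, pix_idx, vox_idx, overlap, n_vox):
    # flux, var : per-pixel specific intensity f[i] and variance g[i]
    # pix_idx, vox_idx, overlap : flattened lists of (i, j, dlambda*dOmega)
    #                             for every non-zero pixel/voxel overlap
    # n_vox     : total number of voxels in the output cube
    w = sparse.csr_matrix((overlap, (vox_idx, pix_idx)),
                          shape=(n_vox, len(flux)))
    norm = np.asarray(w.sum(axis=1)).ravel()            # sum_i dlambda*dOmega for each voxel
    norm[norm == 0.0] = 1.0                             # avoid division by zero for empty voxels
    F = np.asarray(w @ flux).ravel() / norm             # F[j] = sum_i f[i] W[i,j]
    G = np.asarray(w.power(2) @ var).ravel() / norm**2  # G[j] = sum_i g[i] (W[i,j])^2
    return F, G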
We note that this formalism is agnostic about the origin of the input data; while designed
for use with the JWST MIRI and NIRSpec IFUs, it can equally well be applied
to data obtained with ordinary slit-type instrument modes or even the NIRSpec microshutter array. In this case the pixel
footprint would be defined by the pixel size in the along-slit direction
and the shutter width in the dispersion direction, and the same core algorithm
could be used to combine stepped-slitlet observations into composite data
cubes in future JWST observing cycles.
§.§ Application to the JWST pipeline environment
In terms of practical implementation in a pipeline environment, there is a
tension between the numerical accuracy of the solution and the CPU runtime
required to reach that solution. With regard to our simplifying assumption that we
could separate the spatial and spectral components of the volume overlap
calculation, we compared two different
pure python versions of the 3d drizzle algorithm; one in
which this simplification is made, and the other in which we calculate
the full volumetric overlap using a series of convex polyhedra.
The simplified code is found to run approximately 500 times faster, with a runtime
of about 15 minutes to build a data cube from a single MIRI MRS exposure
compared to approximately 120 hours using a standard Mac desktop. The cubes themselves, however,
have statistically insignificant differences on the order of 0.03% near pixels
whose trace tilt is approximately 10^∘ and approximately
0.003% near pixels with a wavelength tilt of nearly 0^∘.
Given that the maximum MRS iso-λ tilt is 12^∘ and the maximum iso-α tilt is just
3.3^∘, we conclude
that we can safely use the simplified method.[Likewise, the impact of this approximation on the reconstructed cubes will
be small compared to the instrumental systematics produced by fringing, PSF defocus, and the cruciform artifact <cit.>.]
Even a 15-minute runtime for a simple single-frame data cube is impractical
however since cube building is run many times within the JWST pipeline architecture,
and typically with far more than a single exposure.
In contrast, we found that a python-wrapped C implementation of the core
3d drizzle algorithm reduced the runtime for a single-exposure MIRI MRS cube to just
7 seconds, representing a factor of ∼ 100 improvement with identical
results.
The optimal sampling parameters of the final data cube will vary as a function
of wavelength, and should be designed to approximately Nyquist sample both
the spatial point spread function (PSF) and the spectral line spread function
(LSF) in order to not lose information in the final data products.
Based on the observed PSF FWHM values for MIRI MRS data (see discussion
in <ref>), we adopt default values of 0.13, 0.17, 0.20, and 0.35 arcsec for the cube spaxel scales in Channels 1-4 respectively. Likewise, based on the typical spectral resolving power R
within each band <cit.>, we adopt default spectral sampling steps for the four
channels of
0.8 nm, 1.3 nm, 2.5 nm, and 6.0 nm
respectively.
The JWST pipeline is thus able to construct data cubes with a common linear
spaxel scale and wavelength scale across all three grating settings within
each of the four MRS wavelength channels.
It is often convenient, however, to construct composite drizzled data cubes across longer wavelength ranges combining multiple spectral channels in order to visualize
the corresponding spectra effectively. Given the factor of ∼ 5 in wavelength covered by MIRI MRS (and the factor of ∼ 8 covered by NIRSpec),
this poses some challenges. In particular, both the effective PSF FWHM
and spectral LSF change significantly over this range, meaning that the ideal cube voxel size
is different for each channel which is difficult to accommodate within the FITS
standard. The pipeline therefore makes available a `multi-channel' drizzling option
in which the linear spatial sampling is set to the ideal
value for the shortest wavelengths and the non-linear spectral sampling is designed to
provide roughly Nyquist
sampling of the spectral resolving power at all wavelengths (see Figure <ref>).
An example of a 1d spectrum extracted from such a multi-channel data cube was illustrated previously in Figure <ref>
using a conical extraction radius that increases as a function of wavelength to account for the increasing PSF FWHM.
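The construction of such a non-linear wavelength grid amounts to stepping through wavelength with Δλ = λ / (n R(λ)), where n is the desired number of samples per spectral resolution element. The Python sketch below illustrates the idea; the resolving-power curve R(λ) used here is a rough linear stand-in, not the calibrated MRS curve.

import numpy as np

def multichannel_wavegrid(lam_min=4.9, lam_max=27.9, samples_per_fwhm=2.0):
    # Non-linear wavelength grid with step dlam = lam / (samples_per_fwhm * R(lam))
    def resolving_power(lam):
        # assumption: roughly 3500 at short wavelengths falling to ~1700 at long
        # wavelengths; illustrative only
        return 3500.0 - 80.0 * (lam - 4.9)

    grid = [lam_min]
    while grid[-1] < lam_max:
        lam = grid[-1]
        grid.append(lam + lam / (samples_per_fwhm * resolving_power(lam)))
    return np.array(grid)

wave = multichannel_wavegrid()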
While the 3d drizzle algorithm can thus combine data from multiple
spectral bands (and even, in principle, between MIRI MRS and NIRSpec) we
note that this can complicate the analysis of the data within the spectral overlap regions between different bands.
In these regions cube building may combine together data with quite different spatial and/or spectral resolutions. In the spatial domain, the different slicer widths between channels can produce a discontinuous jump in the central PSF structure and require correspondingly different aperture correction factors for small extraction radii. In the spectral domain such a combination of data from different bands can produce a hybrid LSF, although as
discussed by <cit.>
for relatively small LSF differences (∼ 20%) such a combination can be described to reasonable accuracy as the simple mean of the
two LSF widths.
§ COVARIANCE
Since the process of building a rectified 3d data cube involves resampling the original data, it necessarily
introduces covariance into the resulting cubes. This covariance between individual voxels means
that the variance of spectra extracted from these data cubes does not improve with increasing aperture radius as quickly as
might otherwise be expected, and is thus important to take into consideration in the analysis of IFU spectra.
Formally, the covariance of JWST data cubes can be computed for an arbitrary set of observations using Equation <ref>.
For a typical MRS Channel 1 data cube (dimensions ∼ 50 × 50 × 3500), the corresponding covariance matrix has
nearly 10^14 elements describing the correlation between every pair of voxels, requiring roughly 0.3 PB to store.
Fortunately this matrix is extremely sparse for the 3d drizzle weighting function, with all non-zero entries closely confined to within
a few elements of the diagonal. In the future, the JWST pipeline may provide such sparse matrices
since they can be useful in computing SNR statistics for binned regions of the data cubes <cit.>.
In the present contribution, we simply estimate an approximate scaling factor that can be applied to the variance computed
for 1d extracted spectra as a function of the aperture extraction radius for typical MIRI MRS data cubes.
Such an approach has a long history of use in the literature, and was adopted for both the CALIFA IFU survey <cit.> and early
MaNGA IFU survey <cit.>, in addition to classical HST imaging applications <cit.>.
We determine this scaling factor empirically by taking a set of 4-pt dithered MRS observations,
replacing all ERR extension values with an arbitrary constant value of 0.1 MJy/sr, and replacing all
SCI extension values with quantities
drawn from a gaussian random distribution centered around 1 MJy/sr with a 1σ distribution width of 0.1 MJy/sr. We then construct
science and error cubes from the data following the prescriptions in <ref>
and the default cube parameters given in <ref>
to obtain a single realization of the composite
cube. We repeat this exercise 20 times with a different seed for the random number generator.
For every voxel in the data cube, we can thus directly compare the pipeline-estimated variance in the voxel with the actual variance observed
between the 20 random trials. Similarly, for arbitrary aperture sizes we can extract 1d spectra from the data cubes
and compare the nominal diagonal variance in the spectrum σ_ diag to the effective variance σ_ meas observed between spectra extracted from the 20 random realizations.
We show the results of this experiment in Figure <ref> for both channel-specific IFU cubes and multi-channel IFU cubes
spanning the entire MRS wavelength range.[There is no significant dependence on location within the FOV or wavelength within a given channel.]
Figure <ref> demonstrates that the ratio σ_ meas/σ_ diag increases asymptotically with increasing aperture radius. For channel-specific
cubes, this ratio is 1.66, 1.62, 1.78, and 1.53 at r = 2.0 FWHM for Channels 1-4 respectively (default spaxel scales 0.13, 0.17, 0.20, and 0.35 arcsec). The corresponding ratios for the multi-channel cubes (1.58, 1.87, 2.36, and 3.03 for Channels 1-4 respectively) increase significantly at longer wavelengths due to the significant oversampling of the larger PSF by a spaxel scale designed for short wavelengths.
We therefore recommend that scientific analyses using spectra extracted from the JWST
data cubes scale their variance accordingly in order to account for this effect.
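The empirical measurement of σ_meas/σ_diag described above can be summarized by the following Python sketch, which assumes that the cubes built from the individual noise realizations and the pipeline ERR cube are available as numpy arrays; the aperture definition and the use of the median over wavelength are our own illustrative choices.

import numpy as np

def covariance_ratio(realization_cubes, err_cube, center, radius):
    # realization_cubes : array (n_realizations, nz, ny, nx) of cubes built from
    #                     independent noise realizations
    # err_cube          : pipeline ERR cube (nz, ny, nx) for one realization
    # center, radius    : aperture center (y, x) and radius in spaxels
    nz, ny, nx = err_cube.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    aperture = (yy - center[0])**2 + (xx - center[1])**2 <= radius**2

    # 1d spectra summed over the aperture, one per realization
    spectra = realization_cubes[:, :, aperture].sum(axis=2)
    sigma_meas = spectra.std(axis=0)                              # empirical scatter
    sigma_diag = np.sqrt((err_cube[:, aperture]**2).sum(axis=1))  # naive (diagonal) error
    return np.median(sigma_meas / sigma_diag)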
§ CONSEQUENCES OF UNDERSAMPLING
As discussed in <ref>, the MRS is significantly undersampled
compared to the ideal Nyquist frequency. As a result, resampling the 2d detector data
into 3d rectified data cubes produces aliasing artifacts
<cit.>
in the final data products.[This resampling noise is in addition to the large (∼ 30%) periodic amplitude modulations produced in
MIRI spectra due to fringing within the instrument <cit.>.]
While dithering helps to mitigate these artifacts by filling in the missing phase space
it does not eliminate them entirely.
We illustrate the origin of these artifacts schematically using a 1d
example in Figure <ref>.
In this figure we plot a 1d gaussian function representing an intrinsic
signal (black line in upper panels) and convolve that function with a tophat pixel response function to determine the counts that would be recorded in the pixelized representation
of that function (filled orange circles). The pixel width is here chosen to be roughly half-Nyquist,
similar to the case of the MRS at many wavelengths. The pixelized representation of the
function is then resampled to a different pixel grid using a 1d version
of the drizzle algorithm (filled blue squares). While the integrated counts in the resampled
function are independent of the phase ϕ of the intrinsic signal peak with respect
to the pixel boundaries, the profiles change dramatically. The flux in the peak
resampled pixel for instance varies from 50% to 80% of the intrinsic peak flux; as
illustrated in the lower-left panel this value varies systematically with the pixel phase.
If we assume that the pixel phase of the signal peak varies with wavelength in a manner
akin to the trace of the spectral dispersion on the MRS detectors in Channel 1A, we would
therefore expect a peak resampled count rate that shows a sinusoidal oscillation with
wavelength (lower-right panel). The effective frequency of this sinusoidal oscillation
changes with wavelength according to the tilt between the spectral dispersion and the detector
pixel grid; at the ends of the detector where the trace crosses pixel boundaries faster
the oscillation frequency is higher than near the middle of the detector where the trace
is nearly aligned with the detector Y axis.
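The 1d experiment described above can be reproduced with a few lines of Python; the sketch below integrates a Gaussian over top-hat pixels at a given pixel phase, drizzles the result onto a shifted output grid of the same pixel width, and records the peak resampled value. The particular Gaussian width and pixel sizes are illustrative choices giving roughly half-Nyquist sampling.

import numpy as np
from scipy.special import erf

def pixelize_gaussian(phase, sigma=1.0, pix=2.0, npix=15):
    # Integrate a unit-area Gaussian (offset by `phase` pixels relative to the
    # pixel grid) over top-hat detector pixels of width `pix`.
    edges = (np.arange(npix + 1) - npix / 2 + phase) * pix
    cdf = 0.5 * (1.0 + erf(edges / (np.sqrt(2.0) * sigma)))
    return np.diff(cdf), edges

def drizzle_1d(values, in_edges, out_edges):
    # Distribute pixel values onto a new grid in proportion to linear overlap
    out = np.zeros(len(out_edges) - 1)
    for v, lo, hi in zip(values, in_edges[:-1], in_edges[1:]):
        for j, (olo, ohi) in enumerate(zip(out_edges[:-1], out_edges[1:])):
            out[j] += v * max(0.0, min(hi, ohi) - max(lo, olo)) / (hi - lo)
    return out

# peak resampled value as a function of pixel phase
out_edges = np.linspace(-15.0, 15.0, 16)        # output pixels of the same width
for phase in np.linspace(0.0, 1.0, 5):
    vals, in_edges = pixelize_gaussian(phase)
    peak = drizzle_1d(vals, in_edges, out_edges).max()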
In three dimensions the situation is slightly more complex but the effect broadly equivalent.
In Figure <ref> (left-hand panels) we show the results of numerical simulations
in which we constructed a detailed model of the JWST PSF using the WebbPSF tool <cit.>, and
simulated a source with constant 1 Jy spectrum with a realistic model of the MRS distortion
solution and flight-like dither patterns. These simulations neglected all sources of noise and detector responsivity variations (e.g., fringing, spectrophotometric calibration, etc) and considered
only the effective sampling of each detector pixel based on the instrument distortion model.
These dithered data were combined into a 3d data cube using the drizzle algorithm defined
in <ref>.
The spectrum of a single cube spaxel (solid blue line) shows the same kind of wavelength-dependent amplitude variations as seen in the simplified 1d model shown in Figure <ref>.
These expectations are confirmed by in-flight observations. In Figure <ref> (right-hand panels) we plot spectra extracted from Cycle 1 calibration observations of the 6th-magnitude G3V star 16CygB (JWST Program ID 1538). These spectra show similar large-scale amplitude
variations with comparable changes in frequency as a function of wavelength.
Such behaviour has been observed before in data from previous generations of space-based telescopes.
<cit.> for instance discussed the impact of such pixel-phase effects on the
reconstruction of the point spread function for HST/WFC3 imaging data, while <cit.>
discussed the effect on HST/STIS spectra whose spectral traces change pixel phase rapidly.
The JWST/MIRI IFU case is akin to a combination of the two; plotting the spectrum of a single
cube spaxel is similar to trying to compare source fluxes in imaging data by plotting the peak
value of the drizzled image for each source rather than doing proper aperture photometry.[Such effects are less pronounced in data cubes produced by fiber-fed spectrographs such as MaNGA because of azimuthal scrambling in combination with the relevant
sampling taking place at the fiber face whose footprint changes only slowly with wavelength
due to chromatic differential refraction.]
As for the imaging and single slit cases, the solution to this problem for IFU spectroscopy
is a combination of dithering and a suitably large spectral extraction aperture.
As illustrated by Figure <ref>, the amplitude of this 'resampling noise' decreases
with the use of a four-point dither pattern that samples the PSF at roughly
half-integer intervals. A two-point pattern in contrast does not significantly improve
resampling noise beyond the undithered case, largely because optical distortions
in the MRS combined with pixel-mapping discontinuities between slices
prohibit such a simple pattern from actually achieving half-integer sampling
throughout the entire field of view.
Similarly, resampling noise decreases dramatically with the use of larger extraction
apertures. Even undithered data for instance show little to no resampling noise when
using apertures whose radius is 2.0 times the PSF FWHM. This is unsurprising, given
we know that the total integrated flux is conserved.
We estimate the impact of this resampling noise on spectra throughout the MRS wavelength
range using MIRI Cycle 1 calibration observations of bright G3V star 16CygB
(JWST Program ID 1538).
In each of the twelve MRS bands, we extract spectra from the composite data cubes using apertures of radius 0[I.e., a single spaxel.], 0.5, 1.0, 1.5, and 2.0 times the PSF FWHM given by Eqn <ref>.
We apply a 1d residual fringe correction <cit.> to our extracted spectra in order to remove periodic modulations of known frequency produced by fringing within the MIRI instrument that are unrelated to any sampling considerations <cit.>. We then compute the normalized ratio between spectra extracted from the smaller apertures to
spectra extracted from the r = 2 FWHM aperture (the default for the MIRI MRS pipeline), and measure
the semi-amplitude of the resampling-induced oscillation.[We do not apply the aperture correction factors necessary to obtain the true flux from each aperture, as our normalization would eliminate such global correction factors anyway.]
These values are plotted in
Figure <ref> for cubes reconstructed from single undithered observations, two-point
dithered data, and four-point dithered data.
In order to reduce sampling artifacts in the recovered spectra to 5% or less at all wavelengths we recommend use of a 4-point dither pattern with an extraction radius
of at least 0.5 times the PSF FWHM, while reducing such artifacts
below 1% requires an extraction radius of at least 1.5 times the PSF FWHM.[As a practical consequence, resampling noise will be larger in the preview cubes produced by the calwebb_spec2 stage of the JWST pipeline which incorporate just a single exposure than in the calwebb_spec3 data cubes combining multiple dithered exposures.]
§ CONCLUSIONS
We have presented an algorithm for the extension of the classic 2d
drizzle technique to three dimensions for application to the MIRI and NIRSpec integral
field unit spectrometers aboard JWST. While this technique is conceptually straightforward and relies
on a weight function proportional to the volumetric overlap between native detector pixels
and resampled data cube voxels, in practice additional simplifications are necessary to
make the problem computationally tractable. By separating the spatial and spectral
overlap computations it is possible to achieve orders of magnitude gains in computational time at
minimal cost in spectrophotometric accuracy. Such gains, along with speed improvements
provided by implementation of the core algorithm in a pre-compiled (i.e., C-based)
architecture are essential for implementation of this approach in a practical
pipeline environment.
While this algorithm provides a matrix-based formalism for computation of the variance
of the final cube spaxels and the covariance between such spaxels, the full covariance
matrix is intractably large for practical use. This covariance means that
the effective variance in extracted spectra does not average down with the inclusion
of more cube spaxels in the naively expected manner. We therefore computed a series of
simple scaling relations that can be applied to spectra extracted from MIRI MRS
data cubes in order to account for covariance between cube spaxels. Depending on the
wavelength band in question, the cube sampling, and the aperture radius, such correction factors range from ∼ 1.5 to 3.
Finally, we discussed the practical implications of the severe (factor ∼ 2)
undersampling of the MIRI MRS on the quality of the calibrated data cubes provided by the
3d drizzle technique. We demonstrated that this undersampling imparts a periodic
amplitude modulation of up to 20% into spectra extracted from such data cubes without
efforts to mitigate it. With the use of appropriate dithering (at least 4 points)
and spectral aperture extraction radii (r = 1.5 FWHM or greater) these sampling artifacts
can be reduced to 1% or less throughout the MRS spectral range.
We thank Eric Emsellem and Peter Weilbacher for helpful discussions with regard to IFU cube building applications to the WHT/SAURON and VLT/MUSE integral field spectrographs.
Ioannis Argyriou and Bart Vandenbussche thank the European Space Agency (ESA) and the Belgian Federal Science Policy Office (BELSPO) for their support in the framework of the PRODEX Programme.
J.A.-M. acknowledge support by grant PIB2021-127718NB-100 from the Spanish Ministry of Science and Innovation/State Agency of Research MCIN/AEI/10.13039/501100011033 and by “ERDF A way of making Europe”.
The following National and International Funding Agencies funded and supported the MIRI development: NASA; ESA; Belgian Science Policy Office (BELSPO); Centre Nationale d'Etudes Spatiales (CNES); Danish National Space Centre; Deutsches Zentrum für Luft- und Raumfahrt (DLR); Enterprise Ireland; Ministerio de Economía y Competitividad; Netherlands Research School for Astronomy (NOVA); Netherlands Organisation for Scientific Research (NWO); Science and Technology Facilities Council; Swiss Space Office; Swedish National Space Agency; and UK Space Agency.
This manuscript uses data from DOI 10.17909/63a3-d535
[Anderson(2016)]anderson16 Anderson, J. 2016, Instrument Science Report WFC3 2016-12, 42 pages
[Argyriou et al.(2020)]argyriou20 Argyriou, I., Rieke, G. H., Ressler, M. E., et al. 2020, , 11454, 114541P. doi:10.1117/12.2561502
[Argyriou et al.(2023)]argyriou23 Argyriou, I., Glasse, A., Law, D. R., et al. 2023, arXiv:2303.13469. doi:10.48550/arXiv.2303.13469
[Böker et al.(2022)]boker22 Böker, T., Arribas, S., Lützgendorf, N., et al. 2022, , 661, A82. doi:10.1051/0004-6361/202142589
[Bushouse et al.(2022)]bushouse22 Bushouse, H., Eisenhamer, J., Dencheva, N., et al. 2022, Zenodo
[Casertano et al.(2000)]casertano00 Casertano, S., de Mello, D., Dickinson, M., et al. 2000, , 120, 2747. doi:10.1086/316851
[Dressel et al.(2007)]dressel07 Dressel, L., Hodge, P., & Barrett, P. 2007, Instrument Science Report STIS 2007-04, 20 pages
[Fruchter & Hook(2002)]fh02 Fruchter, A. S. & Hook, R. N. 2002, , 114, 144. doi:10.1086/338393
[Husemann et al.(2013)]husemann13 Husemann, B., Jahnke, K., Sánchez, S. F., et al. 2013, , 549, A87. doi:10.1051/0004-6361/201220582
[Jones et al.(2023)]jones23 Jones, O. C., Álvarez-Márquez, J., Sloan, G. C., et al. 2023, arXiv:2301.13233. doi:10.48550/arXiv.2301.13233
[Kavanagh et al. (2023)]kavanagh23 Kavanagh, P. et al. 2023, in prep.
[Labiano et al.(2021)]labiano21 Labiano, A., Argyriou, I., Álvarez-Márquez, J., et al. 2021, , 656, A57. doi:10.1051/0004-6361/202140614
[Law et al.(2016)]law16 Law, D. R., Cherinka, B., Yan, R., et al. 2016, , 152, 83
[Law et al.(2021)]law21 Law, D. R., Westfall, K. B., Bershady, M. A., et al. 2021, , 161, 52. doi:10.3847/1538-3881/abcaa2
[Morrison et al. (2023)]morrison23 Morrison, J. et al. 2023, in prep.
[Mueller et al. (2023)]mueller23 Mueller, M. et al. 2023, in prep.
[Patapis et al. (2023)]patapis23a Patapis, P. et al. 2023a, in prep.
[Patapis et al. (2023)]patapis23b Patapis, P. et al. 2023b, in prep.
[Perrin et al.(2014)]perrin14 Perrin, M. D., Sivaramakrishnan, A., Lajoie, C.-P., et al. 2014, , 9143, 91433X. doi:10.1117/12.2056689
[Regibo(2012)]regibo12 Regibo, S. 2012, Ph.D. Thesis
[Ressler et al.(2015)]ressler15 Ressler, M. E., Sukhatme, K. G., Franklin, B. R., et al. 2015, , 127, 675. doi:10.1086/682258
[Rieke et al.(2015)]rieke15 Rieke, G. H., Ressler, M. E., Morrison, J. E., et al. 2015, , 127, 665. doi:10.1086/682257
[Sánchez et al.(2012)]sanchez12 Sánchez, S. F., Kennicutt, R. C., Gil de Paz, A., et al. 2012, , 538, A8. doi:10.1051/0004-6361/201117353
[Sharp et al.(2015)]sharp15 Sharp, R., Allen, J. T., Fogarty, L. M. R., et al. 2015, , 446, 1551. doi:10.1093/mnras/stu2055
[Smith et al.(2007)]smith07 Smith, J. D. T., Armus, L., Dale, D. A., et al. 2007, , 119, 1133. doi:10.1086/522634
[Sutherland & Hodgman(1974)]sh74 Sutherland, I. E. & Hodgman, G. W. 1974, “Reentrant polygon clipping”, Commun. ACM, Vol. 17, No. 1, pp. 32-42.
[Weilbacher et al.(2020)]weilbacher20 Weilbacher, P. M., Palsa, R., Streicher, O., et al. 2020, , 641, A28. doi:10.1051/0004-6361/202037855
[Wells et al.(2015)]wells15 Wells, M., Pel, J.-W., Glasse, A., et al. 2015, , 127, 646. doi:10.1086/682281
[Westfall et al.(2019)]westfall19 Westfall, K. B., Cappellari, M., Bershady, M. A., et al. 2019, , 158, 231. doi:10.3847/1538-3881/ab44a2
[Wright et al. (2023)]wright23 Wright, G. et al. 2023, PASP submitted.
§ APPLICATION TO JWST/NIRSPEC
While our analysis in this contribution has focused primarily on the JWST/MIRI MRS IFU, all of the considerations of resampling noise and covariance likewise apply to data obtained with the JWST/NIRSpec IFU as well. We do not explore all operational modes of NIRSpec in detail, but present here a brief analysis of a single band to demonstrate this similarity. For this comparison we use observations of the standard star GSPC P330-E (spectral type G2V) obtained by JWST Program ID 1538 (Observation 62) in the G140H/F100LP grating using a 4-point nod dither pattern.
After processing the observations through the JWST pipeline we use the 3D drizzle algorithm to build data cubes from individual exposures and from the composite 4-point dither sequence. In Figure <ref> we plot the resulting spectrum in the wavelength range λλ 1.00-1.40 μm for a single spaxel located in the center of the star for the individual-exposure cube (solid blue line) and from a three-spaxel radius aperture around the star in the 4-point dithered data cube (solid orange line).
As for the MIRI MRS observations, we note a strong beating due to resampling noise in the single-spaxel spectrum whose frequency varies with wavelength according to the rate at which the dispersed spectral traces cross detector pixel boundaries. This beating is significantly reduced in the spectrum extracted from dithered data with a moderate aperture extraction radius.
Likewise, if we repeat our analysis from <ref> with the NIRSpec data we can estimate the covariance between spaxels in the rectified data cubes. In the case of G140H/F100LP we find a covariance factor of about 1.24 for apertures three spaxels in radius. This factor is somewhat lower than shown for typical MIRI MRS observations in Figure <ref> because the spaxel scale of the
data cube (0.1 arcsec by default) is comparable to the native pixel scale and slice width with which the scene was originally sampled (in contrast, the MIRI MRS Channel 1 default spaxel scale is about 70% of the native sampling size in order to maximize the angular resolution of the reconstructed cubes).
If the NIRSpec cubes were reprocessed with a finer spaxel scale the covariance would increase accordingly.
§ EXPONENTIAL MODIFIED SHEPARD METHOD (EMSM)
As noted in <ref>, the JWST pipeline also contains a second independent method of building data cubes. This second method uses a flux-conserving variant of Shepard's interpolation method commonly employed for ground-based IFUs <cit.>. In brief, each detector pixel is described by its midpoint position in (, , λ) and combined into the final data cube using a 3d weighting kernel w described by
w = e^-(x^2 + y^2 + z^2)/σ^2
where x, y, z are normalized distances in voxel units in the three axes of the data cube (two spatial axes and one spectral) and σ is the scale radius of the exponential profile. Since this function formally extends to infinity, it must be truncated within some limiting region of influence in both spatial and spectral dimensions and normalized by the sum of individual weights to ensure flux conservation.
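A minimal Python sketch of this weighting scheme is given below; the truncation radii and kernel scale are free parameters whose names and values here are purely illustrative.

import numpy as np

def emsm_weights(dx, dy, dz, sigma, rlim_xy, rlim_z):
    # Exponential Modified-Shepard weights for detector samples at normalized
    # voxel-unit offsets (dx, dy, dz) from a cube voxel center. Samples outside
    # the region of influence get zero weight; the remaining weights are
    # normalized so they sum to one (flux conservation).
    w = np.exp(-(dx**2 + dy**2 + dz**2) / sigma**2)
    w[(dx**2 + dy**2 > rlim_xy**2) | (np.abs(dz) > rlim_z)] = 0.0
    total = w.sum()
    return w / total if total > 0 else w

# flux assigned to one voxel from nearby point samples (illustrative values)
dx = np.array([0.1, -0.6, 1.4])
dy = np.array([0.0, 0.3, -0.2])
dz = np.array([0.2, -0.1, 0.0])
flux = np.array([1.0, 1.2, 0.9])
voxel_flux = np.sum(emsm_weights(dx, dy, dz, sigma=0.7, rlim_xy=1.2, rlim_z=0.5) * flux)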
While the 3d drizzle approach has no free parameters (other than the desired output voxel size), EMSM thus has many such parameters including the scaling and truncation radii in all three dimensions that must be tuned to provide reasonable performance for each
spectral band. Effectively, while the 3d drizzle method apportions flux from a given pixel evenly to the region defined by the boundaries of that pixel, the EMSM method apportions flux to an exponential profile whose size and shape can be tuned by the user.
In practice, data cubes constructed using the EMSM approach have marginally lower spatial and spectral resolution than their drizzled counterparts due to the introduction of the exponential convolution kernel which typically extends beyond the native pixel boundaries. This smoothing can also result in differences in the effective resampling noise between cubes constructed using the drizzle vs EMSM methods.
In Figure <ref> we illustrate one such example from the MRS Channel 3B observations of 16CygB; in this case the brightest-spaxel spectrum obtained from the EMSM data cube (black line) shows resampling artifacts that are about half the amplitude of those in the drizzled data cube (grey line). Similar results can be obtained from the drizzled data cube by convolving it with a spatial gaussian kernel comparable to the EMSM scale factor σ (Figure <ref>, red line) prior to spectral extraction.
|
http://arxiv.org/abs/2306.03597v1
|
20230606113614
|
Human-Object Interaction Prediction in Videos through Gaze Following
|
[
"Zhifan Ni",
"Esteve Valls Mascaró",
"Hyemin Ahn",
"Dongheui Lee"
] |
cs.CV
|
[
"cs.CV"
] |
Zhifan Ni (corresponding author), Technical University of Munich (TUM), Arcisstr. 21, Munich 80333, Germany; [email protected]
Esteve Valls Mascaró, Technische Universität Wien (TU Wien), Karlsplatz 13, Vienna 1040, Austria; [email protected]
Hyemin Ahn, Ulsan National Institute of Science and Technology (UNIST), UNIST-gil 50, Ulsan 44919, Republic of Korea; [email protected]
Dongheui Lee, Technische Universität Wien (TU Wien), Karlsplatz 13, Vienna 1040, Austria, and German Aerospace Center (DLR), Muenchener Str. 20, Wessling 82234, Germany; [email protected]
Understanding the human-object interactions (HOIs) from a video is essential to fully comprehend a visual scene. This line of research has been addressed by detecting HOIs from images and more recently from videos. However, the video-based HOI anticipation task in the third-person view remains understudied. In this paper, we design a framework to detect current HOIs and anticipate future HOIs in videos. We propose to leverage human gaze information since people often fixate on an object before interacting with it. These gaze features together with the scene contexts and the visual appearances of human-object pairs are fused through a spatio-temporal transformer. To evaluate the model in the HOI anticipation task in a multi-person scenario, we propose a set of person-wise multi-label metrics. Our model is trained and validated on the VidHOI dataset, which contains videos capturing daily life and is currently the largest video HOI dataset. Experimental results in the HOI detection task show that our approach improves the baseline by a large relative margin of 36.3%. Moreover, we conduct an extensive ablation study to demonstrate the effectiveness of our modifications and extensions to the spatio-temporal transformer. Our code is publicly available at <https://github.com/nizhf/hoi-prediction-gaze-transformer>.
§ INTRODUCTION
Detecting human-object interactions (HOIs) is a fundamental step toward high-level comprehension of scenes. Compared to instance-level visual recognition tasks such as object detection <cit.> and action recognition <cit.>, HOI detection can provide more contextual and fine-grained cues for scene understanding. However, real-world applications, such as robotics, autonomous driving, and surveillance system, usually need to reason about a scene and generate a plausible HOI anticipation for the near future. For instance, as shown in Fig. <ref>, the person on the right is pushing a bicycle and walking towards a door. Based on this observation, if an intelligent system could anticipate that the human will open the door, it could assist that person to perform this interaction beforehand. Then the human could leave the room without interruption. Thus, a framework that can forecast future HOIs from a video is essential.
However, HOI detection and anticipation are still challenging as multiple humans and objects may appear in a scene and a human may have multiple interactions with multiple objects. In addition, the dependencies between frames are crucial to understand the temporal evolution of human interactions. Due to these difficulties, most existing approaches are only designed for HOI detection in static images. Conventional methods <cit.> often contain two stages. First, an object detector is applied to locate humans and objects. Second, a multi-stream classifier predicts the interactions for each human-object pair. To increase the model efficiency, several one-stage or end-to-end methods <cit.> are proposed to generate object detection and interaction classes in parallel.
While image-based HOI detectors show great performance on image datasets, they may perform poorly on video datasets because they cannot exploit the temporal cues required to distinguish between some continuous interactions, such as opening or closing a door <cit.>. Hence, a few works <cit.> have been proposed to leverage the temporal dependencies between frames, and they demonstrate superior performance to the image-based methods. However, these approaches do not consider the human gaze as an additional feature, even though it often provides valuable information about human intentions <cit.>.
For an intelligent system to collaborate with humans effectively, recognizing only the current HOIs is not sufficient. The ability to anticipate subsequent HOIs is beneficial for task planning and danger avoidance. Nevertheless, there are very few studies addressing the HOI anticipation task from the third-person view <cit.>, and these works are conducted on small-scale datasets and cannot be generalized to real-world applications.
Thus, we propose a multimodal framework that leverages visual appearance features, semantic contexts, and human gaze cues to tackle HOI detection and anticipation tasks in videos. To our best knowledge, our work is the first to utilize gaze features in video-based HOI anticipation, and the first to anticipate HOIs in multi-person scenarios. Our framework works in two stages: in the first stage, an object module detects and tracks humans and objects across the video, and a gaze module leverages human head features to identify where each human is looking at every instant. In the second stage, a spatio-temporal transformer aggregates all extracted features from a sliding window of frames to infer the current or future HOIs. Our spatio-temporal transformer is inspired by the STTran model <cit.>. However, we observe several limitations in the STTran architecture that diminish its performance. First, we notice that using the spatial encoder to implicitly extract intra-frame contexts yields a very small benefit. Since the global scene context is useful for vision-related tasks <cit.>, we extend the spatial encoder to explicitly generate a global feature vector for each frame. Inspired by the Vision Transformer (ViT) <cit.>, we prepend a learnable class token to the spatial encoder input, which captures the global relationship among all human-object pairs at a particular moment. Moreover, we observe that the temporal encoder in STTran infers the temporal relations of all human-object pairs jointly from a sliding window of frames. Instead, we propose an instance-level temporal encoder, which processes each unique human-object pair independently. This allows our model to focus on the individual evolution of each human-object representation in time. Finally, we apply a cross-attention layer to fuse the extracted global features and the gaze information with the instance-level human-object representations. Together, these changes substantially extend STTran and clearly boost its performance in both HOI detection and anticipation tasks.
Our model is trained and validated on VidHOI dataset <cit.>, which is composed of daily-life videos and is currently the largest video HOI dataset. We design a training strategy to address the dataset imbalance issue. Moreover, inspired by the metrics for egocentric action anticipation tasks, we propose a set of person-wise metrics to assess the model in the HOI anticipation task on multi-person videos. These metrics compute the multi-label recall, precision, accuracy, and F1-score <cit.> separately for each human using the top-k predictions. We also conduct an extensive ablation study to confirm the effectiveness of our modified and added components.
The main contributions of our work are summarized as:
* A deep multimodal spatio-temporal transformer network is designed for anticipating HOIs in multi-person scenes.
* The use of gaze-following methodology in the cross-attention mechanism is explored as an additional novel step towards HOI detection and anticipation in videos.
* A person-wise multi-label criterion is proposed to evaluate the HOI anticipation model in third-person videos.
§ RELATED WORKS
§.§ Gaze in HOI Detection
A human's gaze direction can indicate where the human is paying attention. Cognitive studies <cit.> show that human eyes often fixate on an object when performing manual actions with it. Moreover, humans sometimes move their gaze to the next object before finishing the current interaction. <cit.> further suggest that humans may scan over all task-relevant objects when planning a complex movement. <cit.> then discover that the gaze point on an object depends on the interaction type. The above-mentioned works demonstrate that gaze cues can provide useful information for detecting and anticipating HOIs.
However, the use of gaze features in HOI detection has not been investigated much. For image-based HOI detection, <cit.> propose a human intention-driven HOI detection framework, which utilizes human pose and gaze to assist HOI detection. Their ablation study shows that utilizing human gaze regions can improve the model performance. Nevertheless, to the best of our knowledge, there is no work leveraging the human gaze in video-based HOI tasks. To bridge this gap, our framework explores the effectiveness of gaze information in HOI detection and HOI anticipation tasks.
§.§ Video-based HOI Detection
To properly detect interactions between a human and an object from a video, understanding the evolution of the pair relationship over time is essential. For instance, <cit.> represent human-object relations as a spatio-temporal graph and adopt a Structural Recurrent Neural Network (S-RNN) to infer the interaction types. <cit.> refine the S-RNN by additionally considering object-object relations. <cit.> further improve the model performance by applying learned visual features as the graph nodes. Instead of RNNs, <cit.> propose a Graph Parsing Network (GPN) to parse the spatio-temporal graphs of human-object interactions. Then, <cit.> design a two-stream GPN that also incorporates the semantic features. In contrast to the graph-based methods, <cit.> propose an instance-based architecture that reasons about each human-object pair instance separately. This model leverages human skeletons as an additional cue for HOIs. ST-HOI <cit.> also utilizes human pose features to detect HOIs. In addition, ST-HOI applies a 3D backbone to extract correctly-localized instance features from a video. Moreover, the large-scale VidHOI dataset is proposed to enable the development of large-size models. Recently, motivated by the great success of the transformer model, several instance-based spatio-temporal transformers <cit.> have been designed; they are reviewed in the next section.
§.§ Transformer in HOI Detection
The transformer <cit.> was originally designed for natural language processing (NLP) tasks. Its key component is the attention mechanism, which mitigates the vanishing-gradient problem that recurrent neural networks (RNNs) suffer from on long data sequences. In many NLP tasks, transformer models outperform RNN-based models by a great margin.
Recent advances in transformer in computer vision tasks have motivated researchers to apply it also in the HOI detection task. Several approaches <cit.> attempt to extend the Detection Transformer (DETR) <cit.> from object detection to HOI detection in static images. These approaches first use a convolutional neural network (CNN) to extract visual features from the input image. Then, a transformer network aggregates image-wide contextual features and returns the human bounding box, object bounding box, object class, and interaction class in parallel. These models achieve state-of-the-art performance in the image-based HOI detection task. However, they may perform poorly when detecting HOIs in a video as they cannot understand the temporal contexts between frames.
Recently, researchers <cit.> propose to detect HOIs from videos using spatio-temporal transformers. <cit.> design the Human-Object Relationship Transformer (HORT), which leverages both visual appearance and human pose features to facilitate HOI detection. These features are fused by a transformer with densely-connected parallel spatial and temporal encoders. In contrast, Spatial-Temporal Transformer (STTran) <cit.> consists of a sequential architecture of spatial and temporal transformer encoders. The visual appearance feature of each human-object instance is concatenated with the spatial relation feature and the semantic feature. Most recently, inspired by ViT <cit.>, <cit.> extract patch tokens from frames by a spatial encoder and link them to tubelet tokens across time. A transformer decoder similar to DETR <cit.> reasons HOIs from the tubelet tokens by using learned positional encodings.
Nevertheless, the above-mentioned spatio-temporal models do not consider gaze cues, which could provide useful information for HOI detection and anticipation. Thus, we introduce gaze features as an additional modality to a spatial-temporal transformer model. We choose STTran <cit.> as our base model since it achieves remarkable performance on the Action Genome <cit.> dataset and can be easily extended with more features.
§ OUR METHOD
We aim to solve both HOI detection and anticipation tasks from videos with the same spatio-temporal transformer architecture. The proposed two-stage framework illustrated in Fig. <ref> is composed of an object module, a gaze module, and a spatio-temporal module. The object module and gaze module extract features from RGB frames in parallel. The spatio-temporal module based on STTran <cit.> exploits these features to detect current HOIs or anticipate future HOIs.
§.§ Problem Setup
Similar to the image-based HOI detection task <cit.>, a video-based HOI detection task is defined as to retrieve bounding boxes of human subjects {𝐛_t,i^s} and objects {𝐛_t,j}, identify object classes {c_t,j}, and recognize their interaction predicates 𝐩_t,⟨ i,j ⟩ in every frame I_t, where I_t ∈ℝ^h× w× 3 denotes an RGB frame at time t. The subscripts i and j represent an arbitrary human and object. The detected HOIs are expressed as a set of triplets {⟨𝐛_t,i^s, 𝐩_t,⟨ i,j ⟩, 𝐛_t,j⟩}.
For a video-based HOI anticipation task, we follow the setup that the model detects humans {𝐛_t,i^s} and objects {𝐛_t,j}, {c_t,j} from past observations [I_1, …, I_t] and predicts HOIs {⟨𝐛_t,i^s, 𝐩_t+τ_a,⟨ i,j ⟩, 𝐛_t,j⟩} in the future with a fixed time gap τ_a.
§.§ Object Module
The object module takes a sequence of T RGB frames as input V=[I_1, …, I_T]. In each frame I_t, the object module detects n_t bounding boxes {𝐛_t,j}, as well as the corresponding classes {c_t,j}. Among the n_t detections, n_t^s are human bounding boxes {𝐛_t,i^s}. An object tracker then associates current detections with past detections and obtains the trajectories of bounding boxes of human {𝐇_i} and objects {𝐎_j}. This object tracker allows the model to analyze every unique human-object pair separately in a complex scene. After locating humans and objects in a video, it is essential to exploit features from human-object pairs to detect and anticipate the interactions. Inspired by STTran <cit.>, we use a ResNet feature extractor to generate visual features 𝐯_t,j∈ℝ^2048 for each box 𝐛_t,j. The visual feature inside the subject bounding box 𝐛_t,i^s is denoted as 𝐯_t,i^s=𝐯_t,i. In addition, leveraging the spatial relation between human and objects is crucial to recognize some actions, such as playing or not playing a guitar. Thus, the visual relation features 𝐯_t,⟨ i,j ⟩∈ℝ^2048 and a two-channel spatial relation binary mask 𝐦_t,⟨ i,j ⟩∈ℝ^2 × 27 × 27 are also generated for each human-object pair ⟨𝐛_t,i^s, 𝐛_t,j⟩. Furthermore, possible types of interactions depend on object classes. For example, humans are more likely to ride or carry a bicycle than bite a bicycle. To reflect this characteristic of HOIs, our object module uses a word embedding model <cit.> to generate the object semantic feature 𝐬_t,j∈ℝ^200 from the object category c_t,j as an additional modality.
§.§ Gaze Module
We adopt the gaze-following method proposed in <cit.> to generate the gaze heatmap for each human. This method requires a head image as an input. Thus, we need a head detector to identify human heads in the scene. We observe that directly obtaining the head bounding box from the human box might cause mismatches in some scenarios. As shown in both images in Fig. <ref>, in human A's bounding box, another person's head appears. Directly obtaining head detection from human A's box may cause human B's head to be mismatched with human A. Therefore, our gaze module first retrieves n_t^h heads {𝐛_t,i^h} from the full RGB frame I_t. Then, all detected head bounding boxes are matched to all human bounding boxes from the object module. This process involves a linear assignment problem. We first determine which detected heads are possible matches for each human. An intersection over head (IoH) ratio is computed for every human 𝐛_t,i^s and head 𝐛_t,k^h according to Equation <ref>, where 𝒜(·) denotes the function for area calculation. If the IoH ratio is larger than a threshold, this head detection is considered as a shortlisted head for this human. We set the threshold to 0.7, which allows this metric to be robust to slightly inaccurate detections.
IoH(i, k)=𝒜(𝐛_t,k^h∩𝐛_t,i^s)/𝒜(𝐛_t,k^h),
We apply the Jonker-Volgenant algorithm <cit.> to find the best human-head association for each frame. This algorithm requires a cost matrix. Intuitively, the human head is usually positioned at the limits of the body. Thus, we compute a human-head distance ratio d_t, ⟨ i, k⟩ by dividing the distance between a human bounding box and a head bounding box by the length of the shorter edge of the human box. In addition, the confidence score of head detection plays an important role in the human-head association. Therefore, we use a weighted sum of the human-head distance ratio d_t, ⟨ i, k⟩ and the inverse of head confidence score as the cost to assign head 𝐛_t,k^h to human 𝐡_t,i.
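To make this matching step concrete, the following sketch pairs detected heads with detected humans using SciPy's linear_sum_assignment, which implements a modified Jonker-Volgenant solver. Boxes are assumed to be in (x1, y1, x2, y2) format; the use of center-to-center distance for the human-head distance and unit weights for the two cost terms are illustrative assumptions, not necessarily the exact choices of our implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def ioh(head, human):
    # Intersection-over-head between a head box and a human box, both (x1, y1, x2, y2).
    ix1, iy1 = max(head[0], human[0]), max(head[1], human[1])
    ix2, iy2 = min(head[2], human[2]), min(head[3], human[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    head_area = max((head[2] - head[0]) * (head[3] - head[1]), 1e-6)
    return inter / head_area

def match_heads_to_humans(humans, heads, head_scores, ioh_thresh=0.7, w_dist=1.0, w_conf=1.0):
    # Returns a list of (human_index, head_index) pairs chosen by the Jonker-Volgenant solver.
    big = 1e6  # cost assigned to infeasible (non-shortlisted) human-head pairs
    cost = np.full((len(humans), len(heads)), big)
    for i, hu in enumerate(humans):
        short_edge = max(min(hu[2] - hu[0], hu[3] - hu[1]), 1e-6)
        for k, hd in enumerate(heads):
            if ioh(hd, hu) < ioh_thresh:
                continue  # head box not sufficiently contained in this human box
            d = np.hypot((hu[0] + hu[2]) / 2 - (hd[0] + hd[2]) / 2,
                         (hu[1] + hu[3]) / 2 - (hd[1] + hd[3]) / 2) / short_edge
            cost[i, k] = w_dist * d + w_conf / max(head_scores[k], 1e-6)
    rows, cols = linear_sum_assignment(cost)
    return [(int(i), int(k)) for i, k in zip(rows, cols) if cost[i, k] < big]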
Finally, the gaze-following model proposed by <cit.> estimates human gaze heatmaps from video clips. This approach combines the head information and the scene feature map using an attention mechanism. Then, a convolutional Long Short-Term Memory (Conv-LSTM) network is applied to encode the fused features and extract temporal dependencies to estimate the gaze heatmap 𝐠_t,i∈ℝ^64 × 64 for each human 𝐛_t,i^s at each time step.
§.§ Input Embedding
At each time step t, the object module generates a set of features (𝐯_t,i^s, 𝐯_t,j, 𝐯_t,⟨ i,j ⟩, 𝐦_t,⟨ i,j ⟩, 𝐬_t,j) for the human-object pair ⟨𝐛_t,i^s, 𝐛_t,j⟩. Meanwhile, the gaze module outputs human gaze heatmaps 𝐠_t,i for each human. To reduce the dimensionality and optimize the model efficiency, these features need to be encoded before being fed to the spatio-temporal transformer. Inspired by STTran <cit.>, we use linear projection matrices 𝐖^s∈ℝ^2048 × 512 and 𝐖^o∈ℝ^2048 × 512 to compress the dimensionality of human visual features 𝐯_t,i^s and object visual features 𝐯_t,j from 2048-d to 512-d. The visual relation features 𝐯_t,⟨ i,j ⟩ are projected to 256-d with 𝐖^vr∈ℝ^2048 × 256. To extract features from the two-channel spatial relation mask, a two-layer CNN 𝑓_mask(·) introduced in <cit.> with an average pooling layer at the end is applied to transform 𝐦_t,⟨ i,j ⟩ to a 256-d vector. The same CNN structure 𝑓_gaze(·) is adopted to transform the human gaze heatmap 𝐠_t,i to a 512-d vector 𝐠^'_t,i. The semantic feature vector 𝐬_t,j remains untouched. L2-normalization is applied to each feature vector to ensure that every feature vector has a similar data distribution. Finally, all feature vectors for the human-object pair ⟨𝐛_t,i^s, 𝐛_t,j⟩ are concatenated to a relation representation vector 𝐱_t,⟨ i,j ⟩∈ℝ^1736. Note that all projection matrices and CNNs are jointly trained with the spatio-temporal transformer.
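A minimal PyTorch sketch of this embedding step is given below. The projection sizes follow the dimensions stated above (512 + 512 + 256 + 256 + 200 = 1736), while the kernel and channel sizes of the two-layer CNNs are assumptions, since only their overall structure is specified here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskCNN(nn.Module):
    # Two convolution layers followed by global average pooling (exact sizes are assumptions).
    def __init__(self, in_ch, out_dim):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_dim // 2, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(out_dim // 2, out_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):
        return self.conv(x).flatten(1)

class PairEmbedding(nn.Module):
    # Projects per-pair features and concatenates them into a 1736-d relation representation.
    def __init__(self):
        super().__init__()
        self.w_s = nn.Linear(2048, 512)   # human visual feature
        self.w_o = nn.Linear(2048, 512)   # object visual feature
        self.w_vr = nn.Linear(2048, 256)  # union-box visual relation feature
        self.f_mask = MaskCNN(2, 256)     # 2x27x27 spatial relation mask
        self.f_gaze = MaskCNN(1, 512)     # 64x64 gaze heatmap, kept separate for cross-attention

    def forward(self, v_s, v_o, v_rel, mask, sem, gaze):
        parts = [self.w_s(v_s), self.w_o(v_o), self.w_vr(v_rel), self.f_mask(mask), sem]
        x = torch.cat([F.normalize(p, dim=-1) for p in parts], dim=-1)   # (B, 1736)
        g = F.normalize(self.f_gaze(gaze.unsqueeze(1)), dim=-1)          # (B, 512)
        return x, g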
§.§ Spatio-Temporal Module
A spatio-temporal transformer inspired by STTran <cit.> is applied to aggregate contexts from a sliding window of frames. The architecture is illustrated in Fig. <ref>. This model is composed of a spatial encoder and a temporal encoder.
First, a spatial encoder exploits human-object relation representations from one frame to understand the dependencies between the visual appearances, spatial relations, and semantic features. It also extracts a global feature vector for each frame, which is expected to represent the contexts between all human-object pairs. The spatial encoder receives the human-object pair relation representations 𝐗_t=[𝐱_t,⟨ 1,1 ⟩, … , 𝐱_t,⟨ i,j ⟩, …, 𝐱_t,⟨ n_t^s,n_t^o⟩] within one frame as the input. Inspired by the classification token proposed in ViT <cit.>, we prepend a learnable global token to the spatial encoder input. After N_sp stacked self-attention layers, the global token summarizes the dependencies between human-object pairs to a global feature vector c_t, while the pair relation representations are refined to 𝐗_t^sp=[𝐱^sp_t,⟨ 1,1 ⟩, … , 𝐱^sp_t,⟨ i,j ⟩, …, 𝐱^sp_t,⟨ n_t^s,n_t^o⟩].
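The global token can be realized exactly like the class token in ViT: a learnable vector prepended to the sequence of pair representations before the self-attention layers. A sketch follows; the zero initialization of the token and the padding-mask handling are assumptions.

import torch
import torch.nn as nn

class SpatialEncoder(nn.Module):
    # Self-attention over all human-object pairs of one frame, plus a prepended learnable global token.
    def __init__(self, d_model=1736, nhead=8, num_layers=1, dim_ff=2048, dropout=0.1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_ff, dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.global_token = nn.Parameter(torch.zeros(1, 1, d_model))

    def forward(self, pairs, pad_mask=None):
        # pairs: (B, P, d_model) pair representations of one frame; pad_mask: (B, P), True = padding
        tok = self.global_token.expand(pairs.size(0), -1, -1)
        x = torch.cat([tok, pairs], dim=1)
        if pad_mask is not None:  # the global token itself is never masked
            pad_mask = torch.cat([pad_mask.new_zeros(pad_mask.size(0), 1), pad_mask], dim=1)
        out = self.encoder(x, src_key_padding_mask=pad_mask)
        return out[:, 0], out[:, 1:]  # global feature c_t and refined pair representations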
Then, the refined pair representations are concatenated to several input sequences for the temporal encoder. The original STTran <cit.> is designed for the Action Genome dataset <cit.>, where only one human is annotated in each video. However, in real-world scenarios, multiple people and objects may appear. The VidHOI dataset <cit.> also provides full annotations for multi-person scenes. Thus, STTran may suffer from performance degradation as it treats all relation representations jointly as one sequence. In contrast, we propose to model the temporal evolution of each unique human-object pair independently. For that, we re-formulate the temporal encoder input such that each sequence only contains one particular human i and object j, i.e., [𝐱_t-L+1,⟨ i,j ⟩^sp, …, 𝐱_t,⟨ i,j ⟩^sp], where L denotes the length of a sliding window.
Next, referring to Fig. <ref>, the gaze feature 𝐠^'_t,i from each unique human in a frame is concatenated with the global feature 𝐜_t of that frame to 𝐜_t, i=[𝐜_t, 𝐠^'_t, i]. The resulting vector is filled into a person-wise sliding window of high-level context features [𝐜_t-L+1, i, …, 𝐜_t, i], which are fed to the temporal encoder along with the pair-wise sliding windows. Since the temporal encoder processes all entries in a sequence in parallel, the temporal order of the entries is lost. Therefore, a positional encoding is added to all entries in both high-level context sliding window and relation representation sliding window. STTran <cit.> applies a learned positional encoding, however, we observe that the sinusoidal encoding performs better in our model.
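The sinusoidal encoding is the standard fixed formulation from the original transformer; a brief sketch (with the usual base of 10000, an assumption here) is:

import math
import torch

def sinusoidal_positional_encoding(length, d_model):
    # Returns a (length, d_model) table of fixed sin/cos encodings (d_model assumed even).
    pe = torch.zeros(length, d_model)
    pos = torch.arange(length, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32) * (-math.log(10000.0) / d_model))
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe  # added to every entry of a sliding window before the temporal encoder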
The temporal encoder fuses the high-level context features and the refined pair representations through cross-attention layers and captures the evolution of their dependencies in time, which is essential to detect and anticipate temporally dependent HOIs such as push and pull. In the first temporal encoder layer, as shown in Fig. <ref>, a multi-head self-attention layer first captures temporal dependencies between the high-level context features. A cross-attention layer then fuses the human-object pair representations with the high-level contexts. As in the vanilla transformer <cit.>, the cross-attention is computed as:
Attention(Q, K, V)=softmax(QK^T/√(d_k))V.
where Q, K, and V denote queries, keys, and values, and d_k is the dimensionality of the keys. In our case, the queries are the pair representations and the keys and values are the high-level context features. The outputs of the first temporal encoder layer are fed to N_tmp-1 stacked conventional self-attention layers to aggregate deeper temporal dependencies between the fused features. To ensure causality, the last temporal encoder layer only outputs the representation vectors for the last frame in each sliding window, i.e., 𝐱_t,⟨ i,j ⟩^tmp.
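A sketch of this first temporal encoder layer is shown below, using PyTorch's multi-head attention. The context width of 2248 (the 1736-d global feature concatenated with the 512-d gaze feature) and the residual/LayerNorm placement are assumptions consistent with the dimensions given above.

import torch
import torch.nn as nn

class TemporalFusionLayer(nn.Module):
    # Self-attention over the context sequence, then cross-attention with pair representations as queries.
    def __init__(self, d_pair=1736, d_ctx=2248, nhead=8, dim_ff=2048, dropout=0.1):
        super().__init__()
        self.ctx_self_attn = nn.MultiheadAttention(d_ctx, nhead, dropout=dropout, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_pair, nhead, dropout=dropout,
                                                kdim=d_ctx, vdim=d_ctx, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_pair, dim_ff), nn.ReLU(), nn.Linear(dim_ff, d_pair))
        self.norm_ctx = nn.LayerNorm(d_ctx)
        self.norm_attn = nn.LayerNorm(d_pair)
        self.norm_ffn = nn.LayerNorm(d_pair)

    def forward(self, pair_seq, ctx_seq):
        # pair_seq: (B, L, d_pair) sliding window of one human-object pair
        # ctx_seq:  (B, L, d_ctx)  per-frame global feature concatenated with the human's gaze feature
        ctx = self.norm_ctx(ctx_seq + self.ctx_self_attn(ctx_seq, ctx_seq, ctx_seq)[0])
        x = self.norm_attn(pair_seq + self.cross_attn(pair_seq, ctx, ctx)[0])
        return self.norm_ffn(x + self.ffn(x))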
Finally, a set of prediction heads generate the probability distributions for different interaction categories. Each prediction head is a one-layer feed-forward network followed by a Softmax or Sigmoid function depending on whether the classification is single-label or multi-label. The outputs of all prediction heads are concatenated to the final model output 𝐳_t,⟨ i,j ⟩. On the VidHOI dataset <cit.>, we have a spatial relation head and an action head, each with Sigmoid function. On the Action Genome <cit.> dataset, there are three prediction heads: attention head, spatial relation head, and action head. The attention head determines whether the human is watching an object, thus is with Softmax function. The other two heads are with Sigmoid function.
§.§ Loss Function
Since a human-object pair in the VidHOI dataset <cit.> may be labeled by multiple interactions at the same time, such as ⟨human, next to & watch & hold, cup⟩, HOI detection and anticipation on VidHOI dataset leads to a multi-class multi-label classification problem. Binary cross-entropy (BCE) loss is usually applied in such tasks, which computes the loss for each interaction class independently to other classes. However, VidHOI dataset is an unbalanced dataset with long-tailed interaction distribution. To address the imbalance issue and avoid over-emphasizing the importance of the most frequent classes in the dataset, we adopt the class-balanced (CB) Focal loss <cit.> as follows:
CB_focal(p_i, y_i) = -(1-β)/(1-β^n_i)·(1-p_y_i)^γlog(p_y_i),
with p_y_i = p_i if y_i=1, and p_y_i = 1-p_i otherwise.
The term -(1-p_y_i)^γlog(p_y_i) refers to the Focal loss proposed in <cit.>, where p_i denotes the estimated probability for the i-th class and y_i ∈{0, 1} is the ground-truth label. The variable n_i denotes the number of samples in the ground truth of the i-th class and β∈ [0, 1) is a tunable parameter. The mean of losses in all classes is considered as the loss for one prediction.
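A compact PyTorch sketch of this loss for the multi-label prediction heads is given below; the per-class counts n_i are taken from the training split, and the clamping constant is a numerical-stability assumption.

import torch

def cb_focal_loss(logits, targets, samples_per_class, beta=0.9999, gamma=0.5):
    # logits, targets: (B, C) with multi-hot targets; samples_per_class: (C,) ground-truth counts n_i.
    weights = (1.0 - beta) / (1.0 - torch.pow(beta, samples_per_class.float()))  # (1-beta)/(1-beta^{n_i})
    p = torch.sigmoid(logits)
    p_y = torch.where(targets > 0.5, p, 1.0 - p)      # p_i if y_i = 1, otherwise 1 - p_i
    focal = -torch.pow(1.0 - p_y, gamma) * torch.log(p_y.clamp(min=1e-8))
    return (weights.unsqueeze(0) * focal).mean()      # mean over classes, then over the batch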
§ EXPERIMENTS
§.§ Dataset and Baselines
§.§.§ VidHOI dataset
We validate our framework on the VidHOI dataset <cit.>, as this is currently the largest video dataset with complete HOI annotations. The VidHOI dataset contains videos retrieved from social media where humans are performing daily activities without pre-defined scripts in highly unstructured and noisy environments. Thus, these videos represent real-world scenes. The VidHOI dataset applies keyframe-based annotations, where the keyframes are sampled at 1 frame per second (FPS). There are 78 object categories and 50 predicate classes. Among the predicate classes, we define 8 predicates as spatial relations (away, towards, above, next to, behind, in front of, inside, beneath), while the remaining 42 predicates are actions (e.g., hold, push, …).
The ST-HOI baseline <cit.> is adopted as the baseline for HOI detection task on VidHOI dataset. This method extracts visual features from object trajectories by a SlowFast <cit.> backbone and generates pose features using a spatio-temporal pose module. These features are concatenated and fed to a two-layer prediction head. In addition, we use the original STTran <cit.> as another baseline model. This model is trained with the same learning rate scheduler as our model but only for 10 epochs as suggested in their source code. The TUTOR model <cit.> is also validated on VidHOI dataset. We use their provided results for comparison.
§.§.§ Action Genome dataset
Action Genome <cit.> is another large-scale video dataset containing 35 object categories and 25 interaction classes. Nevertheless, only HOIs for a single person are annotated in each video even if more people show up. Moreover, the videos are generated by volunteers performing pre-defined tasks. Thus, models designed on the Action Genome dataset may be less useful in the real world. We only conduct an experiment on this dataset in the HOI detection task to demonstrate the robustness of our framework.
We apply the original STTran <cit.> as the baseline model on the Action Genome dataset. In addition, several image-based HOI detection models <cit.> are chosen for further comparison. The results of these works are provided by <cit.>.
§.§ Evaluation Metrics
Following the standard procedure in HOI detection, mean average precision (mAP) is adopted as one of our evaluation metrics. The mAP is a summary of precision-recall curves for all interaction classes. A predicted HOI triplet is assigned true positive if: (1) both detected human and object bounding boxes are overlapped with the ground truth with intersection over union (IoU) >0.5, (2) the predicted object class is correct, and (3) the predicted interaction is correct. The metric mAP is reported on the VidHOI dataset over three different HOI category sets: (1) Full: all 557 HOI triplet categories, (2) Rare: 315 categories with <25 instances in the validation set, and (3) Non-rare: 242 categories with ≥ 25 instances in the validation set. We apply the mAP computation method from QPIC <cit.>.
For the HOI anticipation task, the mAP does not well represent the model performance as it is evaluated on all predicted HOIs in a frame. Applications of HOI anticipation usually consider the top predictions for each human separately. For example, a robot may decide how to assist a human based on the most likely HOI forecasted. On the egocentric action anticipation benchmarks <cit.>, top-5 recall or top-5 accuracy are often employed to address such application scenarios. The egocentric videos only contain one person as the subject, and only one action is performed in each frame. Thus, evaluating the top-k predictions in one frame is equivalent to evaluating the top-k predictions for one human. Inspired by this idea, we propose a set of person-wise multi-label top-k metrics as additional evaluation metrics. For each frame, we first assign the detected human-object pairs to the ground-truth pairs. Then, the top-k triplets of each human are used to compute the metrics for this human. We follow <cit.> to calculate the multi-label recall, precision, accuracy, and F1-score. On the VidHOI dataset, we report the person-wise multi-label top-k metrics with k=5 and confidence threshold =0.3. The final results are averaged over all humans in the dataset, without frame-wise or video-wise mean computation. On the Action Genome dataset, most baselines only consider the Recall@k metric, which is identical to person-wise top-k recall since Action Genome only consists of single-person scenes. The final results are averaged frame-wise.
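The following sketch computes these person-wise metrics from the top-k triplet predictions of each human; the conventions used for humans with no ground truth or no surviving predictions are assumptions.

import numpy as np

def person_topk_metrics(scores, labels, k=5, threshold=0.3):
    # scores: dict person_id -> np.array of confidences over that person's candidate HOI triplets
    # labels: dict person_id -> set of indices of ground-truth triplets for that person
    recalls, precisions, accuracies, f1s = [], [], [], []
    for pid, s in scores.items():
        gt = labels.get(pid, set())
        topk = np.argsort(-s)[:k]
        pred = {int(i) for i in topk if s[i] >= threshold}
        tp = len(pred & gt)
        rec = tp / len(gt) if gt else 1.0
        prec = tp / len(pred) if pred else 1.0
        acc = tp / len(pred | gt) if (pred | gt) else 1.0
        f1 = 2 * tp / (len(pred) + len(gt)) if (pred or gt) else 1.0
        recalls.append(rec); precisions.append(prec); accuracies.append(acc); f1s.append(f1)
    # averaged over all humans in the dataset, without frame-wise or video-wise mean computation
    return {name: float(np.mean(vals)) for name, vals in
            zip(("recall", "precision", "accuracy", "f1"), (recalls, precisions, accuracies, f1s))}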
All models are trained with ground-truth object trajectories. We follow the two evaluation modes defined in the ST-HOI baseline <cit.>: models in Oracle mode are evaluated with ground-truth object bounding boxes, while models in Detection mode are evaluated with an object detector. During the evaluation in Detection mode, the ST-HOI baseline <cit.> removes the frames without any detected object. This trick could increase the recall, as some undetected ground-truth HOIs are filtered out. We use their reported mAP value for comparison, but we evaluate our model without excluding any frames. In the frames with no valid object detection, all ground-truth HOIs are regarded as false negatives.
By observing a sequence of past T frames, the model is expected to detect HOIs in the last observed frame (detection task) or forecast HOIs in the τ_a-th future frame (anticipation task). For the anticipation task, we train and validate our models with τ_a ∈{1, 3, 5, 7}, where, for example, τ_a=5 means 5 seconds in the future in the VidHOI dataset. The anticipation times are intuitively selected to show the performance of HOI anticipation in the near future. The evaluations for the anticipation task are only conducted on those videos that are long enough for τ_a=7. A potential issue in the HOI anticipation task in third-person videos is that the humans and objects in the current frame may disappear in the future due to the movement of humans or the camera. Thus, for mAP computation, we ignore the anticipations that are matched to a ground-truth human-object pair which is not available in the future. For our proposed person-wise top-k metrics, the persons out of frame in the future are excluded.
§.§ Implementation Details
For our object module, we employ YOLOv5 model <cit.> as the object detector. The weights are pre-trained on COCO dataset <cit.> and finetuned for the VidHOI dataset. We apply the pre-trained DeepSORT model <cit.> as the human tracker, ResNet-101 <cit.> as feature backbone, and GloVe model <cit.> for word embedding.
In the gaze module, we also apply YOLOv5 to detect heads from RGB frames. The model is pre-trained on the Crowdhuman dataset <cit.>. The gaze-following method introduced in <cit.> and pre-trained on the VideoAttentionTarget dataset <cit.> is adopted to generate gaze features. All weights in the object module and gaze module are frozen during the training of the spatio-temporal transformer.
The training procedure from STTran <cit.> has the limitation that it quickly collapses to overfitting, as it samples a batch of windows from the same video at each training step. To tackle this issue, we design a new data sampling strategy that samples a batch of windows from different videos, and each video is only visited once in an epoch. In addition, we introduce random horizontal flipping as data augmentation. The hyperparameters of our model are finetuned on the VidHOI dataset. For the experiment on the Action Genome dataset, we simply reuse the same setup as on the VidHOI dataset.
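One way to realize this sampling strategy is a batch sampler that draws one window per video per epoch and shuffles the videos so that every batch mixes different videos. The sketch below reflects our interpretation of the description above and can be plugged into a PyTorch DataLoader as a batch_sampler.

import random

class OneWindowPerVideoSampler:
    # Yields index batches in which each sliding window comes from a different video;
    # every video contributes one randomly chosen window per epoch.
    def __init__(self, windows_per_video, batch_size):
        self.windows_per_video = windows_per_video  # dict: video_id -> list of dataset indices
        self.batch_size = batch_size

    def __iter__(self):
        picks = [random.choice(wins) for wins in self.windows_per_video.values()]
        random.shuffle(picks)
        for i in range(0, len(picks), self.batch_size):
            yield picks[i:i + self.batch_size]

    def __len__(self):
        n = len(self.windows_per_video)
        return (n + self.batch_size - 1) // self.batch_size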
Following the original STTran <cit.>, our spatio-temporal transformer model has 2048-d FFN layers and 8 heads in multi-head attention layers. The spatial encoder consists of 1 layer while the temporal encoder contains 3 layers. The sliding window length is set to 6 according to the ablation study. We adopt CB Focal loss with γ=0.5 and β=0.9999 which are recommended for large-scale and extremely imbalanced datasets in <cit.>. Mini-batch learning is used to accelerate the training. We train the model using AdamW optimizer <cit.> with 3 warming-up epochs with an initial learning rate of 1 × 10^-8, a peak learning rate of 1 × 10^-4, and an exponential decay with factor 0.1. The weight decay factor is set to 1 × 10^-2 and the dropout rate is 0.1. All trainings are run for 25 epochs. For reproducibility, we set a fixed random seed for all training. The experiments are performed on a single NVIDIA RTX 4090 GPU.
§.§ Quantitative Results
Table <ref> shows the experimental results of the baselines and our framework in the HOI detection task on the VidHOI dataset. In Oracle mode, our model consistently outperforms all recent baselines. Moreover, our extensions to STTran <cit.> lead to a significant performance boost. In Detection mode, we additionally validate our model with the object traces generated by the ST-HOI baseline using Detectron2 <cit.>. The results imply that the quality of the object detector plays an important role in two-stage HOI detectors and that our adopted YOLOv5 model is superior to Detectron2 in this case. However, the critical performance gap between Oracle mode and Detection mode indicates that the object detector still leaves much room for improvement.
The experimental results in the HOI detection task on the Action Genome dataset are listed in Table <ref>. We only evaluate our model in Oracle mode (also called PredCLS in the baseline approaches) and the Semi Constraint setup, where Semi Constraint means that all HOI predictions with a confidence score higher than a threshold are regarded as positives. Even without dataset-specific hyperparameter finetuning, our model still outperforms all baselines in all Recall@k metrics. These results indicate the robustness of our model.
The quantitative results in the HOI anticipation task are reported in Table <ref>. The non-rare and rare splits for mAP are not applicable as some ground-truth triplets are not available for anticipation due to too short videos or invisible future human-object pairs. Our model outperforms the STTran <cit.> by a great margin in all metrics except the person-wise top-5 recall. The reason for this phenomenon is that the recall value highly depends on the confidence threshold. We additionally plot the person-wise top-5 score-threshold curves in Figure <ref>. According to these curves, we set 0.3 as the threshold for our model, which corresponds to the peak of accuracy and F1-score. With this threshold, our model achieves a slightly lower recall than the STTran baseline but much higher precision, accuracy, and F1-score. If a higher recall is preferred, we can shift the threshold to 0.2, where our model beats the baseline in all metrics, but the average gain drops.
In addition, we test the average inference time of each module in our framework. The object module with YOLOv5 object detector and DeepSORT tracker can operate at 61.3 FPS, while the gaze module with YOLOv5 head detector and the gaze following model Chong <cit.> runs at 48.1 FPS. Our proposed spatio-temporal transformer consists of 147.7M parameters and can process 9.7 sliding windows per second, which is real-time capable for the VidHOI dataset with a sample rate of 1 FPS. In comparison, the original transformer in STTran <cit.> contains 124.8M parameters and achieves 25.1 windows per second inference speed. The main reason for this speed difference lies in that the sliding window length L in STTran is 2, whereas our model has L=6. If we also set the sliding window length to 2, we can achieve 23.8 windows per second inference speed. In this setup, our model performs slightly worse with the mAP Full of 37.50, which is still much higher than STTran. Moreover, for real applications, our model's inference time can be reduced by using a buffer to store the spatial encoder output for consecutive windows.
§.§ Qualitative Results
To further investigate the performance of our model, we show the qualitative results for the HOI anticipation task in Oracle mode in Figure <ref>. For simplicity, we show only the top-5 results for one human in each scene. In the upper scene, our model forecasts that human0 will wave bat1 at every anticipation time, which is plausible. In the bottom scene, the gaze cues probably help the model to understand that the baby is focusing on toy1 and will not play with another toy in the near future.
Figure <ref> shows two more HOI anticipation results on the VidHOI dataset <cit.> in Detection mode. In the first scene, our model predicts that the child is going to kick the ball. However, the child is actually playing with the ball using a racket. Our object detector fails to detect the racket; thus, our spatio-temporal transformer cannot fully understand the scene. When we provide the model with the ground-truth object annotations, it does not produce the triplet ⟨human0, kick, ball1⟩. In the second video clip, our framework successfully detects the objects necessary to understand the scene. Nevertheless, it still cannot forecast that the adult will lift the child and the child will lift the ball. This is hard to predict even for humans, since future interactions between two people are more uncertain. In addition, the bench detected in the background is irrelevant to the two humans. However, the gaze direction of the child estimated by the gaze-following model is roughly in the direction of the bench. The transformer may capture such misleading contexts, which could affect the model performance. Thus, overall, the gaze cue is a useful feature, but there is room to improve its usage.
§.§ Ablation Study
We conduct an extensive ablation study to investigate the effectiveness of gaze features and our improvements to the STTran model <cit.>. The experiments for different tricks and components are performed on the HOI detection task. The best setup is applied to anticipation tasks with all anticipation times. We first examine the usage of gaze cues as an additional component in the human-object relation representations, i.e., we concatenate the gaze feature with the visual appearance, spatial relation, and semantic feature in the input embedding block. The temporal encoder only contains stacked self-attention layers. The dependencies between the human gaze and other features are then extracted solely through the self-attention mechanism. We apply the setting with the highest mAP Full as the base setting for further experiments using gaze features in cross-attention layers.
Table <ref> shows that all of our modified or added components are able to increase the mAP Full. Changing the loss function from the multi-label margin (MLM) loss to the CB Focal loss improves our model performance the most. The rare mAP is increased by 33.6%. This observation is consistent with our motivation for applying the CB Focal loss, which is meant to address the challenge of extreme dataset imbalance. By increasing the window length to 6, the model achieves overall the best performance in gaze concatenation mode. However, further raising the window length to 8 instead reduces the mAP. This performance drop might be caused by the fact that a longer window of frames may capture more temporal information that is no longer related to the current interactions. After changing the gaze usage from concatenation to cross-attention, our model gains a further performance boost. More experiments also confirm that the pair-wise sliding window and the explicit global context are beneficial for HOI detection from videos.
Finally, in Table <ref>, we show that the gaze features are beneficial for both HOI detection and anticipation tasks. However, the performance improvement is not as significant as we expected. The main reason could be that the spatio-temporal transformer is trained with noisy gaze cues, as the VidHOI dataset lacks ground-truth gaze annotations. The performance of the adopted gaze-following model <cit.> might be a limitation of our framework, but it could be improved by leveraging more recent works in that field, such as <cit.>. In addition, even though the gaze does not result in a big improvement, the other extensions we proposed for the spatio-temporal transformer still boost the model performance and allow us to achieve state-of-the-art results in HOI detection and anticipation in videos.
§ CONCLUSION
In this work, we propose a multimodal framework to detect and anticipate HOIs in third-person videos by additionally leveraging gaze cues in a cross-attention mechanism. We utilize an object tracker to enable the temporal encoder to focus on the temporal evolution of each human-object pair separately. To address the extreme dataset imbalance of the VidHOI dataset <cit.>, we adopt the class-balanced Focal loss. Furthermore, we propose a person-wise multi-label criterion to evaluate models on HOI anticipation tasks in multi-person scenarios. Experimental results demonstrate that our framework outperforms the current state-of-the-art for HOI detection and anticipation tasks on the VidHOI dataset and that the gaze features are beneficial to both tasks. For future work, adding more modalities such as depth information or human pose features could be advantageous. Furthermore, based on the HOI anticipation results, policies could be developed for human-assistive robots.
§ ACKNOWLEDGMENTS
This work is funded by Marie Sklodowska-Curie Action Horizon 2020 (Grant agreement No. 955778) for project “Personalized Robotics as Service Oriented Applications” (PERSEO).
model2-names
|
http://arxiv.org/abs/2306.01633v1
|
20230602155317
|
Monodromy Groups of Dessins d'Enfant on Rational Polygonal Billiards Surfaces
|
[
"Richard A. Moy",
"Jason Schmurr",
"Japheth Varlack"
] |
math.NT
|
[
"math.NT"
] |
Monodromy Groups of Dessins d'Enfant on Rational Polygonal Billiards Surfaces
Richard A. Moy, Jason Schmurr, Japheth Varlack
=============================================================================
A dessin d’enfant, or dessin, is a bicolored graph embedded into a Riemann surface, and the monodromy group is an algebraic invariant of the dessin generated by rotations of edges about black and white vertices. A rational polygonal billiards surface is a Riemann surface that arises from the dynamical system of billiards within a rational-angled polygon. In this paper, we compute the monodromy groups of dessins embedded into rational polygonal billiards surfaces and identify all possible monodromy groups arising from rational triangular billiards surfaces.
§ INTRODUCTION
In <cit.>, the authors investigated the connection between rational triangular billiards surfaces and Cayley graphs. In <cit.>, the authors modified this approach by drawing dessins d'enfant on the rational triangular billiards surfaces and classifying their monodromy groups. In this paper, we generalize the main result in <cit.> by computing the monodromy groups of dessins d'enfant drawn on billiard surfaces of k-gons with k≥ 3.
We show that all such monodromy groups can be expressed as the semidirect product N⋊ C_k, where N is isomorphic to the column span of a circulant matrix over ℤ/nℤ for an appropriate integer n (Theorem <ref> and Lemma <ref>) and C_k is the cyclic group of order k.
In Section <ref>, we show how to use the Smith Normal Form to explicitly compute the monodromy group of any given rational billiards surface (Theorem <ref>).
Next, for the case when n=p for some prime p, we establish a correspondence between k-gons modulo p and elements of 𝔽_p[x] which has the useful property that the monodromy group of the k-gon is completely determined by the greatest common divisor of the polynomial and x^k-1 (Proposition <ref>). This correspondence allows us to complete the classification of all monodromy groups of polygonal billiard surfaces for k-gons when n=p is prime and p>k (Theorem <ref>).
Finally, in Section <ref>, we provide some preliminary results for composite n which are sufficient to give a complete classification for triangles and an analogue of the main result in <cit.> for quadrilaterals.
Throughout this paper, we will reference many well known algebraic and number theoretic results. See any introductory graduate abstract algebra book, such as <cit.>, or number theory book, such as <cit.>, for a reference.
§ BACKGROUND
§.§ The Rational Billiard Surface Construction
A rational billiards surface is constructed by gluing together copies of a polygon that result from consecutive reflections across the sides. This name is motivated by the task of examining the paths of balls that bounce around the interior of a billiard table. When a ball hits a side of the table, the resulting bounce is instead represented by gluing a reflection of the table across that side and continuing the billiard path in the reflected copy in the same direction. This way, the path of a ball is represented by a single geodesic on a flat surface instead of a jagged path that may cross back on itself. Equipped with this intuition, a rational billiards surface is constructed from all of the reflections required to account for every possible path a ball could take.
More formally, a rational billiard surface can be constructed from a k-gon P whose angles are rational multiples of π, in the following way. Label the sides of P as e_0,…, e_k-1, in consecutive counterclockwise order around P. Label the angles of P as θ_i=a_iπ/n, where θ_i is the internal angle formed by sides e_i and e_i+1 and n∈ℕ is the least common denominator for the various a_i/n. Let Γ be the dihedral group generated by the reflections r_0,…,r_k-1 across lines through the origin parallel to the corresponding sides of P. This group consists of 2n elements <cit.>: n Euclidean rotations and n Euclidean reflections. The rotation subgroup of Γ is generated by the rotation by the angle 2π/n. Hence we may label the rotations using the notation ρ_m for rotation by an angle of 2mπ/n. Let 𝒫={γ(P):γ∈Γ}. For each γ(P)∈𝒫 and each r_i, we glue together γ(P) and γ r_i (P) along their copies of e_i. The resulting object is called a translation surface, since it is a Riemann surface whose change-of-coordinate maps are translations. See <cit.> and <cit.> for a detailed description of the rational billiards construction.
§.§ Defining a Monodromy Group on the Surface
Next, we draw a graph on this surface by placing a vertex in the center of each copy of P and labeling it with the corresponding element of Γ. We draw an edge between two vertices α and β precisely when α=β r_i for some i. This graph is the Cayley graph for Γ with generating set r_0,…,r_k-1. See <cit.> for a more in-depth exposition on this graph.
Since the generating set consists of reflections, this graph is bipartite, where one partite vertex set is the set of Euclidean rotations in Γ and the other partite vertex set is the set of Euclidean reflections in Γ.
We will define a labeling scheme, introduced in <cit.>, for the edges of the graph in following way. Take an arbitrary edge of the graph; one endpoint will be a vertex labeled ρ_m and the other endpoint will be ρ_m r_i, for integers m and i. We label this edge with the ordered pair (m,i)∈ C_n× C_k where C_n× C_k is viewed as a set and not a group. (Here, C_n represents the cyclic group of order n.) In fact this defines a bijection between the edge set of the graph and C_n× C_k.
We can define a dessin d'enfant on the surface by assigning a color to each of the partite sets (say, black for rotation and white for reflection) and by defining a cyclic ordering of the edges (oriented counterclockwise) around each vertex <cit.>. The ordering around a black vertex ρ_m is (m,0),(m,1),…,(m,k-1), and the ordering around a white vertex ρ_m r_i is (m,i), (m-a_{i-1},i-1), (m-a_{i-1}-a_{i-2},i-2), …, (m+a_i,i+1). See Figure <ref>.
The ordering around a black vertex is apparent from our labeling scheme. To justify the ordering around a white vertex, observe that r_i+1r_i=ρ_-a_i and ρ_aρ_b=ρ_a+b, by basic facts about the composition of Euclidean reflections and rotations <cit.>.
See Figure <ref> for an example of this construction for the equilateral triangle, and see <cit.> for further exposition on triangular billiards surfaces.
The monodromy group of this dessin is a group ⟨σ_0,σ_1⟩ of permutations of the edges generated by two permutations σ_0 and σ_1. We define σ_0 to be the permutation that takes each edge to the next edge in the cyclic ordering about its black vertex. Similarly, we define σ_1 to be the permutation that takes each edge to the next edge in the cyclic ordering about its white vertex.
Therefore, we have that for any edge (m,i),
σ_0[(m,i)]=(m,i+1)
and
σ_1[(m,i)]=(m-a_i-1,i-1).
§.§ Representing Polygons by k-tuples
Let P be a rational polygon with consecutive internal angles a_iπ/n, where a_0+…+a_k-1=(k-2)n and (a_0,…,a_k-1,n)=1. We shall use the notation [a_0,a_1,…,a_k-1] to represent P. Although this notation does not uniquely define P up to geometric similarity when k>3, it does uniquely define the dessin drawn on P up to graph isomorphism. This motivates the following definition.
If k,n∈ℕ with k≥ 3, then an ordered k-tuple of positive integers [a_0,…,a_k-1] represents a geometric polygon, or geometric k-gon, modulo n when a_0+…+a_k-1=(k-2)n and a_i<2n, a_i≠ n for all i and (a_0,…,a_k-1,n)=1. Throughout this paper, we will regularly use the term k-gon to refer to a geometric k-gon.
The angles of a k-gon represented by [a_0,…,a_k-1] modulo n are a_0/nπ,…,a_k-1/nπ.
It is not obvious that every k-tuple [a_0,…,a_k-1] that represents a polygon modulo n corresponds to a polygon in the plane with zero crossings. However, it is in fact true.
Suppose that θ_0,…,θ_k-1 is a sequence of angles (in radians) in the set (0,π)∪(π,2π). If θ_0+…+θ_k-1=(k-2)π, then there exists a polygon in the plane with no crossings with angles θ_0,…,θ_k-1 in that sequence.
Using this same convention, if the polygon P is represented by [a_0,a_1,…,a_k-1] then we will use the notation X(a_0,…,a_k-1) for the rational billiards surface arising from P and D(a_0,…,a_k-1) to represent the dessin drawn on X(a_0,…,a_k-1). Finally, we will use G(a_0,…,a_k-1) to represent the monodromy group of that dessin.
§ SEMIDIRECT PRODUCT STRUCTURE OF THE MONODROMY GROUP
The goal of this section is to describe the monodromy groups as semidirect products of abelian groups.
Let [a_0,…, a_k-1] represent a k-gon modulo n. Let G(a_0,…,a_k-1) = ⟨σ_0, σ_1 ⟩ be the monodromy group of the dessin D(a_0,…,a_k-1) drawn on the rational polygonal billiards surface X(a_0,…,a_k-1). Setting N=⟨σ_0^xσ_1^x: 0< x<k ⟩ and H=⟨σ_0 ⟩, we have G(a_0,…,a_k-1) = N ⋊ H.
The permutations σ_0^xσ_1^x and σ_0^yσ_1^y commute.
Let (m,i)∈ C_n× C_k be an arbitrary edge of the dessin.
From (<ref>) and (<ref>) we have that
σ_0^xσ_1^x[(m,i)]=σ_0^x[(m-∑_j=1^xa_i-j,i-x)]=(m-∑_j=i-x^i-1a_j,i).
Therefore,
σ_0^yσ_1^yσ_0^xσ_1^x[(m,i)]=σ_0^yσ_1^y[(m-∑_j=i-x^i-1a_j,i)]=(m-(∑_j=i-x^i-1a_j+∑_j=i-y^i-1a_j),i).
Finally,
σ_0^xσ_1^xσ_0^yσ_1^y[(m,i)]=σ_0^xσ_1^x[(m-∑_j=i-y^i-1a_j,i)]=(m-(∑_j=i-x^i-1a_j+∑_j=i-y^i-1a_j),i),
establishing commutativity.
Let N=⟨σ_0^xσ_1^x: 0<x<k ⟩. Observe that σ_1^yσ_0^y=(σ_0^k-yσ_1^k-y)^-1.
The subgroup N is precisely the subgroup of G(a_0,…,a_k-1) that fixes the second component of the coordinates (m,i).
Let N' be the collection of elements in G(a_0,…,a_k-1) that fix the second component of (m,i). Clearly the identity is an element of N'. If g,h∈ N' then g h and g^-1 also fix the second component of (m,i). Hence, N' is a subgroup of G(a_0,…,a_k-1) and the formula for σ_0^xσ_1^x in (<ref>) shows that σ_0^xσ_1^x∈ N'. Since σ_0^xσ_1^x generate N as x ranges from 1 to k-1, we see that N≤ N'.
Every element in G(a_0,…,a_k-1) (and thus in N') can be written as a product g=(σ_0^x_1σ_1^y_1)… (σ_0^x_nσ_1^y_n) of n pairs of the form σ_0^x_iσ_1^y_i where x_i,y_i∈ℤ. We will show that N'≤ N by induction on n. If g=σ_0^x_1σ_1^y_1…σ_0^x_nσ_1^y_n∈ N', we know that ∑x_i≡∑y_i (mod k) by (<ref>) and (<ref>).
Base Case: n=1
In this case, we see that x_1≡ y_1 (mod k). Since the orders of σ_0 and σ_1 are both k, we can assume x_1=y_1. Furthermore, we can also assume that 0≤ x_1<k. Hence, g∈ N.
Induction Step:
Suppose our theorem is true for n≥ 1 and consider n+1. That is, suppose g=σ_0^x_1σ_1^y_1…σ_0^x_n+1σ_1^y_n+1∈ N'. Consider
g'=(σ_0^x_1σ_1^x_1)^-1 g (σ_0^y_n+1σ_1^y_n+1)^-1=σ_1^y_1-x_1σ_0^x_2σ_1^y_2…σ_0^x_nσ_1^y_nσ_0^x_n+1-y_n+1
Since g∈ N', we have g'∈ N' and (g')^-1∈ N'. Let z_1=y_{n+1}-x_{n+1}, z_2=-x_n,…,z_n=-x_2 and w_1=-y_n,…,w_{n-1}=-y_2, w_n=x_1-y_1. Observe that (g')^-1=σ_0^z_1σ_1^w_1…σ_0^z_nσ_1^w_n. Thus by the induction hypothesis, (g')^-1∈ N. Hence, g'∈ N and g∈ N. By induction, we have proven the desired result.
N is a normal subgroup of G(a_0,…,a_k-1).
Since N=⟨σ_0^xσ_1^x: 0<x<k ⟩ and G(a_0,…,a_k-1) = ⟨σ_0, σ_1 ⟩, proving N G(a_0,…,a_k-1) is equivalent to proving the following statements:
* σ_1(σ_0^xσ_1^x)σ_1^-1∈ N
* σ_0(σ_0^xσ_1^x)σ_0^-1∈ N
To prove 1, observe that
σ_1(σ_0^xσ_1^x)σ_1^-1=(σ_1σ_0)(σ_0^x-1σ_1^x-1)=(σ_0^k-1σ_1^k-1)^-1(σ_0^x-1σ_1^x-1)∈ N.
To prove 2, observe that
σ_0(σ_0^xσ_1^x)σ_0^-1=(σ_0^x+1σ_1^x+1)(σ_1^k-1σ_0^k-1)=(σ_0^x+1σ_1^x+1)(σ_0σ_1)^-1∈ N.
N∩ H = {id}
Recall that N=⟨σ_0^xσ_1^x: 0<x<k ⟩ and H=⟨σ_0 ⟩. Suppose the intersection of these groups is not trivial. Then there is an element in N that is equal to σ_0^ℓ for some 0< ℓ<k. Observe that σ_0^ℓ(m,i)=(m,i+ℓ) and thus does not fix the second component of the edge labels. However, N is generated by elements that fix the second component of the edge labels (<ref>). Hence, we have reached a contradiction.
NH = G(a_0,…,a_k-1)
Recall that N=⟨σ_0^xσ_1^x: 0<x<k ⟩, H = ⟨σ_0 ⟩, and G(a_0,…,a_k-1) = ⟨σ_0, σ_1 ⟩. Since N G(a_0,…,a_k-1) and H≤ G(a_0,…,a_k-1), we know that NH≤ G(a_0,…,a_k-1). Observe that σ_0∈ NH and σ_1=(σ_0^k-1σ_1^k-1)^-1σ_0^-1∈ NH. Because NH contains the generators of G(a_0,…,a_k-1), we conclude that NH=G(a_0,…,a_k-1).
Now we proceed with the proof of Theorem 1.
The group G(a_0,…,a_k-1) is a semi direct product of subgroups N and H if and only if the three conditions are true:
* N G(a_0,…,a_k-1)
* N ∩ H = {id}
* NH = G(a_0,…,a_k-1)
Conditions I, II, and III are satisfied by Lemmas <ref>, <ref>, and <ref> respectively. Therefore, G(a_0,…,a_k-1) is a semidirect product of subgroups N and H.
The action of H on N in the semidirect product is via conjugation by elements of H.
§ COMPUTING THE STRUCTURE OF N
In this section, we prove several properties about the subgroup N◃ G(a_0,…,a_k-1), introduced in Definition <ref>, to provide more precise information about the structure of N and, by extension, G(a_0,…,a_k-1).
Let S={σ_1^-j(σ_0^-1σ_1^-1)σ_1^j:0≤ j <k}. We first show that one can generate N using the elements of S.
The subgroup N is generated by S.
Recall that N=⟨σ_0^xσ_1^x: 0<x<k ⟩. Let S={σ_1^-j(σ_0^-1σ_1^-1)σ_1^j:0≤ j <k}. We claim ⟨ S⟩=N. Using (<ref>) and (<ref>), we see that σ_1^-j(σ_0^-1σ_1^-1)σ_1^j fixes the second component of the coordinates (m,i) and is thus an element of N by Lemma <ref>. Hence, ⟨ S ⟩≤ N.
We will prove that σ_0^jσ_1^j∈⟨ S⟩ using induction. Observe that σ_1^-1(σ_0^-1σ_1^-1)σ_1^1=(σ_0σ_1)^-1. Hence, σ_0σ_1∈⟨ S⟩.
Suppose σ_0^j-1σ_1^j-1∈⟨ S⟩. Observe that σ_1^-j(σ_0^-1σ_1^-1)σ_1^j=(σ_0^jσ_1^j)^-1σ_0^j-1σ_1^j-1 which implies σ_0^jσ_1^j∈⟨ S⟩. Thus, σ_0^jσ_1^j∈⟨ S⟩ for all j>0 and hence N≤⟨ S⟩.
As we observed in Lemma <ref>, the subgroup N is precisely the subgroup of G(a_0,…,a_k-1) which fixes the second component of the edge (m,i). Hence, we may view any element g∈ N as a column vector [ x_0; ⋮; x_k-1 ]∈ (ℤ/nℤ)^k, where g(m,i)=(m+x_i,i). It follows from equations (<ref>) and (<ref>) that σ_1^-j(σ_0^-1σ_1^-1)σ_1^j(m,i)=(m+a_i-j,i).
Therefore the set S={σ_1^-j(σ_0^-1σ_1^-1)σ_1^j:0≤ j <k} can be identified with the columns of the matrix
C=[ a_0 a_k-1 … a_2 a_1; a_1 a_0 a_k-1 a_2; ⋮ a_1 a_0 ⋱ ⋮; a_k-2 ⋱ ⋱ a_k-1; a_k-1 a_k-2 … a_1 a_0 ]
in M_k(ℤ/nℤ), where M_k(ℤ/nℤ) denotes the set of k× k matrices with entries in ℤ/nℤ. We make this statement more formal in the following lemma.
The subgroup N is isomorphic to the span of the columns of C.
From (<ref>), (<ref>), and Lemma <ref>, we see that an arbitrary element g∈ N has the form g(m,i)=(m+x_i,i) where 𝐱=[ x_0; ⋮; x_k-1 ]∈(ℤ/nℤ)^k. We define a homomorphism φ:N→ (ℤ/nℤ)^k via φ(g)=𝐱. It is easy to check that φ is a well-defined map with φ(g_1g_2)=φ(g_1)+φ(g_2).
It is also easy to see that φ is injective. If φ(g)=[ 0; ⋮; 0 ], then g fixes every edge of the dessin. Hence, g is the identity element since the monodromy group acts faithfully on the edges of the dessin. Thus, we may conclude that φ maps N bijectively onto φ(N).
Since the elements of the set S generate N, we conclude that the set of vectors of the form φ(σ_1^-j(σ_0^-1σ_1^-1)σ_1^j)=[ a_k-j; ⋮; a_k-j-1 ] where 0≤ j<k spans φ(N). And thus, N is isomorphic to the span of the columns of C.
It is worth noting that when viewing N as a set of vectors in ()^k, there is a natural group action of C_k ≅ H on N which is the cyclic permutation of the vector entries. That is, the homomorphic image of H in Aut(N) is precisely the subgroup of cyclic permutations of vector entries.
In order to determine the group structure of N, we will use row and column operations on the matrix C.
§.§ Smith Normal Form
In previous sections we establish that the monodromy group G(a_0,…,a_k-1) can be expressed as the semidirect product of C_k and some finite abelian subgroup N,
where N has a natural ℤ/nℤ-module structure. In this section we explore the explicit computation of N. This can be done via
the Smith Normal Form. See <cit.> or <cit.> for a reference.
The Smith Normal Form of a matrix A with entries from a ring R is a factorization A=UDV where
* D=[ d_1 ; ⋱; d_k ] is a diagonal matrix
* d_i|d_i+1 for all i
* U and V are square matrices with determinant ± 1
Consider the R-module M, which is a submodule of R^k, generated by the columns of A. Then as a group, M is isomorphic to the direct product
d_1 R×⋯× d_k R.
The elements d_1…,d_k are called the elementary divisors of M. In <cit.>, Kaplansky defines an elementary divisor ring R to be a ring
over which all matrices have a Smith Normal Form. It is well-known (see <cit.>) that all PID's are elementary divisor rings. However, not all elementary divisor rings are domains. Indeed, it follows from Corollary 2.3 of <cit.> that ℤ/nℤ is an elementary divisor ring. Hence, we can always compute the group structure of one of our particular monodromy groups by computing the Smith Normal Form of the associated
circulant matrix.
In practice, algorithms exist for computing the Smith Normal Form of a matrix over ℤ. Therefore, to compute the Smith Normal Form of a matrix over
ℤ/nℤ, it is convenient to compute the Smith Normal Form of an associated matrix over ℤ and then apply the standard ring homomorphism to
reduce modulo n.
Since the matrices U and V in Definition <ref> are invertible over ℤ, their reductions modulo n (call them Ū and V̄) are invertible over ℤ/nℤ. Therefore, the transformation x↦Ū^-1· x is an isomorphism from (ℤ/nℤ)^k to (ℤ/nℤ)^k.
Hence, the submodule generated by the columns of C̄=ŪD̄V̄ is isomorphic to the submodule generated by the columns of D̄V̄, which (since V̄ is invertible) is the submodule generated by the columns of D̄:
[ d̄_1; 0; ⋮; 0 ], …, [ 0; ⋮; 0; d̄_k ].
Hence, N is isomorphic to d̄_1(ℤ/nℤ)⊕…⊕d̄_k(ℤ/nℤ), where d̄_i is the reduction of d_i modulo n. And therefore,
N≅⊕_i=1^k C_δ_i
where δ_i=n/(d_i,n). We summarize these results with the following theorem, combining the results from Theorem <ref>.
Let C be the matrix defined in (<ref>) and let d_1,…,d_k be the elementary divisors of C coming from its Smith Normal Form when viewing C as a matrix over . Then
G(a_0,…,a_k-1)=(⊕_i=1^kC_δ_i)⋊ C_k
where δ_i=n/(d_i,n).
Note that some of the δ_i may equal 1, in which case the group C_δ_i is trivial.
Consider the quadrilateral with angles (2/5π, 2/5π, 2/5π, 4/5π). This gives the billiards surface X(2,2,2,4) and dessin D(2, 2, 2, 4). To calculate the monodromy group G(2, 2, 2, 4) of the dessin, we compute the smith normal form for the circulant matrix
C=[ 2 4 2 2; 2 2 4 2; 2 2 2 4; 4 2 2 2 ]
=UDV=
[ -11 -12 -14 -3; -11 -12 -13 -3; -7 -8 -9 -2; -11 -13 -14 -3 ][ 2 0 0 0; 0 2 0 0; 0 0 2 0; 0 0 0 10 ][ 1 0 0 4; -1 1 0 0; 0 -1 1 0; 0 0 -1 -3 ]
where U and V are unimodular. This gives us
δ_1 = δ_2 = δ_3 = 5/gcd(2, 5) = 5, δ_4 = 5/gcd(10, 5) = 1.
Then we have
G(2, 2, 2, 4) = (C_5× C_5× C_5) ⋊ C_4.
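This example can also be reproduced without computing U and V explicitly, by using the classical fact that d_1⋯ d_j equals the gcd of the j× j minors of C over ℤ (recalled again in Lemma <ref> below). The following self-contained Python sketch is ours; none of the function names come from the text.

from itertools import combinations
from math import gcd
from functools import reduce

def det(M):
    # integer determinant by cofactor expansion; adequate for the small k used here
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def circulant(a):
    # C[i][j] = a_{(i-j) mod k}, matching the matrix C above
    k = len(a)
    return [[a[(i - j) % k] for j in range(k)] for i in range(k)]

def elementary_divisors(M):
    # d_1 * ... * d_j equals the gcd of all j x j minors of M over the integers
    k, prev, ds = len(M), 1, []
    for j in range(1, k + 1):
        minors = [abs(det([[M[r][c] for c in cols] for r in rows]))
                  for rows in combinations(range(k), j)
                  for cols in combinations(range(k), j)]
        D = reduce(gcd, minors)
        ds.append(D // prev if prev else 0)
        prev = D
    return ds

def monodromy_invariants(a, n):
    # delta_i = n/(d_i, n); then G = (C_{delta_1} x ... x C_{delta_k}) semidirect C_k
    return [n // gcd(d, n) for d in elementary_divisors(circulant(a))]

print(elementary_divisors(circulant([2, 2, 2, 4])))  # [2, 2, 2, 10]
print(monodromy_invariants([2, 2, 2, 4], 5))         # [5, 5, 5, 1], so G(2,2,2,4) = C_5^3 semidirect C_4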
As a consequence of Theorem 2, one can quickly compute the monodromy group of any rational triangular billiards surface.
Let [a_0, a_1, a_2] represent a triangle modulo n.
Let G(a_0, a_1, a_2)= ⟨σ_0, σ_1 ⟩ be the monodromy group of the dessin D(a_0,a_1,a_2) drawn on the triangular billiards surface X(a_0,a_1,a_2). Setting N=⟨σ_0σ_1, σ_0^2σ_1^2 ⟩ and H=⟨σ_0 ⟩, we have G(a_0, a_1, a_2) = N ⋊ H. Furthermore, if n = a_0 + a_1 + a_2 and α = (n, a_0a_1 - a_2^2), then
G(a_0, a_1, a_2) ≅ (C_n × C_n/α) ⋊ C_3.
Consider the arbitrary rational triangle with angles (a_0π/n,a_1π/n,a_2π/n), where the a_i are positive integers, a_0+a_1+a_2=n, and (a_0,a_1,a_2,n)=1. Observe that it follows that (a_0,a_1,n)=1 as well. The normal subgroup N of the associated monodromy group is represented by the column span of C=[ a_0 a_1 a_2; a_1 a_2 a_0; a_2 a_0 a_1 ]
over ℤ/nℤ.
Since (a_0,a_1,n)=1, there exist integers s, t, and u such that sa_0+ta_1+un=1, and hence sa_0+ta_1≡ 1 n.
We can perform the following transformations of C by applying row and column operations (working modulo n):
C=[ a_0 a_1 a_2; a_1 a_2 a_0; a_2 a_0 a_1 ]→[ a_0 a_1 a_2; a_1 a_2 a_0; 0 0 0 ]→[ a_0 a_1 0; a_1 a_2 0; 0 0 0 ]→[ 1 sa_1+ta_2 0; 0 -a_1^2+a_0a_2 0; 0 0 0 ]→[ 1 0 0; 0 -a_1^2+a_0a_2 0; 0 0 0 ].
This yields the factorization
[ 1 0 0; 0 -a_1^2+a_0a_2 0; 0 0 0 ]=[ s t 0; -a_1 a_0 0; 0 1 1 ][ a_0 a_1 a_2; a_1 a_2 a_0; a_2 a_0 a_1 ][ 1 0 1; -sa_1+ta_2 1 1; 0 0 1 ].
Or, equivalently,
C=[ a_0 a_1 a_2; a_1 a_2 a_0; a_2 a_0 a_1 ]=
[ a_0 -t 0; a_1 s 0; -a_1 -s 1 ][ 1 0 0; 0 -a_1^2+a_0a_2 0; 0 0 0 ][ 1 0 -1; sa_1-ta_2 1 -sa_1+ta_2-1; 0 0 1 ].
One easily checks that the diagonalizing matrices are unimodular. It then follows from Theorem 2 that the monodromy group of the (a_0,a_1,a_2) triangle is
(C_n× C_n/α)⋊ C_3,
where α=(n,a_0a_2-a_1^2).
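As a quick illustration of the formula: for the triangle [1,2,4] modulo 7 we have α=(7,1· 4-2^2)=(7,0)=7, so G(1,2,4)≅ (C_7× C_1)⋊ C_3≅ C_7⋊ C_3, while for the triangle [1,1,4] modulo 6 we have α=(6,1· 4-1^2)=3 and G(1,1,4)≅ (C_6× C_2)⋊ C_3. Both computations agree with the examples involving these triangles that appear later in the paper.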
The monodromy group of the dessin drawn on the rational billiards surface of the regular k-gon is C_k/(k,2)× C_k.
The angles of the regular k-gon are (k-2)π/k. When k is odd, a_0=…=a_k-1=k-2 and n=k since (k-2,k)=1. When k is even, a_0=…=a_k-1=(k-2)/2 and n=k/2 since ((k-2)/2,k/2)=1.
Since a_0=…=a_k-1, we see that the matrix C is a k× k matrix whose entries are all equal to k-2/(k,2). We deduce that the Smith Normal Form matrix D=[ k-2/(k,2) 0 … 0; 0 0 … 0; ⋮ ⋮ ⋱ 0; 0 0 0 0 ].
By Theorem <ref>, it follows that G(a_0,…,a_k-1)≅ C_k/(k,2)⋊ C_k. Since the subgroup H≅ C_k acts on N≅ C_k/(k,2) via cyclic permutation of the vector entries of C, we see that the semidirect product action of H on N is trivial since all the columns of C are identical. Hence, the semidirect product is actually a direct product.
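For example, the square (k=4) has monodromy group C_2× C_4, the regular pentagon (k=5) has monodromy group C_5× C_5, and the regular hexagon (k=6) has monodromy group C_3× C_6.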
In <cit.>, Howell addresses the problem of computing the span of a set of vectors over ℤ/nℤ. Howell considers a matrix A with entries in ℤ/nℤ. He then shows that A can be reduced via elementary row operations to an upper triangular matrix U whose rows have the same span as A. This matrix U is known as the Howell Normal Form of A. However, Smith Normal Form has the advantage of directly computing the isomorphism class of the vector span as an abelian group, via the ordered list of elementary divisors.
§ ALGEBRAIC POLYGONS
In this section, we introduce the notion of an algebraic polygon and develop the relevant theory with the goal of proving results about actual polygons. We arrive at the concept of an algebraic polygon by relaxing the constraints on polygons modulo n slightly:
If k and n are positive integers with k≥ 2, then an ordered k-tuple of nonnegative integers [a_0,…,a_k-1] represents an algebraic polygon, or k-gon, modulo n if a_0+…+a_k-1≡ 0 n and (a_0,…,a_k-1,n)=1. Observe that [0,…,0] is not an algebraic k-gon.
Every geometric polygon modulo n is also an algebraic polygon modulo n. We shall define a “monodromy group” for any algebraic polygon in a natural way which coincides with the monodromy groups associated to geometric polygons described in Section <ref>. It turns out that it is relatively easy to classify the possible monodromy groups for all algebraic polygons modulo a prime p (we do this in Theorem <ref>). The challenge is to determine when, for a given monodromy group G of an algebraic polygon, there exists a geometric polygon with a monodromy group isomorphic to G. Lemmas <ref> and <ref> show that this is always possible if none of the entries in the algebraic polygon are zero modulo n. This motivates work in Section <ref> to produce algebraic polygons with nonzero entries.
Note that the definition of an algebraic polygon allows for an algebraic 2-gon even though no geometric 2-gons exist. Despite this fact, algebraic 2-gons can be used to produce geometric k-gons via Proposition <ref>.
§.§ Results About Algebraic Polygons
We say that two algebraic polygons, [a_0,…,a_k-1] and [b_0,…,b_k-1] modulo n are associates if there exists c∈ (ℤ/nℤ)^× such that b_i≡ c a_i n for all i.
Our definition of associate algebraic polygons coincides with the definition of associate triangles from Aurell and Itzykson <cit.>.
Observe that reflex angles lead to interesting associate polygons. For example, the (algebraic) polygons [3,5,11,1] and [3,15,1,1] are associates modulo 10.
Suppose that [a_0,…,a_k-1] represents an algebraic polygon modulo n. Further suppose that 0<a_i<2n, a_i≠ n for all i and a_0+…+a_k-1≤(k-2)n. Then there exists an associate polygon [b_0,…,b_k-1]. Consequently, there exists a polygon in the plane with consecutive angles b_0π/n,…,b_k-1π/n and zero crossings.
Observe that Proposition <ref> produces a polygon, not simply an algebraic polygon.
If a_0+…+a_k-1=(k-2)n, then, letting a_i=b_i, [a_0,…,a_k-1]=[b_0,…,b_k-1] represents an associate polygon modulo n. If a_0+…+a_k-1<(k-2)n, then let d=((k-2)n-(a_0+…+a_k-1))/n. We can find a_i_1,…,a_i_d with i_1,…,i_d distinct such that a_i_j<n. Add n to each of these a_i_j to obtain b_i_j= a_i_j+n≡ a_i_j n. Let b_i= a_i for all other indices i≠ i_j. Thus, [b_0,…,b_k-1] represents an associate k-gon modulo n. By Proposition <ref>, there exists a polygon in the plane with consecutive angles b_0π/n,…,b_k-1π/n and zero crossings.
Consider the algebraic polygon [1,2,2,7] modulo 12. Using the procedure in Proposition <ref>, we produce the associate geometric polygon [13,2,2,7].
We will use the following lemma many times to verify that an algebraic k-gon satisfies the hypotheses of Proposition <ref>.
Suppose that [a_0,…,a_k-1] is an algebraic polygon modulo n with a_i≢0 n for all i. Then, [a_0,…,a_k-1] has an associate k-gon [b_0,…,b_k-1] that is a polygon modulo n. If n=p is a prime and p≥ k-1, then there exists an associate convex k-gon [b_0,…,b_k-1] that is a polygon modulo p.
Let d_i=(a_i,n). Observe that the subgroup ⟨ a_i: 0≤ i< k⟩ of ℤ/nℤ is isomorphic to ⟨ d_i⟩. Let d=min(d_i:0≤ i<k). Without loss of generality, assume that d=gcd(a_0,n). The proof is analogous in all other cases.
There exists c∈ (ℤ/nℤ)^× such that ca_0=d, where ca_0 denotes the reduction of ca_0 modulo n. Further observe that ca_j≤ n-d_j≤ n-d for 1≤ j<k since ca_j≢0 n and ca_j is a multiple of d_j. Hence,
ca_0+ca_1+… +ca_k-1≤ d+(n-d)+… +(n-d)<(k-1)n.
Observe that
ca_0+ca_1+… +ca_k-1≡ c(a_0+… +a_k-1)≡ 0 n.
Therefore, ca_0+ca_1+… +ca_k-1≤ (k-2)n. Using Proposition <ref>, we obtain the desired [b_0,…,b_k-1].
Now consider the case where n=p is a prime and p≥ k-1. Since a_i≢0 p for all i, the reduction of a_i modulo p can be chosen so that 0<a_i<p for all i. Since [a_0,…,a_k-1] is an algebraic polygon, we know that a_0+… +a_k-1≡ 0 p. Therefore, a_0+…+a_k-1=cp where 0<c<k. Choose c'∈ℤ/pℤ so that c'· c≡ k-2 p. We see that c'·(a_0+…+a_k-1)/p≡ c'· c≡ k-2 p. Hence, c'a_0+…+c'a_k-1=(k-2)p (where each c'a_i is reduced modulo p to lie in (0,p)) and thus, letting b_i=c'a_i, [b_0,…,b_k-1] is a k-gon modulo p. Since 0<b_i<p for all i, we see that [b_0,…,b_k-1] represents a convex polygon.
§.§ Monodromy Groups of Algebraic Polygons
The purpose of introducing algebraic polygons is to understand monodromy groups of actual polygons. Therefore, we must associate to each algebraic polygon a monodromy group that coincides with the monodromy group in Section <ref> for geometric polygons.
The monodromy group associated with an algebraic k-gon [a_0,…,a_k-1] modulo n is the group N⋊ C_k where N is the additive group generated by the columns of the matrix
C=[ a_0 a_k-1 … a_2 a_1; a_1 a_0 a_k-1 a_2; ⋮ a_1 a_0 ⋱ ⋮; a_k-2 ⋱ ⋱ a_k-1; a_k-1 a_k-2 … a_1 a_0 ]
in the module (ℤ/nℤ)^k. The group C_k acts on the columns of C by cyclicly permuting the entries of a vector.
The monodromy groups that arose in Section <ref> were monodromy groups of dessins d'enfant drawn on rational billiards surfaces. Although these surfaces and dessins do not exist for algebraic polygons, associating a monodromy group with them will still prove quite useful theoretically.
If [a_0,…,a_k-1] is a k-gon modulo n, then its monodromy group above is the same as the monodromy group of D(a_0,…,a_k-1) drawn on the rational polygonal billiards surface X(a_0,…,a_k-1). See Sections <ref> and <ref> for reference.
The following lemma illustrates that the monodromy group of associate algebraic polygons are isomorphic.
Fix n∈. If [a_0,…,a_k-1] and [b_0,…,b_k-1] are associate algebraic polygons, then their monodromy groups are the same.
Since [a_0,…,a_k-1] and [b_0,…,b_k-1] are associates, there exists c∈ (ℤ/nℤ)^× such that b_i≡ c a_i n for all i. Let C' and C” be the corresponding circulant matrices for [b_0,…,b_k-1] and [a_0,…,a_k-1] respectively. Therefore,
C'=[ b_0 b_k-1 … b_2 b_1; b_1 b_0 b_k-1 b_2; ⋮ b_1 b_0 ⋱ ⋮; b_k-2 ⋱ ⋱ b_k-1; b_k-1 b_k-2 … b_1 b_0 ] ≡ c[ a_0 a_k-1 … a_2 a_1; a_1 a_0 a_k-1 a_2; ⋮ a_1 a_0 ⋱ ⋮; a_k-2 ⋱ ⋱ a_k-1; a_k-1 a_k-2 … a_1 a_0 ]≡ c· C” n.
Since C' and C” are scalar multiples of each other by a unit, the spans of their columns are equal. The result follows.
Suppose that [a_0,…,a_k-1] and [b_0,…,b_k-1] represent algebraic k-gons modulo n_1 and n_2 respectively where (n_1,n_2)=1. Suppose their respective monodromy groups are N_1⋊ C_k and N_2⋊ C_k. Then there exists an algebraic k-gon [c_0,…,c_k-1] modulo n_1n_2 with monodromy group (N_1× N_2)⋊ C_k. Furthermore, if a_i≢0 n_1 or b_i≢0 n_2 for every i, then c_i≢0 n_1n_2 for all i.
By the Chinese Remainder Theorem, there exist unique integers c_i with 0<c_i<n_1n_2 such that c_i≡ a_i n_1 and c_i≡ b_i n_2 for all i. Since c_i≡ a_i n_1, we see that c_0+… +c_k-1≡ 0 n_1 and (c_0,…,c_k-1,n_1)=1. A similar argument shows that c_0+… +c_k-1≡ 0 n_2 and (c_0,…,c_k-1,n_2)=1. Hence, c_0+… +c_k-1≡ 0 n_1n_2 and (c_0,…,c_k-1,n_1n_2)=1 since (n_1,n_2)=1.
Now, we will compute the monodromy group of [c_0,…,c_k-1], which is N⋊ C_k where N is an abelian group and a submodule of (ℤ/n_1n_2ℤ)^k. Since c_i≡ a_i n_1 for all i, we see that
C'=[ c_0 c_k-1 … c_2 c_1; c_1 c_0 c_k-1 c_2; ⋮ c_1 c_0 ⋱ ⋮; c_k-2 ⋱ ⋱ c_k-1; c_k-1 c_k-2 … c_1 c_0 ] ≡[ a_0 a_k-1 … a_2 a_1; a_1 a_0 a_k-1 a_2; ⋮ a_1 a_0 ⋱ ⋮; a_k-2 ⋱ ⋱ a_k-1; a_k-1 a_k-2 … a_1 a_0 ]=C” n_1.
Let d_1,…,d_k be the elementary divisors of C'. They are the same modulo n_1 as the elementary divisors of C”. By Theorem <ref>, we know the monodromy group of [c_0,…,c_k-1] is
⊕_i=1^kC_δ_i=⊕_i=1^kC_n_1n_2/(d_i,n_1n_2)=⊕_i=1^kC_n_1/(d_i,n_1)⊕ C_n_2/(d_i,n_2)
since (n_1,n_2)=1. Thus, the monodromy group of [a_0,…,a_k-1] is N_1⋊ C_k where N_1=⊕_i=1^kC_n_1/(d_i,n_1). Therefore, N_1≅ n_2N≅ N/n_1N. If N_2⋊ C_k is the monodromy group of [b_0,…,b_k-1], then a similar argument shows that N_2≅ n_1N≅ N/n_2N. We conclude that N≅ N_1× N_2 and the main result follows.
In essence, Proposition <ref> allows one to combine two algebraic k-gons with coprime moduli and create a new algebraic k-gon [c_0,…,c_k-1]. The monodromy group of [c_0,…,c_k-1] is a combination of the monodromy groups of the original two algebraic k-gons.
One can actually combine two algebraic k-gons with no k-gon associates to create an algebraic k-gon with a k-gon associate. Consider the algebraic 3-gon [0,1,1] modulo 2 with monodromy group C_2^2⋊ C_3 and the algebraic 3-gon [1,0,4] modulo 5 with monodromy group C_5^2⋊ C_3. Neither of these algebraic 3-gons have a polygonal associate. However, if we combine them using Proposition <ref>, we obtain the algebraic 3-gon [6,5,9] modulo 10. This algebraic 3-gon has a 3-gon associate [4,5,1] modulo 10 obtained by scaling by 9 10. The 3-gon [4,5,1] has monodromy group (C_2^2× C_5^2)⋊ C_3≅ C_10^2⋊ C_3.
Suppose that [c_0,…,c_k-1] is an algebraic k-gon modulo n_1n_2 with n_1,n_2>1 and with monodromy group N⋊ C_k. Then there exists an algebraic k-gon [a_0,…,a_k-1] modulo n_1 with monodromy group (n_2N)⋊ C_k. If (n_1,n_2)=1, then the monodromy group (n_2N)⋊ C_k≅ (N/n_1N)⋊ C_k.
Observe that c_i≢0 n_1 for some i. If n_1|c_i for all i, then (c_0,…,c_k-1,n_1n_2)>1, a contradiction with the definition of an algebraic polygon.
Choose a_i≡ c_i n_1 for all i. We see that a_0+… +a_k-1≡ 0 n_1 since c_0+… +c_k-1≡ 0 n_1n_2. Suppose that the monodromy group of [a_0,…,a_k-1] is N_1⋊ C_k. By the exact same calculation as in (<ref>), we conclude that N_1≅ n_2N.
Now suppose (n_1,n_2)=1. Using (<ref>), we see that the monodromy group of [c_0,…,c_k-1] has the form N⋊ C_k where
N=⊕_i=1^kC_δ_i=⊕_i=1^kC_n_1/(d_i,n_1)⊕ C_n_2/(d_i,n_2)
Observe that n_2N≅⊕_i=1^kC_n_1/(d_i,n_1) and n_1N≅⊕_i=1^kC_n_2/(d_i,n_2). Thus, n_2N≅ N/n_1N.
If n_1 and n_2 are coprime in Proposition <ref>, then N_1≅ N/n_1N. However, this is not the case when n_1 and n_2 have a non-trivial gcd. We illustrate this phenomenon in the following example.
Consider the k-gon [1,2,24,23] modulo 25. The monodromy group is N⋊ C_4 where N≅ C_25× C_5. If we apply Proposition <ref> when n_1=5, we obtain the k-gon [1,2,4,3] which has monodromy group N_1⋊ C_4 where N_1≅ C_5≅ 5N≇N/5N.
The following proposition allows us to lift an algebraic k-gon modulo n to an algebraic ℓ-gon modulo n if k|ℓ.
Suppose that k and ℓ are positive integers with k|ℓ. Further suppose that [a_0,…,a_k-1] is an algebraic k-gon modulo n with monodromy group N⋊ C_k. Then there exists an algebraic ℓ-gon [c_0,…,c_ℓ-1] modulo n with monodromy group N⋊ C_ℓ.
Let c_i=a_j where j is the least nonnegative integer satisfying i≡ j k. In essence,
[c_0,…,c_ℓ-1]=[a_0,…,a_k-1,a_0,…, a_k-1,a_0,…,a_k-1]
where the pattern a_0,…,a_k-1 repeats itself ℓ/k times. Let
C=
[ a_0 a_k-1 … a_2 a_1; a_1 a_0 a_k-1 a_2; ⋮ a_1 a_0 ⋱ ⋮; a_k-2 ⋱ ⋱ a_k-1; a_k-1 a_k-2 … a_1 a_0 ]
and observe that
C'=[ c_0 c_ℓ-1 … c_2 c_1; c_1 c_0 c_ℓ-1 c_2; ⋮ c_1 c_0 ⋱ ⋮; c_ℓ-2 ⋱ ⋱ c_ℓ-1; c_ℓ-1 c_ℓ-2 … c_1 c_0 ]
=
[ C C … C C; C C C C; ⋮ C C ⋱ ⋮; C ⋮ ⋮ C; C C … C C ]
where the matrix C appears ℓ/k times in each row and column. Therefore, the group generated by the columns of C' is isomorphic to the group generated by the columns of C and thus the monodromy group of [c_0,…,c_ℓ-1] is N⋊ C_ℓ.
The following example illustrates how Proposition <ref> is used to lift an algebraic k-gon to an algebraic ℓ-gon.
Let k=2, ℓ=4 and consider the algebraic 2-gon [3,4] modulo n=7. Using Proposition <ref>, lift [3,4] to the algebraic 4-gon [3,4,3,4] modulo 7. The monodromy group of [3,4] is C_7⋊ C_2 and the monodromy group of [3,4,3,4] is C_7⋊ C_4.
A quick lemma about semidirect products is needed to complete our series of results about combining algebraic polygons to form new algebraic polygons.
Suppose that N_1,H_1,N_2,H_2 are finite groups. If G_1≅ N_1⋊ H_1 and G_2≅ N_2⋊ H_2 then G_1× G_2≅ (N_1× N_2)⋊ (H_1× H_2).
An element of the group G_1× G_2 has the form ((n_1,h_1),(n_2,h_2)) where n_1∈ N_1, n_2∈ N_2, h_1∈ H_1, and h_2∈ H_2. Let ϵ represent the identity in the respective group. Consider the subgroups
N=⟨ ((n_1,ϵ),(ϵ,ϵ)), ((ϵ,ϵ),(n_2,ϵ)):n_1∈ N_1, n_2∈ N_2⟩
and
H=⟨ ((ϵ,h_1),(ϵ,ϵ)),((ϵ,ϵ),(ϵ, h_2)):h_1∈ H_1, h_2∈ H_2 ⟩.
It is easy to see that N≅ N_1× N_2 and H≅ H_1× H_2 and NH=G_1× G_2. It is also easy to see that N∩ H contains only the identity of G_1× G_2. In order to prove that G_1× G_2 is isomorphic to N⋊ H, we need to prove that N⊲ G_1× G_2. This follows immediately from the fact that N_1⊲ G_1 and N_2⊲ G_2.
Now, let us combine the results from Propositions <ref> and <ref> to obtain the following corollary.
Fix positive integers n_1,n_2,k,ℓ with k,ℓ≥ 2. Suppose that (n_1,n_2)=1 and (k,ℓ)=1. If [a_0,…,a_k-1] is an algebraic k-gon modulo n_1 with monodromy group N_1⋊ C_k and [b_0,…,b_ℓ-1] is an algebraic ℓ-gon modulo n_2 with monodromy group N_2⋊ C_ℓ, then there exists an algebraic kℓ-gon [c_0,…,c_kℓ-1] modulo n_1n_2 with monodromy group (N_1× N_2)⋊ C_kℓ≅ (N_1⋊ C_k)× (N_2⋊ C_ℓ).
Combining Propositions <ref> and <ref> gives us the desired algebraic kℓ-gon [c_0,…,c_kℓ-1] with monodromy group (N_1× N_2)⋊ C_kℓ. Since (k,ℓ)=1, C_kℓ≅ C_k× C_ℓ. Thus, by Lemma <ref>, (N_1× N_2)⋊ C_kℓ≅ (N_1⋊ C_k)× (N_2⋊ C_ℓ).
The following example illustrates how to use Corollary <ref>.
Let k=3, ℓ=4, n_1=7 and n_2=5. Let [1,2,4] be our algebraic 3-gon modulo 7 and let [2,3,3,2] be our algebraic 4-gon modulo 5. The monodromy group of [1,2,4] is C_7⋊ C_3 and the monodromy group of [2,3,3,2] is C_5^2⋊ C_4. Using Proposition <ref>, we lift [1,2,4] to [1,2,4,1,2,4,1,2,4,1,2,4] and we lift [2,3,3,2] to [2,3,3,2,2,3,3,2,2,3,3,2]. Using Proposition <ref>, we combine these algebraic 12-gons to obtain [22,23,18,22,2,18,8,2,32,8,23,32] modulo 35 which has monodromy group (C_7× C_5^2)⋊ C_12≅ (C_7⋊ C_3)× (C_5^2⋊ C_4).
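The componentwise Chinese Remainder computation above is easy to reproduce; the following short Python check is ours.

a = [1, 2, 4] * 4
b = [2, 3, 3, 2] * 3
print([next(x for x in range(35) if x % 7 == ai and x % 5 == bi) for ai, bi in zip(a, b)])
# [22, 23, 18, 22, 2, 18, 8, 2, 32, 8, 23, 32]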
§ RESULTS ABOUT CIRCULANT MATRICES
The following results on circulant matrices will be needed to compute monodromy groups of polygons modulo p when p is prime. The results are well known over ℂ, and we provide the proofs for the corresponding results over finite fields for completeness.
A k× k circulant matrix C has the following form
C=[ a_0 a_k-1 … a_2 a_1; a_1 a_0 a_k-1 a_2; ⋮ a_1 a_0 ⋱ ⋮; a_k-2 ⋱ ⋱ a_k-1; a_k-1 a_k-2 … a_1 a_0 ].
For the purposes of this paper, the entries a_i are integers or integers modulo n.
We call the polynomial f(x)=a_0+a_1x+…+a_k-1x^k-1 the associated polynomial of the circulant matrix C.
Assume that you have a k× k circulant matrix C with entries in a field 𝔽. Also assume that ω is a primitive kth root of unity. The vectors
v_j=[ 1; ω^j; ω^2j; ⋮; ω^(k-1)j ]
for j=0,…,k-1 are eigenvectors of a circulant matrix C with respective eigenvalues λ_j=a_0+a_k-1ω^j+a_k-2ω^2j+… +a_1ω^(k-1)j.
Let (C· v_j)_i denote the ith entry of the column vector C· v_j. Observe that
(C· v_j)_i = ∑_n=0^ia_i-n·ω^nj + ∑_n=0^k-i-2a_k-1-n·ω^(i+1+n)j
=ω^ij[ ∑_n=0^ia_i-n·ω^(n-i)j + ∑_n=0^k-i-2a_k-1-n·ω^(1+n)j]
=ω^ij[ ∑_n=0^ia_i-n·ω^(k+n-i)j + ∑_n=0^k-i-2a_k-1-n·ω^(1+n)j].
If we re-index the two sums letting n'=i-n in the first sum and n'=k-1-n in the second sum, we obtain:
(C· v_j)_i = ω^ij[∑_n'=0^ia_n'ω^(k-n')j+∑_n'=i+1^k-1a_n'ω^j(k-n')]
=ω^ij[ a_0+a_k-1ω^j+a_k-2ω^2j+… +a_1ω^(k-1)j]=λ_jω^ij.
It follows that C· v_j = λ_j· v_j, so v_j is an eigenvector with eigenvalue λ_j.
Assume that you have a k× k circulant matrix C with entries in a field 𝔽 that has k distinct kth roots of unity. Then the eigenvectors
v_j=[ 1; ω^j; ω^2j; ⋮; ω^(k-1)j ]
form a basis for the vector space 𝔽^k and thus C is diagonalizable.
Let V be the matrix [v_j:0≤ j ≤ k-1]. Note that V is a Vandermonde matrix. Thus
det(V) = ∏_0≤ i<j≤ k-1(ω^j-ω^i).
Suppose det(V) = 0. Then ω^j - ω^i = 0 for some i≠ j. This is a contradiction with the fact that 𝔽 has k distinct kth roots of unity. Hence, det(V)≠ 0 and the eigenvectors v_j form a basis of 𝔽^k and C is diagonalizable.
If C is a k× k circulant matrix over a field 𝔽 which has an algebraic extension with k distinct kth roots of unity, then
det(C)=∏_j=0^k-1(a_0+a_1ω^j+…+a_k-1ω^(k-1)j)=∏_j=0^k-1f(ω^j)
where f is the associated polynomial of C.
The determinant of a diagonalizable matrix is the product of the eigenvalues (including multiplicities) listed in Lemma <ref>.
The rank of a k× k circulant matrix C over a field 𝔽 which has an algebraic extension with k distinct kth roots of unity is equal to k-d where d is the degree of (f(x),x^k-1).
Let d be the dimension of the null space of C which is equal to the multiplicity of the eigenvalue 0. An eigenvalue λ_j=0 if and only if a_0+a_k-1ω^j+…+a_1ω^(k-1)j=f(ω^-j)=0. Hence, the dimension of the null space is equal to the number of kth roots of unity which are also roots of f(x). Therefore, d is the degree of the polynomial (f(x),x^k-1) and we obtain rank(C)=k-d.
Suppose p_1 and p_2 are distinct prime integers and p_1 is a generator for the cyclic group 𝔽_p_2^×. Then x^{p_2-1}+…+x+1 is irreducible over 𝔽_p_1.
Let ω be a primitive p_2th root of unity in an algebraic closure of 𝔽_p_1. The group Gal(𝔽_p_1(ω)/𝔽_p_1) is generated by the Frobenius automorphism ϕ:x↦ x^p_1 <cit.>. Since p_1 generates 𝔽_p_2^×, we see that |ϕ|=p_2-1. Thus, [𝔽_p_1(ω):𝔽_p_1]=p_2-1 and x^{p_2-1}+…+x+1 is irreducible over 𝔽_p_1.
Let p_1 and p_2 be primes such that p_1 is a generator for the cyclic group 𝔽_p_2^×. Suppose that C is a p_2× p_2 circulant matrix with entries in 𝔽_p_1. Then rank(C)=0, 1, p_2-1, or p_2.
By Lemma <ref>, we know that x^{p_2-1}+…+x+1 is irreducible over 𝔽_p_1. Hence x^{p_2}-1 factors as (x-1)(x^{p_2-1}+…+x+1) over 𝔽_p_1. By Lemma <ref>, we see that d=0, 1, p_2-1, or p_2 from which our result follows.
§ RESULTS FOR N=P PRIME
In Section <ref>, we gave a description of the monodromy group in terms of the elementary divisors of a particular circulant matrix. Although this result (Theorem <ref>) allows one to easily compute the monodromy group, the result is not explicit. We will prove several results below in the special case when n is equal to a prime p. In other words, [a_0,…,a_k-1] represents an algebraic k-gon modulo a prime p. In this case, the group N can be viewed as a module over ℤ/pℤ=𝔽_p and is thus a vector space. In this section, 𝔽_p will denote the finite field with p elements and 𝔽_p^× will denote its group of units.
Suppose that [a_0,…,a_k-1] represents an algebraic k-gon modulo a prime p. Let f(x)=a_0+a_1 x+…+a_k-1x^k-1 and let d be the degree of (f(x),x^k-1). Then the monodromy group of [a_0,…,a_k-1] is G(a_0,…,a_k-1)=C_p^k-d⋊ C_k.
By Lemma <ref>, we know that the rank of the matrix
C=[ a_0 a_k-1 … a_2 a_1; a_1 a_0 a_k-1 a_2; ⋮ a_1 a_0 ⋱ ⋮; a_k-2 ⋱ ⋱ a_k-1; a_k-1 a_k-2 … a_1 a_0 ]
is equal to k-d where d is the degree of (f(x),x^k-1). The rank of a subspace of a vector space determines the group structure and the result follows.
This allows us to translate the problem of finding the rank of a matrix to that of a degree of a gcd. The following corollary shows how we can use this connection to compute the monodromy groups of a large collection of dessins on rational billiards surfaces.
Suppose p_2 is a prime number and p_1 is a prime number that generates the cyclic group (_p_2)^×. Suppose that [a_0,…,a_p_2-1] represents an algebraic p_2-gon modulo p_1 with monodromy group G(a_0,…,a_p_2-1). Let f(x)=a_0+a_1 x+…+a_p_2-1x^p_2-1, then G(a_0,…,a_p_2-1)≅ C_p_1^p_2-1⋊ C_p_2.
By Corollary <ref>, the rank of the appropriate matrix C is 0, 1, p_2-1, or p_2. Since f(1)≡ 0 p_1, we know x-1|f(x) and thus rank(C)≤ p_2-1. Since x^{p_2-1}+…+x+1 is irreducible over 𝔽_p_1 by Lemma <ref>, the degree of (f(x),x^{p_2}-1) is 1 or p_2. If the degree of (f(x),x^{p_2}-1) were p_2 then a_0=…=a_p_2-1=0 since deg(f)≤ p_2-1, which is a contradiction. Hence, the degree of (f(x),x^{p_2}-1) is 1 and the result follows.
Choose p_2=17. Observe that p_1=41 generates the multiplicative group 𝔽_17^×. Hence, any algebraic 17-gon modulo 41 has monodromy group C_41^16⋊ C_17.
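To experiment with these results computationally, one only needs d, the degree of (f(x),x^k-1) over 𝔽_p. The following Python sketch is ours (the function names are not from the text); it implements the polynomial gcd modulo p directly and reproduces the group C_7⋊ C_3 for the triangle [1,2,4] modulo 7 that appears in a later example.

def poly_gcd_mod_p(f, g, p):
    # f, g are coefficient lists [a_0, a_1, ...] over F_p; returns a monic gcd
    def trim(h):
        while h and h[-1] % p == 0:
            h.pop()
        return h
    f, g = trim([c % p for c in f]), trim([c % p for c in g])
    while g:
        inv = pow(g[-1], p - 2, p)              # inverse of the leading coefficient of g
        while len(f) >= len(g):
            shift, c = len(f) - len(g), (f[-1] * inv) % p
            for i, gc in enumerate(g):
                f[i + shift] = (f[i + shift] - c * gc) % p
            f = trim(f)
        f, g = g, f
    inv = pow(f[-1], p - 2, p)
    return [(c * inv) % p for c in f]

def monodromy_exponent(a, p):
    # returns k - d where d = deg gcd(f, x^k - 1) over F_p, so that G = C_p^(k-d) semidirect C_k
    k = len(a)
    d = len(poly_gcd_mod_p(a, [-1] + [0] * (k - 1) + [1], p)) - 1
    return k - d

print(monodromy_exponent([1, 2, 4], 7))  # 1, i.e. G(1,2,4) = C_7 semidirect C_3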
§.§ Possible Monodromy Groups
Now, let's prove a general theorem that lists all possible monodromy groups for polygons [a_0,…,a_k-1] modulo p.
Suppose that [a_0,…,a_k-1] represents an algebraic polygon modulo a prime p. Let f(x)=a_0+a_1 x+…+a_k-1x^k-1 and suppose x^k-1=∏g_i(x) where the g_i(x) are irreducible over 𝔽_p. Further suppose that (f(x),x^k-1)=∏_j=1^ℓg_i_j(x). Then the monodromy group of [a_0,…,a_k-1] is G(a_0,…,a_k-1)=C_p^k-d⋊ C_k where d=∑_j=1^ℓdeg(g_i_j(x)).
In essence, Proposition <ref> gives a list of all potential monodromy groups of algebraic k-gons modulo p. If [a_0,…,a_k-1] is an algebraic k-gon modulo p with monodromy group C_p^k-d⋊ C_k then d must be equal to the sum of degrees of distinct irreducible factors of x^k-1 in 𝔽_p[x]. The factor x-1 must be one of these factors. If there is no way to add up to d the degrees deg(g_i_j(x)) of a subset of the irreducible factors g_i(x) of x^k-1 in 𝔽_p[x], then such a monodromy group cannot occur.
Consider k=3 and p=5. We see that x^3-1 factors as (x-1)(x^2+x+1) modulo 5. Since x-1 is required to be a factor of (f(x),x^k-1), we see that this gcd cannot have degree two. Therefore, the monodromy group C_5^3-2⋊ C_3 is not achieved by any algebraic 3-gon modulo 5.
By Proposition <ref>, we know that d is the degree of (f(x),x^k-1). Since the gcd must be a product of some subset of {g_i(x)}, we see that d is the sum of the degrees of some subset of {g_i(x)}. The theorem follows.
Observe that ℓ≥ 1 because f(1)=a_0+… a_k-1≡ 0 p implies x-1 divides f(x).
Fix a prime p with p∤ k. Suppose x^k-1=∏g_i(x) where the g_i(x) are irreducible over 𝔽_p. Let d=∑_j=1^ℓdeg(g_i_j(x)). Further suppose that g_i_j=x-1 for some i_j. Then there exists an algebraic k-gon [a_0,…,a_k-1] modulo p with monodromy group G(a_0,…,a_k-1)≅ C_p^k-d⋊ C_k.
Let f(x)=∏g_i_j(x). We see that the degree of (f(x),x^k-1) is d. If f(x)=a_0+… +a_k-1x^k-1 then a_0+…+a_k-1≡ 0 p since (x-1)|f(x). Since f(x) is not the zero polynomial over 𝔽_p, we see that (a_0,…,a_k-1,p)=1. Therefore, [a_0,…,a_k-1] is an algebraic k-gon with monodromy group G(a_0,…,a_k-1)≅ C_p^k-d⋊ C_k by Proposition <ref>.
Theorem <ref> proves that all possible monodromy groups from Proposition <ref> are achieved by algebraic polygons modulo p for a fixed prime p. Therefore, it is natural to ask which groups can occur for k-gons modulo p. The following theorem shows that for primes p>k, all possible monodromy groups from Proposition <ref> are achieved by k-gons modulo p.
Fix a prime p>k≥ 3. Suppose x^k-1=g_1(x)⋯ g_ℓ(x) where the g_i(x) are irreducible over 𝔽_p. Let d=∑_j=1^m deg(g_i_j(x)) where m is a positive integer less than ℓ and 1≤ i_1<…<i_m≤ℓ. Further suppose that g_i_j=x-1 for some i_j. Then there exists a k-gon [a_0,…,a_k-1] modulo p with monodromy group G(a_0,…,a_k-1)≅ C_p^k-d⋊ C_k.
We only consider primes p>k in Theorem <ref>, because p∤ k in this case. The polynomial x^k-1 has no repeated factors over 𝔽_p when p∤ k, which implies that there is an algebraic extension of 𝔽_p with k distinct kth roots of unity. Furthermore, Theorem <ref> is not true for primes p≤ k in its current formulation. Consider k=3 and p=3. Since x^3-1=(x-1)^3 modulo 3, Theorem <ref> would predict the existence of 3-gons with monodromy groups C_3^2⋊ C_3 and C_3⋊ C_3. However, the only 3-gon is [1,1,1], and thus the only possible monodromy group of a 3-gon modulo 3 is C_3⋊ C_3.
§ PROVING THEOREM <REF>
Here we lay out the basic strategy and supporting lemmas we will use to prove Theorem <ref>.
§.§ Strategy for Proving Theorem <ref>
Recall that Lemmas 8 and 9 allow us to construct a geometric polygon with monodromy group G if we can find an algebraic polygon with all nonzero entries that has an isomorphic monodromy group. To control the number of nonzero entries in an algebraic polygon, we define:
For a polynomial f(x)=a_nx^n+…+a_1x+a_0 with a_n 0, let w(f(x)) be the maximum number of consecutive coefficients of f(x) that are zero. For example, if g(x)=x^7-x^3+1, then w(g(x))=3 since a_6=a_5=a_4=0 while a_3,a_7 0.
Now, our strategy for proving Theorem <ref> is the following:
* For a given monodromy group G≅ C_p^k-d⋊ C_k described in Theorem <ref>, find an appropriate polynomial g(x) satisfying g(x)|x^k-1, x-1|g(x), and (g(x))=d.
* Using Proposition <ref>, multiply g(x) by a series of linear polynomials to produce a polynomial f(x), each of which reduces the value of the w function but leaves (f(x),x^k-1)=g(x). Repeat until g(x) has been transformed into a polynomial f(x)=∑ b_ix^i of degree k-1 with w(f(x))=0 and (f(x),x^k-1)=g(x).
* Use Lemmas <ref> and <ref> to transform [b_0,…,b_k-1] into a geometric polygon with monodromy group G.
The proofs of Theorem <ref> and Proposition <ref> follow the above approach. However, the proof of Proposition <ref> differs slightly.
In the following proposition, we show that if we choose α appropriately, then w((x-α)· f(x))=max(w(f(x))-1,0).
Let 𝔽 be a field. Suppose that f(x)=a_nx^n+a_n-1x^n-1+… +a_1 x+a_0∈𝔽[x] with a_0,a_n≠ 0. If {α_0,…,α_n} are distinct non-zero elements of 𝔽, then there exists at least one α_i such that w(f(x)· (x-α_i))=max(w(f(x))-1,0).
Consider the coefficients of f(x)· (x-α_i)=b_n+1x^n+1+b_nx^n+…+b_1 x+b_0. Observe that b_0,b_n+1≠ 0. Further observe that for 0<j<n+1, b_j=a_j-1-α_i a_j. If b_j=0 then one of three situations must arise:
* a_j-1=a_j=0
* a_j-1=α_i=0
* α_i=a_j-1/a_j and a_j≠ 0
Situation (b) cannot arise, because α_i is chosen from non-zero elements of 𝔽. By the pigeon hole principle, there exists at least one α_i in {α_0,…,α_n} such that α_i≠ a_j-1/a_j for all 0≤ j≤ n. Our choice of α_i prevents situation (c) from arising. Since situation (a) cannot occur if w(f(x))=0 then w(f(x)· (x-α_i))=0 in this case.
Now we consider the case where w(f(x))>0. By our choice of α_i, b_j=0 implies a_j=a_j-1=0. Assume that w(f(x))=d+1 which implies there exist a_ℓ=…=a_ℓ+d=0 with a_ℓ-1≠ 0 and a_ℓ+d+1≠ 0. We see that b_ℓ≠ 0 and b_ℓ+d+1≠ 0 and b_ℓ+1,…,b_ℓ+d=0. Hence, we have shown that w(f(x)· (x-α_i))=w(f(x))-1.
Now, we prove a useful result about the gcd of collections of polynomials with x^k-1.
Let 𝔽 be a field and let f(x)=a_k-1x^k-1+…+a_1x+a_0∈𝔽[x] be a polynomial of degree less than k. Then (f(x),x^k-1)=(x· f(x)-a_k-1(x^k-1),x^k-1).
Observe that (f(x),x^k-1)=(x· f(x),x^k-1) since x does not divide x^k-1. If g(x)=(x· f(x),x^k-1) then it is clear that g(x) divides x· f(x)-a_k-1(x^k-1). If h(x)=(x· f(x)-a_k-1(x^k-1),x^k-1), then h(x) divides x· f(x)-a_k-1(x^k-1)+a_k-1(x^k-1)=x· f(x). Hence, (f(x),x^k-1)=(x· f(x)-a_k-1(x^k-1),x^k-1).
Consider the field 𝔽_2. Let f(x)=x^5+x^2+x+1. Using Lemma <ref>, we deduce that
(x^5+x^2+x+1,x^7-1)=(x^6+x^3+x^2+x, x^7-1)=(x^4+x^3+x^2+1,x^7-1)
In the following proposition, we prove a result about the maximum value of w(f(x)) for polynomials f(x) dividing x^k-1.
Let 𝔽 be a field and let f(x)=a_dx^d+…+a_1 x+a_0∈𝔽[x]. Suppose that f(x) is a non-zero polynomial with deg(f(x))=d and f(x)|x^k-1. Then w(f(x))<k-d.
If k-d≥ d=deg(f(x)), then the result is trivial. Otherwise, suppose that w(f(x))≥ k-d. This implies that f(x) has k-d consecutive coefficients equal to zero. For the purposes of this proof, assume a_ℓ,… ,a_ℓ+k-d-1=0 for some ℓ<d.
Use Lemma <ref> exactly k-(ℓ+k-d-1)-1=d-ℓ times. That is, consider
g(x)=x^d-ℓ f(x)-(a_k-1x^d-ℓ-1+a_k-2x^d-ℓ-2+…+x· a_ℓ+k-d+1+a_ℓ+k-d )(x^k-1)
which can be rewritten as
g(x)=∑_i=d^k-1a_i-d+ℓ x^i+∑_i=d-ℓ^d-1a_i-d+ℓ x^i+∑_i=0^d-ℓ-1a_i+k-d+ℓx^i.
Lemma <ref> states that (g(x),x^k-1)=(f(x),x^k-1). Since the first summation above is equal to zero, we see that deg(g(x))≤ d-1. This implies that the degree of (g(x),x^k-1) is less than d, which is a contradiction since (f(x),x^k-1)=f(x) and deg(f(x))=d.
This proposition allows us to immediately obtain the following interesting corollary. Though the result is surely known, we could not find a reference for it.
If f(x)=a_k-1x^k-1+a_k-2x^k-2+… +a_1x+a_0 divides x^k-1 in a field 𝔽 and deg(f(x))=k-1, then a_i≠ 0 for 0≤ i≤ k-1.
§.§ Proving Theorem <ref> for p>k+1
In this section, we prove Theorem <ref> in the case where p>k+1.
Fix an integer k≥ 3. Suppose that d|k and d<k. For primes p>k, there exists a k-gon [a_0,…,a_k-1] modulo p with monodromy group G(a_0,…,a_k-1)≅ C_p ^k-d⋊ C_k.
Consider f(x)=(x^d-1)^{k/d-1}(x^{d-1}+⋯+x+1)=b_0+b_1x+…+b_k-1x^k-1. Since p>k, the binomial coefficients in the expansion of (x^d-1)^{k/d-1} are nonzero, and thus b_i≢0 p for 0≤ i≤ k-1. Further observe that x^k-1 has no repeated factors since p∤ k. Since x^{d-1}+… +x+1 divides x^d-1 and x^d-1 divides x^k-1, we deduce that (f(x),x^k-1)=x^d-1. Therefore, [b_0,…,b_k-1] is an algebraic k-gon modulo p.
By Lemma <ref>, Lemma <ref>, Proposition <ref>, and Proposition <ref>, [b_0,…,b_k-1] has a k-gon associate [a_0,…,a_k-1] modulo p with monodromy group G(a_0,…,a_k-1)≅ C_p^k-d⋊ C_k.
The following theorem is crucial in the proof of Theorem <ref>.
Let k≥ 3 be an integer, and let p>k be a prime. Suppose x^k-1=∏g_i(x) where the g_i(x) are irreducible over 𝔽_p. Let d=∑_j=1^ℓdeg(g_i_j(x)). Let M equal the number of roots of (x^k-1)/∏g_i_j in 𝔽_p. Further suppose that g_i_j=x-1 for some i_j. If p>k+M, there exists a k-gon [a_0,…,a_k-1] modulo p with monodromy group G(a_0,…,a_k-1)=C_p^k-d⋊ C_k.
Let g(x)=∏g_i_j(x), which implies deg(g(x))=d. By Proposition <ref>, w(g(x))<k-d. To produce a degree k-1 polynomial f(x) with (f(x),x^k-1)=g(x), we will use Proposition <ref> exactly k-d-1 times. The result of this process will be a new polynomial f(x) equal to g(x) times k-d-1 linear polynomials, and f(x) will have the property that w(f(x))=0.
To use Proposition <ref>, we must have at least k distinct nonzero α∈𝔽_p. Furthermore, these α cannot be roots of (x^k-1)/g(x). If α were a root of (x^k-1)/g(x), then ((x-α)· g(x),x^k-1) would have degree greater than d. Since 𝔽_p has p-1 nonzero elements, we need p-1≥ k+M to satisfy the assumptions of Proposition <ref> and thus we need p>k+M.
The result of using Proposition <ref> exactly k-d-1 times is a degree k-1 polynomial f(x)=b_k-1x^k-1+…+b_1x+b_0 with b_k-1,…,b_0≢0 p. Observe that b_0+…+b_k-1≡ 0 p since x-1|f(x). By Lemma <ref>, Lemma <ref>, and Proposition <ref>, there exists a k-gon [a_0,…,a_k-1] modulo p with monodromy group G(a_0,…,a_k-1)≅ C_p^k-d⋊ C_k.
Theorem <ref> proves Theorem <ref> for most k and p as illustrated in the following corollary.
Fix an integer k≥ 3. Theorem <ref> is true for primes p>k+1.
Fix p>k+1. Suppose that (p-1,k)=d. We claim that 𝔽_p^× contains exactly d distinct kth roots of unity. Observe that 𝔽_p^×≅ C_p-1≅ℤ/(p-1)ℤ. Finding the number of kth roots of unity in 𝔽_p^× is equivalent to finding the number of solutions to kx≡ 0 p-1 in ℤ/(p-1)ℤ. Since (k/d,p-1)=1, we see that the number of solutions to kx=k/d(dx)≡ 0 p-1 is the same as the number of solutions to dx≡ 0 p-1. Since d|p-1, there are d solutions to dx≡ 0 p-1 and thus 𝔽_p^× contains exactly d distinct kth roots of unity. The remaining kth roots of unity lie in an algebraic extension of 𝔽_p.
In Theorem <ref>, M≤ d-1 since the factor g_i_j=x-1 for some i_j. Since p≠ k+1 and (p-1,k)=d, we deduce that p>k+d>k+M. Thus, Theorem <ref> is true when p>k+1.
To prove Theorem <ref>, one need only verify it for integers k≥ 3 where p=k+1 is prime.
§.§ Proving Theorem <ref> for p=k+1
In this section, we prove Theorem <ref> in the remaining cases in which p=k+1.
If p=k+1 then x^k-1 splits completely into linear terms over 𝔽_p since x^p-x=x(x^k-1) is the polynomial whose roots are the elements of 𝔽_p.
Suppose p=k+1 is an odd prime. Let d|k with d>1. There exists a polynomial x^d-a∈𝔽_p[x] with no roots in 𝔽_p.
Since 𝔽_p^× is a cyclic group under multiplication, let a be a generator of this cyclic group. We claim x^d-a has no roots in 𝔽_p. Suppose x^d-a had a root in 𝔽_p. This would imply that there exists an element b∈𝔽_p satisfying b^d=a. However, this would imply that a^k/d=(b^d)^k/d=b^k=1, a contradiction with the fact that the order of a under multiplication is k.
Suppose p=k+1 is an odd prime. Further suppose 0<d<k/2. There exists a k-gon [a_0,…,a_k-1] modulo p with monodromy group G(a_0,…,a_k-1)=C_p^k-d⋊ C_k.
By Lemma <ref>, there exists a polynomial x^{k/2}-a that has no linear factors in 𝔽_p[x]. Thus, (x^{k/2}-a,x^k-1)=1. We need to produce a polynomial g(x) of degree k/2-1 so that w(g(x))=0 and the (g(x),x^k-1) has degree d. If we can find such a g(x), then h(x)=(x^{k/2}-a)· g(x) has degree k-1, the (h(x),x^k-1) has degree d, and w(h(x))=0.
Consider (x-1)^{k/2-d} whose coefficients are nonzero modulo p. We need to find a sequence of distinct elements α_i∈𝔽_p so that if we set g(x)=(x-1)^{k/2-d}∏_i=1^d-1(x-α_i) then w(g(x))=0. We proceed by induction. Suppose we have already found j distinct elements α_i∈𝔽_p so that g̃(x)=(x-1)^{k/2-d}∏_i=1^j(x-α_i) and w(g̃(x))=0. How many choices for α_j+1 are there? By Proposition <ref>, since deg(g̃)=k/2-d+j, we need more than k/2-d+j choices to select α_j+1 so that w(g̃· (x-α_j+1))=0. We also remove j possible nonzero elements of 𝔽_p from consideration when we choose α_j+1 to ensure all α_i are distinct. Since j<d<k/2, we see that k/2-d+j<k-j. Thus, by the pigeon hole principle, there exists a nonzero α_j+1 so that the α_i are distinct for 1≤ i≤ j+1 and w(g̃· (x-α_j+1))=0.
By induction, we have shown there exists a polynomial g(x)=(x-1)^{k/2-d}∏_i=1^d-1(x-α_i) where the α_i are distinct and w(g(x))=0. Now, let h(x)=g(x)· (x^{k/2}-a). We see that deg(h(x))=k-1, the (h(x),x^k-1) has degree d, and w(h(x))=0.
By Lemma <ref>, Lemma <ref>, and Proposition <ref>, there exists a k-gon [a_0,…,a_k-1] modulo p with monodromy group G(a_0,…,a_k-1)≅ C_p^k-d⋊ C_k.
Suppose k≥ 3 and p=k+1 is prime. Further suppose k/2< d<k. There exists a k-gon [a_0,…,a_k-1] modulo p with monodromy group G(a_0,…,a_k-1)=C_p^k-d⋊ C_k.
Since k is even, observe that x^{k/2}+1 divides x^k-1. Let S be the set of roots of x^{k/2}+1 in 𝔽_p. Choose a set T={α_1,…,α_d-k/2}⊂𝔽_p^× so that the α_i are distinct, α_1=1, and α_i∉S for all i.
Setting g̃(x)=∏_i=1^d-k/2(x-α_i), observe that g̃(x) divides x^{k/2}-1. By Proposition <ref>, we see that w(g̃(x))<k/2-(d-k/2)=k-d<k/2. Now, we want to use Proposition <ref> exactly k-d-1 times to find β_j in 𝔽_p so that g(x)=∏_i=1^d-k/2(x-α_i)·∏_j=1^k-d-1(x-β_j) and w(g(x))=0 and each β_j∈ S∪ T. If we have at least k/2 eligible distinct nonzero elements of 𝔽_p, we can use Proposition <ref> exactly k-d-1 times. Since there are d nonzero elements in S∪ T and d>k/2, we can use Proposition <ref> to select our β_j. The result of using Proposition <ref> these k-d-1 times is the polynomial g(x)=∏_i=1^d-k/2(x-α_i)·∏_j=1^k-d-1(x-β_j) which has the properties that w(g(x))=0 and each β_j∈ S∪ T.
Now, let h(x)=g(x)· (x^{k/2}+1). We see that deg(h(x))=k-1, that (h(x),x^k-1) has degree d, and that w(h(x))=0. By Lemma <ref>, Lemma <ref>, and Proposition <ref>, there exists a k-gon [a_0,…,a_k-1] modulo p with monodromy group G(a_0,…,a_k-1)≅ C_p^k-d⋊ C_k.
Now, we proceed with the proof of Theorem <ref>.
The case where p>k+1 was proven in Corollary <ref>. Now consider the case when p=k+1 is an odd prime. If 1≤ d≤ k-1, we claim there exists a k-gon modulo p with monodromy group C_p^k-d⋊ C_k. The case where d<k/2 was proven in Proposition <ref> and the case where d>k/2 was proven in Proposition <ref>. The case where d=k/2 is a consequence of Proposition <ref> because k/2 divides k. Thus, the proof of Theorem <ref> is complete.
§ RESULTS FOR COMPOSITE N
In this section, we will prove several results about monodromy groups when n is composite relying heavily on the theory of algebraic polygons from Section <ref>. This first proposition shows that you can combine k-gons with relatively prime moduli to create a new k-gon whose monodromy group is closely related to the monodromy groups of the initial k-gons.
Suppose that [a_0,…,a_k-1] and [b_0,…,b_k-1] represent k-gons modulo n_1 and n_2 respectively where (n_1,n_2)=1. Suppose their respective monodromy groups are N_1⋊ C_k and N_2⋊ C_k. Then there exists a k-gon [c_0,…,c_k-1] modulo n_1n_2 with monodromy group (N_1× N_2)⋊ C_k.
This proposition is an immediate consequence of Proposition <ref>, Lemma <ref>, and Lemma <ref>.
Here is an example of the use of Proposition <ref>.
Consider the quadrilateral [a_0,a_1,a_2,a_3]=[1,4,4,1] which has modulus n_1=5. The monodromy group of D(1,4,4,1) is C_5^2⋊ C_4. Also consider the quadrilateral [b_0,b_1,b_2,b_3]=[2,3,4,3] which has modulus n_2=6. The monodromy group of D(2,3,4,3) is C_6^2⋊ C_4. We can solve a system of four congruences modulo 5· 6=30. Observe that if we set [c_0,c_1,c_2,c_3]=[26, 9, 4,21] then we have c_i≡ a_i 5 and c_i≡ b_i 6. We see that c_0+c_1+c_2+c_3=2· 30. If this had not been the case, we could have modified the coefficients using Lemma <ref> and Lemma <ref> without changing the monodromy group. Finally, one can compute that the monodromy group of D(26,9,4,21) is C_30^2⋊ C_4≅ (C_5^2× C_6^2)⋊ C_4.
You can use Proposition <ref> to project a k-gon modulo n_1n_2 to an algebraic k-gon modulo n_1. However, this proposition does not guarantee that the new algebraic k-gon will have a k-gon associate as illustrated in the following example.
Consider the polygon [c_0,c_1,c_2]=[1,1,4] modulo 6 which has monodromy group (C_6× C_2)⋊ C_3. Consider the reduction c_i≡ a_i 2 to obtain [a_0,a_1,a_2]=[1,1,0]. The monodromy group of [1,1,0] modulo 2 is C_2^2⋊ C_3. However, there do not exist any 3-gons modulo 2.
The above example illustrates how we must understand monodromy groups of algebraic polygons, and not polygons, in order to classify all possible monodromy groups for k-gons modulo composite n.
Fix an abelian group N and a positive integer n=∏p_j^x_j where the p_j are distinct primes. There exists a k-gon [c_0,…,c_k-1] modulo n with monodromy group N⋊ C_k if and only if there exist algebraic k-gons [a_0^(j),…,a_k-1^(j)] modulo p_j^x_j with monodromy groups (N/p_j^x_jN)⋊ C_k and for every 0≤ i≤ k-1 there exists some j for which a_i^(j)≢0 p_j^x_j.
If [c_0,…,c_k-1] is a k-gon with the desired monodromy group N⋊ C_k, then the forward direction of the proof follows immediately from Proposition <ref> and the fact that c_i≢0 n for all i.
Suppose there exist algebraic k-gons [a_0^(j),…,a_k-1^(j)] modulo p_j^x_j with monodromy groups (N/p_j^x_jN)⋊ C_k and for every 0≤ i≤ k-1 there exists some j for which a_i^(j)≢0 p_j^x_j. The reverse direction of the proof follows from Proposition <ref>, Lemma <ref>, and Lemma <ref>.
The condition that a_i^(j)≢0 p_j^x_j in Proposition <ref> is satisfied if at least one of the algebraic k-gons [a_0^(j),…,a_k-1^(j)] is an actual k-gon. This is sufficient but not necessary.
Proposition <ref> translates the problem of understanding the monodromy groups of all algebraic k-gons to the problem of understanding monodromy groups for algebraic k-gons with prime power moduli.
There does not exist a 3-gon modulo 35 with monodromy group N⋊ C_3 where N≅ C_35 or where N≅ C_35× C_7. Suppose there were such a 3-gon [c_0,c_1,c_2] modulo 35. Then the projection of [c_0,c_1,c_2] modulo 5 (using Proposition <ref>) would have monodromy group 7N⋊ C_3≅ (N/5N)⋊ C_3 which is isomorphic to C_5⋊ C_3 in both the case where N≅ C_35 and N≅ C_35× C_7. However, C_5⋊ C_3 is not a possible monodromy group for any algebraic 3-gon modulo 5 by Proposition <ref>.
§.§ Triangular Billiards Surfaces
One well-known property of the Smith Normal Form for is summarized in the following lemma.
If d_1,…,d_k are the elementary divisors of the Smith Normal Form of a matrix A over ℤ, then d_1⋯ d_j is equal to the gcd of the determinants of all j× j minors of the matrix A.
This property allows us to reprove Corollary <ref> using a method that will extend to the higher k-gons.
Consider the arbitrary rational triangle with angles (a_0π/n,a_1π/n,a_2π/n), where the a_i are positive integers, a_0+a_1+a_2=n, and (a_0,a_1,a_2,n)=1. The normal subgroup N of the associated monodromy group is represented by the column span of C=[ a_0 a_1 a_2; a_1 a_2 a_0; a_2 a_0 a_1 ]
over ℤ/nℤ. Observe that
C=[ a_0 a_1 a_2; a_1 a_2 a_0; a_2 a_0 a_1 ]
=
[ 1 0 0; 0 1 0; -1 -1 1 ][ a_0 a_1 0; a_1 a_2 0; 0 0 0 ][ 1 0 -1; 0 1 -1; 0 0 1 ].
The elementary divisors of C are the same as the elementary divisors of C'=[ a_0 a_1 0; a_1 a_2 0; 0 0 0 ]. Using Lemma <ref>, we deduce that d_1=(a_0,a_1,a_2,n)=1. By looking at the 2× 2 minors of C', we further deduce that d_1d_2=d_2=(a_0a_2-a_1^2,n). It then follows from Theorem 2 that the monodromy group of the (a_0,a_1,a_2) triangle is
(C_n× C_n/α)⋊ C_3,
where d_2=α=(n,a_0a_2-a_1^2).
Although Corollary <ref> gives a formula for computing the monodromy group of the dessin drawn on a triangular billiards surface, it does not specify which monodromy groups can arise. The following theorem classifies the monodromy groups of all rational triangular billiards surfaces modulo n.
Fix n∈ with n> 3. The set of possible monodromy groups for triangles modulo n includes precisely those groups of the form (C_n× C_n/α)⋊ C_3 where α|n and α=3^i∏_jp_j^n_j where the p_j are primes congruent to 1 modulo 3, i∈{0,1}, and n_j≥ 0. If n=3, the only possible monodromy group is C_3⋊ C_3.
The proof of this theorem utilizes results from algebraic number theory. Use any introductory graduate book on the topic, such as <cit.>, as a reference.
Recall that the monodromy group associated to the triangle (a_0,a_1,a_2) modulo n is (C_n× C_n/α)⋊ C_3 where α=(a_0a_2-a_1^2,n). What values can a_0a_2-a_1^2 take modulo n?
Observe that a_2≡ -a_0-a_1 n. Hence, a_0a_2-a_1^2≡ a_0(-a_0-a_1)-a_1^2≡ -(a_0^2+a_0a_1+a_1^2) n. Further observe that a_0^2+a_0a_1+a_1^2=N(a_0-a_1ζ_3) where ζ_3 is a primitive third root of unity and N is the norm map from ℤ[ζ_3] to ℤ. So we can answer the question about the possible values of α by asking what values are in the image of the norm map. However, there are some restrictions on a_0 and a_1. Since (a_0,a_1,a_2,n)=1 and a_0+a_1+a_2=n, we deduce that (a_0,a_1,n)=1. Hence, if a_0 and a_1 have a common factor greater than 1, that factor does not divide n. Therefore, to find a triangle modulo n with monodromy group (C_n× C_n/α)⋊ C_3, we must find an ideal (a_0-a_1ζ_3) in ℤ[ζ_3] with the properties that (N(a_0-a_1ζ_3),n)=α and (a_0,a_1,n)=1.
The fact that the norm map is multiplicative will allow us to answer the question by examining ideals with norm of prime power order. Since ideals factor uniquely as products of prime ideals in ℤ[ζ_3], suppose the ideal (a_0-a_1ζ_3)=∏p_j^n_j where the p_j are distinct prime ideals in ℤ[ζ_3]. If p_j=(b_0-b_1ζ_3) then (b_0,b_1,n)=1. If (b_0,b_1,n)≠ 1, then (a_0,a_1,n)≠ 1. Secondly, if p^n_j|(N(a_0-a_1ζ_3),n) one of the following three situations must arise:
* p^n_j/2=(p)^n_j/2 is in the factorization of the ideal (a_0-a_1ζ) if p is an inert prime with N(p)=p^2.
* p_1^xp_2^n_j-x is in the factorization of the ideal (a_0-a_1ζ_3) if p_1 and p_2 are the two primes above (p) in [ζ_3]. In this case, N(p_1)=N(p_2)=p.
* p^n_j is in the factorization of the ideal (a_0-a_1ζ) if p is a ramified prime with N(p)=p.
To summarize, we want to know whether, when p is a prime dividing n, there exists an ideal (b_0-b_1ζ_3) satisfying N(b_0-b_1ζ_3)=p^n_j with p^n_j|n and (b_0,b_1,n)=1.
First consider a prime p≡ 2 3. Observe that the ideal (p)⊂[ζ_3] is an inert prime ideal that has norm p^2. Hence, p is not in the range of the norm map. If b_0-b_1ζ_3∈[ζ_3] has norm p^n_j then the ideal generated by b_0-b_1ζ_3 has the property (b_0-b_1ζ_3)=(p^n_j/2) since ideals factor uniquely as products of prime ideals in [ζ_3]. Hence, p divides b_0 and b_1, which implies p∤ n. Hence, p≢2 3.
Now consider a prime p≡ 1 3. There is a prime ideal (y-zζ_3) of norm p since the ideal (p) splits in [ζ_3]. Note that (y,z)=1 since N(y-zζ_3)=y^2+yz+z^2=p. Set (y-zζ_3)^n_j=(b_0-b_1ζ_3). Observe that the ideal (b_0-b_1ζ_3) is an ideal with norm p^n_j. Now, we deduce (b_0,b_1)=1 from the fact that ideals factor uniquely in [ζ_3]. Since N(b_0-b_1ζ_3)=p^n_j, the only factor they could have in common is p. But if p|b_0 and p|b_1 then the ideal (p) would divide (y-zζ_3)^n_j, which is a contradiction since the ideal (p) factors as a product of two distinct prime ideals of norm p, namely (y-zζ_3) and (y-zζ_3^2). Clearly, (y-zζ_3^2) is not in the unique factorization of (y-zζ_3)^n_j. Hence, (b_0,b_1)=1. Therefore, if p≡ 1 3 is a prime dividing n, then there exist b_0,b_1 with (b_0,b_1)=1 and N(b_0-b_1ζ_3)=p^n_j.
Now consider the case when p=3. The unique prime ideal of norm 3 in [ζ_3] is (1-ζ_3). If N(b_0-b_1ζ_3)=3^i where i>1 then the ideal (3) would divide (b_0-b_1ζ_3) since the ideal (1-ζ_3)^2=(3). Since ideals have unique prime ideal factorizations in [ζ_3], we would have 3|b_0 and 3|b_1, a contradiction. Hence, when p=3, the only ideal (b_0-b_1ζ_3) satisfying N(b_0-b_1ζ_3)=3^i with 3^i|n and (b_0,b_1,n)=1 occurs when i∈{0,1}.
Using the multiplicative property of the norm map, if α=3^i∏_jp_j^n_j divides n where the p_j are primes congruent to 1 modulo 3, i∈{0,1}, and n_j≥ 0, then there exist positive integers a_0,a_1 with (a_0,a_1,n)=1, and (a_0a_2-a_1^2,n)=α if a_2=n-a_0-a_1. To use Lemma <ref>, we must verify that a_0,a_1,a_2≢0 n.
Assume α≠ 1. By way of contradiction, assume one of the a_i≡ 0 n. Without loss of generality, assume a_2≡ 0.
In this case, a_0≡ -a_1 n. Thus, a_0^2+a_0a_1+a_1^2≡ a_0^2 n. Hence, (N(a_0-a_1ζ_3),n)=(a_0^2+a_0a_1+a_1^2,n)=(a_0^2,n). Since, (a_0,a_1,n)=(a_0,-a_0,n)=1, then (N(a_0-a_1ζ_3),n)=(a_0^2,n)=1, a contradiction.
Thus, if α≠ 1, we can use Lemma <ref> to adjust [a_0,a_1,a_2] so that it is a geometric 3-gon modulo n without altering the gcd α above. Thus by Corollary <ref>, we have obtained the required monodromy group when α≠ 1.
Now consider the case when α=1. Instead of showing a_i≢0 n in the above construction, we instead find explicit geometric triangles with monodromy group (C_n× C_n)⋊ C_3. If 3∤ n, then consider the triangle [1,1,n-2]. Observe that (a_0^2+a_0a_1+a_1^2,n)=(3,n)=1. Thus, [1,1,n-2] has monodromy group (C_n× C_n)⋊ C_3 when 3∤ n. Now consider the case when 3|n. Consider the triangle [n/3-1,n/3,n/3+1]. This is a geometric triangle when n>3. Observe that a_0^2+a_0a_1+a_1^2=(n/3-1)^2+(n/3-1)n/3+(n/3)^2=1-n+n^2/3. Since 3|n, we see that a_0^2+a_0a_1+a_1^2≡ 1 n. Thus (a_0^2+a_0a_1+a_1^2,n)=1 and the monodromy group of [n/3-1,n/3,n/3+1] is (C_n× C_n)⋊ C_3. In the case when n=3, there is only one geometric triangle, [1,1,1], which has monodromy group C_3⋊ C_3.
The following example illustrates how Theorem <ref> can be used to classify the possible monodromy groups modulo a composite number n.
If n=81, there are only two possible monodromy groups. The triangle [1,2,78] has associated monodromy group (C_81× C_81)⋊ C_3 and the triangle [1,1,79] has associated monodromy group (C_81× C_27)⋊ C_3. However, there does not exist a triangle with associated monodromy group (C_81× C_9)⋊ C_3 or (C_81× C_3)⋊ C_3 or (C_81)⋊ C_3.
§.§ Quadrilateral Billiards Surfaces
One can also use Lemma <ref> to produce an analogue of Corollary <ref> in the quadrilateral case.
Suppose that [a_0,a_1,a_2,a_3] represents a 4-gon modulo n. Let G(a_0, a_1, a_2, a_3) be the monodromy group of the dessin D(a_0,a_1,a_2,a_3) drawn on the quadrilateral billiards surface X(a_0,a_1,a_2,a_3). Then
G(a_0, a_1, a_2, a_3) ≅ (C_n × C_n/d_2× C_n/d_3) ⋊ C_4.
where
d_2=(a_0a_2-a_3^2,a_0a_1-a_2a_3, a_0^2-a_2^2,a_1a_3-a_2^2,a_0a_3-a_1a_2,a_0a_2-a_1^2,n)
and
d_3=((a_0+a_2)((a_0+a_1)^2+(a_1+a_2)^2)/d_2,n) if d_2≠ n, and d_3=n if d_2=n.
The normal subgroup N of the associated monodromy group is represented by the column span of C=[ a_0 a_1 a_2 a_3; a_1 a_2 a_3 a_0; a_2 a_3 a_0 a_1; a_3 a_0 a_1 a_2 ]
over ℤ/nℤ. Let ã_3=-a_0-a_1-a_2. Consider the matrix C'=[ a_0 a_1 a_2 ã_3; a_1 a_2 ã_3 a_0; a_2 ã_3 a_0 a_1; ã_3 a_0 a_1 a_2 ]. Observe that C≡ C' n and thus they have the same elementary divisors modulo n. We will proceed by finding the elementary divisors of C' over ℤ and then reducing them modulo n to get the elementary divisors of C'. Let d_1,d_2,d_3,d_4 be the elementary divisors of C and let d̃_1,d̃_2,d̃_3,d̃_4 be the elementary divisors of C'. Since (a_0,a_1,a_2,ã_3,n)=(a_0,a_1,a_2,a_3,n)=1, the gcd of the one by one minors is 1. Hence, d_1=d̃_1=1 by Lemma <ref>.
Observe that
C'=[ a_0 a_1 a_2 ã_3; a_1 a_2 ã_3 a_0; a_2 ã_3 a_0 a_1; ã_3 a_0 a_1 a_2 ]
=
[ 1 0 0 0; 0 1 0 0; 0 0 1 0; -1 -1 -1 1 ][ a_0 a_1 a_2 0; a_1 a_2 ã_3 0; a_2 ã_3 a_0 0; 0 0 0 0 ][ 1 0 0 -1; 0 1 0 -1; 0 0 1 -1; 0 0 0 1 ].
Thus the elementary divisors of C' are the same modulo n as the elementary divisors of
C”=[ a_0 a_1 a_2 0; a_1 a_2 ã_3 0; a_2 ã_3 a_0 0; 0 0 0 0 ].
Hence, d_4=d̃_4=0. To compute d_2, we compute the gcd of the 2 by 2 minors of C” of which there are only 9 that are nonzero. Three of the minors are duplicates, thus leaving us with 6. These minors are
{a_0a_2-ã_3^2,a_0a_1-a_2ã_3, a_0^2-a_2^2,a_1ã_3-a_2^2,a_0ã_3-a_1a_2,a_0a_2-a_1^2 }. Using Lemma <ref>, we obtain d_2=(d̃_2,n)=(a_0a_2-a_3^2,a_0a_1-a_2a_3, a_0^2-a_2^2,a_1a_3-a_2^2,a_0a_3-a_1a_2,a_0a_2-a_1^2,n).
Lastly, d̃_3 will be equal to the third elementary divisor of C' which is the same as the third elementary divisor of [ a_0 a_1 a_2; a_1 a_2 ã_3; a_2 ã_3 a_0; ]. By Lemma <ref>, we know that d̃_2d̃_3=det[ a_0 a_1 a_2; a_1 a_2 ã_3; a_2 ã_3 a_0; ]=a_0^2a_2+2a_1a_2ã_3-a_2^3-a_0ã_3^2-a_0a_1^2=-(a_0+a_2)((a_0+a_1)^2+(a_1+a_2)^2). Hence, d̃_3=(a_0+a_2)((a_0+a_1)^2+(a_1+a_2)^2)/d̃_2 provided d̃_2≠ 0. If d̃_2=0 then d̃_3=0. Therefore, d_3=(d̃_3,n)=((a_0+a_2)((a_0+a_1)^2+(a_1+a_2)^2)/d_2,n) unless d_2=n in which case d_3=n.
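As a check of these formulas (the arithmetic is ours): for the quadrilateral [1,4,4,1] modulo 5, the six minors are 3, 0, -15, -12, -15, -12, so d_2=(3,0,15,12,15,12,5)=1, and d_3=((1+4)((1+4)^2+(4+4)^2),5)=(445,5)=5; hence G(1,4,4,1)≅ (C_5× C_5× C_1)⋊ C_4≅ C_5^2⋊ C_4, matching the group computed for D(1,4,4,1) earlier in this section.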
§ FUTURE DIRECTIONS
There are many questions that naturally arose in the study of monodromy groups of dessin drawn on rational billiards surfaces. Here are some possible future questions to investigate.
Throughout this paper, we used Proposition <ref>, Lemma <ref>, and Lemma <ref> many times to produce a polygon with the same monodromy group as a particular algebraic polygon. Using Lemma <ref>, we can produce an associate convex polygon in the case where the modulus n=p is prime and p≥ k. It is natural to ask: if G is the monodromy group of a k-gon modulo n, is it also the monodromy group of a convex k-gon modulo n?
How can one generalize Theorem <ref> to primes p≤ k? For p≤ k, a monodromy group attained by an algebraic k-gon may not be attainable by a k-gon. For example, x^6-1=(x-1)^2(x^2+x+1)^2 modulo 2. Thus, there exist algebraic 6-gons modulo 2 with monodromy groups C_2⋊ C_6, C_2^2⋊ C_6, C_2^3⋊ C_6, C_2^4⋊ C_6, and C_2^5⋊ C_6. However, there is only one 6-gon modulo 2, namely [3,1,1,1,1,1], which has monodromy group C_2⋊ C_6.
Can one generalize Proposition <ref> to k-gons where k>4?
In Theorem <ref>, we classified which groups appear as the monodromy group of a triangle. Can one prove an analogous result for the monodromy groups that arise for an arbitrary k-gon?
FaaSwap: SLO-Aware, GPU-Efficient Serverless Inference via Model Swapping
Minchen Yu^† Ao Wang^ Dong Chen^† Haoxuan Yu^† Xiaonan Luo^† Zhuohao Li^†
Wei Wang^† Ruichuan Chen^ Dapeng Nie^ Haoran Yang^
^†Hong Kong University of Science and Technology ^Alibaba Group ^Nokia Bell Labs
=============================================================================================================================================================================================================================================
The dynamic request patterns of machine learning (ML) inference workloads have driven an increasing trend towards exploiting serverless computing for scalable ML model serving.
However, today's serverless platforms lack efficient support for GPUs — provisioning functions on GPUs incurs extremely high overhead, forcing them to keep long-running even when idling for reduced cold starts.
This leads to significant resource waste to perform ML inference and hinders the pay-per-use billing for GPUs.
In this paper, we present , a serverless platform enabling fine-grained, request-level GPU sharing for resource-efficient ML inference.
leverages model swapping to support fast inference execution at low resource cost. It keeps models in a host which has a large amount of cheap memory and quickly swaps models to GPUs when requested, reducing per-function keep-alive cost and enabling efficient GPU sharing across much more functions.
also supports swapping models between GPUs for load balancing and improved inference performance.
In , we design sophisticated request scheduling and memory management algorithms that efficiently exploit model swapping to reduce GPU cost and meet latency service-level objectives (SLOs) for all inference functions.
We have implemented and integrated into Alibaba Cloud Function Compute (FC), one of the world's largest commercial serverless platform.
Evaluation results show that can achieve low-latency model swapping, efficiently share a GPU across hundreds of functions, and satisfy per-function latency SLOs at scale.
§ INTRODUCTION
Machine learning (ML) models have been increasingly deployed in the cloud to deliver ML inference services and boost real-world applications <cit.>.
Model inference is typically performed in real-time under dynamic, bursty request arrival patterns, and thus needs to accommodate changing demands.
Serverless computing offers a compelling approach to enabling scalable model inference: users can simply package models as stateless functions, let cloud providers handle resource provisioning and autoscaling, and be charged by per-request resource usage at a fine granularity (e.g., 1 ms <cit.>).
Mainstream serverless platforms, such as AWS Lambda <cit.>, Azure Functions <cit.> and Alibaba Cloud Function Compute <cit.>, have reported model inference as a popular use case.
However, today's serverless platforms lack efficient support for GPUs, thus exposing a hard tradeoff between inference performance and cost.
Model inference typically has stringent request-level latency service-level objectives (SLOs) such as tens of milliseconds <cit.>, while starting an inference function on a GPU can take a few or tens of seconds (Table <ref>).
Therefore, one has to keep functions alive in GPUs for a long duration, or even use provisioned instances <cit.>, to avoid cold starts and meet strict latency requirements, a practice that is costly and violates the pay-per-use billing of serverless computing.
In addition, inference requests can be highly dynamic <cit.>, therefore the long-running functions are often idle, leading to significant waste of expensive GPU resources.
An ideal serverless platform should allow fine-grained, efficient GPU sharing to achieve cost-effectiveness for both cloud users and providers, while meeting latency SLOs for inference functions, i.e., being SLO-aware.
For cloud users, desired GPU functions should follow a pay-per-use billing model without incurring high overhead to serve inference requests and, if needed, resume from idling.
This ensures low-latency, SLO-aware inference, and allows users to easily reap economic benefits under dynamic request patterns.
For cloud providers, GPUs are much more expensive than CPUs and should be efficiently shared across functions to improve resource efficiency and reduce overall inference cost.
In this paper, we propose to leverage model swapping to enable fine-grained, request-level GPU sharing and efficient model inference.
Our key insight is to keep inference functions alive in host memory, and only swap their models to GPUs when they are activated to serve arriving requests; a GPU is shared by multiple functions across requests.
This incurs no GPU memory footprint when functions are idle, which in turn enables pay-per-use billing for GPUs (i.e., cost-effective for users).
As host memory is much larger than GPU memory, it substantially increases the number of functions each GPU can accommodate, leading to efficient GPU sharing and improved GPU utilization (i.e., cost-effective for providers).
Swapping models between host and GPU can be performed efficiently through PCIe, and thus incurs much lower latency than function cold starts, making it easier to meet request-level latency SLOs (i.e., SLO-aware).
In addition, whenever desired, we also swap models between GPUs via fast NVLink connections for lower latency and load balancing.
However, enabling model swapping in serverless platforms poses both systematic and algorithmic challenges.
First, a serverless platform must efficiently perform model swapping and make it transparent to users, who should not be required to write “swapping logic” in their inference functions and should be unaware of underlying swapping actions.
Since model swapping enables fine-grained GPU sharing, the platform also needs to ensure proper isolation across multiple functions.
Second, model swapping can incur considerable PCIe traffic and cause bandwidth contention during concurrent inference executions of multiple functions, which leads to increased end-to-end latency.
Hence, the platform must design efficient request scheduling and model management algorithms to exploit model swapping such as to meet latency SLOs for all inference functions at low resource cost.
To address these challenges, we present FaaSwap, a GPU-enabled serverless platform with efficient model swapping.
FaaSwap adopts an architecture of GPU pooling, where each worker manages a pool of local GPUs and lets its functions access this pool for inference execution (e.g., via CUDA API redirection), such that model swapping can be easily performed in a GPU pool and be transparent to users.
Specifically, to solve the aforementioned systematic challenges, FaaSwap proposes three key designs that exploit the characteristics of inference to deliver low-latency model swapping and execution.
First, it proposes asynchronous API redirection to avoid frequent synchronizations between the functions and the GPU pool, which effectively eliminates high communication overhead for model inference.
Second, leverages pipeline execution to overlap model swapping and inference execution, which hides the latency of model swapping and results in reduced end-to-end latency.
It also leverages high-speed NVLink between GPUs for fast model swapping whenever possible.
Together with the low-latency API redirection, can efficiently execute models on any idle GPU.
Third, designs an efficient GPU memory management system to facilitate model swapping and inference execution.
It automatically tracks the addresses of models when they get swapped even across multiple GPUs, and easily adjusts each memory access of CUDA APIs accordingly during inference execution.
It also effectively organizes and shares memory blocks to avoid high memory allocation overhead, improving overall performance of model swapping.
In addition, FaaSwap ensures resource and fault isolation in its GPU pool.
To further address the aforementioned algorithmic challenge to meet latency SLOs for all inference functions at low GPU cost, we propose three policies:
First, FaaSwap designs a request scheduling algorithm to reduce model swapping overhead, leading to low end-to-end inference latency.
It divides models into two categories, i.e., heavy or light, according to whether they cause high overhead of swapping through PCIe.
In request scheduling, FaaSwap prioritizes NVLink over PCIe to transmit heavy models across GPUs, and effectively reduces interference caused by concurrent model swapping.
Second, FaaSwap also exploits model heaviness to guide eviction when GPU memory is insufficient.
It tends to cache heavy models in GPUs and evict light ones; together with request scheduling, this substantially reduces swapping overhead.
Third, FaaSwap proposes an SLO-aware request queueing policy, which prioritizes requests to functions that have a higher chance of meeting SLOs and thus effectively increases the total number of SLO-compliant functions.
We have implemented and evaluated FaaSwap atop Alibaba Cloud Function Compute (FC) <cit.>, one of the world's largest commercial serverless platforms.
Evaluation results show that FaaSwap achieves low-latency model inference and swapping in its GPU pool, leading to performance comparable with native execution.
FaaSwap can share a GPU across hundreds of functions and load-balance GPUs with model swapping, resulting in over 10× cost reduction compared with the current GPU offering in FC.
With its efficient policies, FaaSwap can serve 480 functions on a single 4-GPU worker while achieving low tail latency and satisfying millisecond-scale SLOs for all functions.
Cluster experiments further show that FaaSwap can effectively scale with the number of functions at low resource cost, meeting per-function latency SLOs for thousands of functions using 6 GPU workers.
§ BACKGROUND AND MOTIVATION
In this section, we motivate the need of having a GPU-enabled serverless
platform for high-performance inference and identify three key requirements
in this regard. We also discuss the inefficiency of existing solutions.
§.§ Serverless Inference and the Need of GPU
As a prominent serverless platform with a global presence, FC has observed
a growing adoption among enterprise customers who choose to provision their
inference services using serverless functions, known as “serverless inference.” In this approach, users package models and
inference code into containers and publish them as serverless functions,
which can be dynamically invoked to make predictions. With serverless
inference, users are relieved of the burden of server management, which is
automatically handled by FC, such as provisioning, autoscaling,
scheduling, and fault tolerance. Serverless inference can also enable
significant cost savings as users do not pay for idle resources under
the pay-per-use pricing model <cit.>.
In FC, a function's requests typically exhibit dynamic, bursty arrival
patterns as shown in Fig. <ref>,
consistent with previous research findings <cit.>. Leveraging the high elasticity of
serverless computing, inference functions can quickly scale in response to
the changing workload, while users are billed based on the function runtime, with billing granularity as fine as 1 ms <cit.>.
However, both FC and other leading serverless platforms currently lack
efficient support for GPUs, impeding their ability to achieve
high-performance inference. In fact, numerous users have expressed a
compelling need to execute their models on GPU-enabled functions, indicating
the strong market demand for GPU-accelerated inference in current FaaS
platforms.
§.§ Key Requirements
Based on our operational experiences and interactions with customers,
we have identified three key requirements for building an efficient GPU-enabled
serverless inference platform.
Compliance to latency SLOs.
Enterprise users often have stringent latency requirements for online
inference, which is the key driver behind their demand for GPU support
in FC. Therefore, our platform should allow users to specify their latency
requirements as SLOs, such as ensuring that at least 99% of inference
requests are served within 200 ms <cit.>. The platform
should strive to meet the latency SLOs for all functions, if possible.
Pay-per-GPU-use.
Compared to the traditional “serverful” approach, one of the key advantages
of serverless computing is its pay-per-use billing model. Users hence require that
their inference functions are billed based on the actual GPU
usage, with charges incurred only when the functions are invoked and running on
GPUs (pay-per-GPU-use).[Note that
in our experience, enterprise customers are willing to pay a nominal fee to
retain idle functions in host memory for substantially improved performance,
similar to the current function keep-alive charge meant to avoid cold-start
overhead <cit.>.] This is crucial for achieving
substantial cost savings in the presence of dynamic inference workloads (Fig. <ref>), considering the high cost of GPUs.
GPU-efficient inference.
For serverless providers like FC, minimizing the resource provisioning cost
is the key to maintaining market competitiveness. Given the
significantly higher cost of GPUs compared to other resources, the platform
should serve as many inference functions as possible using a minimum number of
GPUs, thereby attaining the highest GPU utilization. This essentially requires
fine-grained and efficient GPU sharing.
§.§ Existing Solutions and Their Inefficiency
Inefficiency of existing solutions.
Achieving the three requirements presents non-trivial challenges. Compared to
CPU functions, running inference functions on GPUs incurs considerable
startup overhead. Table <ref> provides a comparison of
model execution times when the inference functions are warm- and cold-started
on V100 GPUs in FC.[For cold-start, we exclude the delay of
fetching a remote container image or model file, which can take extra seconds
to minutes to complete <cit.>.] Cold-start results in a
two orders of magnitude slowdown due to the need for GPU container setup, ML
framework startup (PyTorch in our case), GPU runtime creation, and model
loading. This leads to extremely long latency that far exceeds the SLO requirement
of model inference.
To avoid cold-start, a common approach is to maintain provisioned functions
that remain active on GPUs <cit.>. However, this
approach deviates from the serverless paradigm and is costly for both cloud
users and providers. First, as provisioned functions, even when
idling, occupy GPUs for extended duration, users are obligated to pay for the
allocated GPUs regardless of actual usage <cit.>, leading to high
expenses that undermine the cost-saving benefits of serverless
inference. Second, it results in severe GPU underutilization,
considering that the majority of functions exhibit low to medium request
rates. Fig. <ref> (left) depicts the distribution of the
average request rates of functions in a one-week trace, revealing that
85% (97%) of functions were invoked only once per minute(second) on
average[For confidentiality reasons, we only depict the request rate
of CPU functions, which exhibits similar patterns as those running on GPUs
(see Fig. <ref>).]. These findings align with
observations from other production traces <cit.>.
Table <ref> provides a comparison of existing solutions that offer GPU support in serverless platforms and our system, FaaSwap.
Alibaba Cloud Function Compute (FC) <cit.>, as a prominent commercial
serverless platform, fails to meet latency SLOs and achieve resource
efficiency for GPU functions. Molecule <cit.> introduces a serverless platform that supports GPUs
and other hardware devices, while DGSF <cit.> enables serverless
functions to access GPUs in a remote cluster. However, both works primarily
target general-purpose workloads and suffer from GPU inefficiency.
INFless <cit.> presents a serverless system specifically
designed for model inference with GPU function support. Although it aims to
minimize inference latency, it still results in GPU idling and cost
inefficiency.
Request-level GPU sharing and its limitations.
To enhance resource efficiency, it is intuitive to implement finer-grained GPU sharing, whereby multiple functions can be consolidated onto a single GPU. Each function exclusively utilizes the GPU when activated and relinquishes it upon completion, allowing other functions to execute requests. This approach can potentially improve overall GPU utilization but has two limitations.
First, it still requires caching a large number of idle inference
functions, which ultimately saturates GPU memory and compromises resource
efficiency. Fig. <ref> (right) illustrates the expected
GPU load, measured as the proportion of the GPU busy period, under varying
per-function request rates when multiple functions run on a
single GPU and together fully occupy its 32 GB memory capacity. A
higher GPU load signifies improved utilization. However, the GPU load
consistently remains low due to the limited GPU memory. Even when the
request rate per function exceeds 1 r/s (the 97^th percentile in
Fig <ref> left), the GPU load still remains below 60%.
Second, packing multiple functions into a GPU can make it overloaded for a short period due to the bursty request patterns.
In a multi-GPU machine, this can inevitably result in hot spots and load imbalance across GPUs, which cannot meet request-level latency SLOs nor achieve high GPU utilization.
The impact of load imbalance is demonstrated in Fig. <ref>, with
details given in <ref>.
§ KEY INSIGHT AND OVERVIEW
We next discuss our solution to aforementioned limitations.
Key insight.
As described in <ref>, existing solutions have to keep inference functions alive in expensive, limited GPU memory, which leads not only to high function idling cost but also to GPU underutilization even under fine-grained resource sharing.
Therefore, to enable efficient request-level GPU sharing, a serverless platform must support fast inference execution and low function keep-alive cost, without incurring GPU memory footprint when idling.
We propose to leverage model swapping for low-latency, resource-efficient serverless inference.
We keep functions alive by caching the models in host memory, and swap models into GPUs only when requested.
Since host memory is cheaper and available in much larger amounts than GPU memory (e.g., a few TB vs. tens of GB), our solution not only avoids charging users for GPU resources during function idling (i.e., pay-per-GPU-use), but also significantly increases the number of functions each GPU can serve, thereby improving overall resource efficiency.
Moreover, model swapping can be efficiently performed through PCIe and incurs much less overhead than function cold starts, which can sustain low inference latency and be easier to satisfy request-level millisecond-scale SLOs.
We also swap models between GPUs via high-speed NVLinks, which further improves swapping performance and effectively mitigates load imbalance across GPUs.
Challenges and overview.
Following this insight, we present FaaSwap, a serverless platform that enables model swapping for low-latency inference and GPU resource efficiency.
Achieving this in FaaSwap poses both systematic and algorithmic challenges.
First, it is non-trivial to enable efficient model swapping and ensure isolation in serverless platforms.
In the serverless paradigm, users only deliver their inference functions, while the platform holds no knowledge of their models, e.g., model structure and parameters.
This requires the platform to automatically track memory footprint of each function, efficiently transmit models, and make it transparent to users.
That is, users are not required to write “swapping logic” in their inference code and should be unaware of swapping actions taken by the platform.
Since model swapping enables fine-grained GPU sharing across a large number of functions, the platform should also carefully protect in-memory models across functions and ensure isolation.
Second, it remains challenging to perform request scheduling that exploits model swapping to satisfy latency SLOs for all functions while achieving high resource utilization.
Unlike existing serverless platforms <cit.>, model swapping allows a function instance to run on various GPUs across requests, requiring the platform to carefully design scheduling and memory management policies around GPUs.
In addition, model swapping can incur bandwidth contention across functions during concurrent model swapping through PCIe, which impacts end-to-end latency and request-level SLOs.
Therefore, the platform needs to judiciously design request scheduling, model swapping, and GPU memory management policies to meet the properties in Table <ref>.
Therefore, FaaSwap should first enable efficient model swapping in serverless platforms, and then design effective algorithms to exploit it for SLO-aware inference and resource efficiency.
We will next discuss how FaaSwap addresses the systematic challenges in <ref>, and defer its algorithm design to <ref>.
§ SYSTEM DESIGN
In this section, we present the system design of FaaSwap.
§.§ Architecture overview
FaaSwap adopts an architecture of GPU pooling to enable efficient GPU sharing and effectively make model swapping transparent to users.
FaaSwap runs a GPU server at each worker node to manage all its local GPUs as a pool, such that functions can directly interact with the GPU server to access any GPU, facilitating resource sharing.
In addition, the GPU server holds the models of all local inference functions and can easily perform model swapping without functions being aware of it.
Fig. <ref> shows the architecture overview of FaaSwap.
FaaSwap has two components: the cluster manager and worker nodes.
The cluster manager takes charge of cluster-level tasks, including request routing, node allocation, and resource scaling.
Each worker node hosts a number of functions, runs a GPU server, and uses an intra-node router to control requests to local function instances.
The GPU server uses its model repo to manage models in host memory, and runs an executor for each GPU in the pool, which handles CUDA calls, swaps required models, and manages memory accordingly .
The server also has a controller that holds a global view of GPU memory and executor status and can determine how to schedule requests to GPUs (executors).
Once a request arrives at the target function, it interacts with the scheduled executor using a GPU client and remotes CUDA API calls during inference execution.
GPU server, router, and functions in the worker node run as containers.
Key to FaaSwap is designing and building an efficient GPU server that enables fast, resource-efficient inference and model swapping. This poses four challenges:
(1) how FaaSwap enables efficient GPU remoting (<ref>);
(2) how FaaSwap obtains knowledge of models and achieves low-latency model swapping (<ref>);
(3) how FaaSwap efficiently manages GPU memory (<ref>);
and (4) how FaaSwap ensures isolation and handles failures (<ref>).
We next elaborate the designs of FaaSwap that address these challenges.
§.§ GPU Remoting
GPU remoting is fundamental to pooling, and we describe how FaaSwap enables GPU remoting and addresses the challenges therein.
CUDA API redirection.
FaaSwap enables GPU remoting by redirecting CUDA API calls from function instances to GPU executors.
Each function instance runs a GPU client that can intercept CUDA APIs from ML frameworks, e.g., PyTorch inference programs, and remotes them to GPU executors for CUDA execution.
This allows a function instance to access various GPUs at a request granularity: after scheduling a request (2 in Fig. <ref>), the GPU client can redirect all its CUDA calls to the target executor (4); following requests to this function can be scheduled to other executors, and the client varies the target accordingly.
Hence we can effectively migrate load between GPUs with the support of model swapping (<ref>).
However, CUDA API redirection can incur significant synchronization overhead compared with native execution, which dramatically slows down model inference.
According to our measurement, it can need thousands of CUDA API calls in an inference execution, e.g., over 4k calls for ResNet-152, and thus a large number of synchronizations between a function and a GPU executor cause a substantial delay, e.g., hundreds of milliseconds (Table <ref>), violating request-level SLOs.
FaaSwap avoids such synchronization overhead via asynchronous API redirection.
Asynchronous redirection.
We observe that intermediate steps in an inference execution are typically performed asynchronously in GPU — the intermediate data get generated and consumed on GPU memory without requiring any data transfer to the host, until the execution is completed and the host receives an output result.
Therefore, a function can redirect intermediate CUDA calls to the GPU executor asynchronously without waiting for their results, and perform synchronizations only for the final output.
This approach does not affect the execution order and thus can ensure the correctness of model inference.
Following this insight, we perform asynchronous redirection for CUDA APIs that can be executed asynchronously.
In particular, we divide the set of CUDA APIs into two categories based on their semantics: synchronous, blocking APIs and asynchronous, non-blocking APIs.
The former requires the host to wait for their completion and use the outputs in subsequent steps, and thus we perform synchronizations for them by default.
The latter do not change the runtime state in the host, allowing asynchronous API redirection without blocking.
FaaSwap supports common CUDA runtime APIs and CUDA libraries, e.g., cuDNN, and we show the category of each API in Appendix <ref>.
With asynchronous API redirection, we can fuse multiple consecutive API calls into a single group and send them together.
Such group-level API redirection can further reduce communications, but it requires an effective grouping strategy.
Fusing too many calls into one group, e.g., all intermediate API calls, can greatly eliminate communications between the function and executor, which however needs functions to wait until all calls are issued.
On the other hand, having too few calls in each group, e.g., one call per group, incurs no extra delay but can need a large number of communications, i.e., frequent API redirections.
Therefore, we conduct extensive profiling and choose a “good” group size, which can balance the two factors for overall high redirection performance.
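For concreteness, the sketch below illustrates how a GPU client might buffer and fuse asynchronous calls before redirecting them as a group. The constant GROUP_SIZE, the executor interface, and the set of API names are illustrative assumptions for this sketch, not FaaSwap's actual implementation.

# Minimal sketch of group-level asynchronous API redirection on the GPU-client
# side. `executor.send_batch`, `executor.call_sync`, and GROUP_SIZE are
# illustrative names, not FaaSwap's real interfaces.
GROUP_SIZE = 32                       # chosen by offline profiling (see text)
ASYNC_APIS = {"launch_kernel", "memcpy_async", "stream_wait_event"}

class GPUClient:
    def __init__(self, executor):
        self.executor = executor      # RPC stub to the scheduled GPU executor
        self.pending = []             # fused group of buffered async calls

    def redirect(self, api_name, args):
        if api_name in ASYNC_APIS:
            # Non-blocking call: buffer it and return immediately.
            self.pending.append((api_name, args))
            if len(self.pending) >= GROUP_SIZE:
                self._flush()
            return None
        # Blocking call: flush buffered calls first to preserve execution
        # order, then wait for the result of the synchronous API.
        self._flush()
        return self.executor.call_sync(api_name, args)

    def _flush(self):
        if self.pending:
            self.executor.send_batch(self.pending)   # one round trip per group
            self.pending = []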
We show the performance advantages of asynchronous, group-level API redirection in Table <ref> (<ref>).
Compared with synchronous API redirection, FaaSwap can cut the inference latency of popular models by up to an order of magnitude due to significantly reduced communications.
Surprisingly, FaaSwap can even outperform native execution, i.e., using a local GPU without GPU remoting, for the evaluated CNN models.
The performance gain is owed to parallel execution, as many asynchronous CUDA APIs in these models require only CPU, such that API redirection in fact distributes CPU-side workloads across both functions and executors.
For Bert-qa, which requires no CPU-side CUDA APIs, FaaSwap still achieves performance comparable with native execution, indicating a negligible overhead of asynchronous API redirection.
In addition, FaaSwap can cache the results of a few CUDA APIs that have consistent outputs, and directly return them for subsequent calls without repeatedly querying the executor.
These APIs do not affect inference execution yet can be frequently accessed, and thus caching their results further reduces communications.
GPU runtime sharing.
GPU programs need a GPU runtime to manage GPU-side state, which can account for a considerable portion of the memory footprint, e.g., about 1 GB for the models in Table <ref>.
To improve memory efficiency, each GPU executor in FaaSwap shares a single GPU runtime across the functions it hosts.
This dramatically reduces GPU memory footprint and alleviates the need to create a new runtime after model swapping, which can take a few seconds.
FaaSwap also preloads all CUDA kernels on each GPU to avoid loading overhead.
We will discuss isolation in FaaSwap in <ref>.
§.§ Model Swapping
We next describe how FaaSwap manages and efficiently swaps models in its GPU server.
Model management.
FaaSwap can automatically track model knowledge during function cold starts.
We note that the model access pattern typically remains the same across requests, e.g., the access order of parameters, and thus can be easily obtained without needing user input.
In particular, when a function instance starts loading the model, a GPU executor receives the relevant CUDA API calls that contain its parameters, and tracks every GPU memory access during its first run.
FaaSwap leverages the access order of parameters to guide future model swapping, and also keeps a model copy in host memory.
Both model copies and access patterns are maintained in the model repo (Fig. <ref>) until associated function instances are terminated.
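A minimal sketch of this model-knowledge tracking is shown below; the ModelRecord structure and its method names are hypothetical, and the real GPU server operates at the CUDA API level rather than on Python objects.

# Sketch of model-knowledge tracking during a function's cold start: the
# executor records every parameter upload it observes on the first run, and
# the resulting access order later drives swap-in.
class ModelRecord:
    def __init__(self):
        self.params = []          # [(block_id, size_bytes)] in access order
        self.host_copy = {}       # block_id -> bytes retained in host memory

    def on_first_run_upload(self, block_id, data):
        self.params.append((block_id, len(data)))
        self.host_copy[block_id] = bytes(data)   # copy kept for future swaps

    def swap_order(self):
        # Parameters are swapped back to a GPU in the order they were first
        # accessed, which enables the pipelined loading described in Sec. 4.3.
        return list(self.params)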
Model swapping.
With model knowledge, FaaSwap can perform model swapping at the request level.
The GPU server schedules a request to an executor (2 in Fig. <ref>), which can trigger model swapping if the requested model is not loaded on the target GPU.
Together with CUDA API redirection (<ref>), FaaSwap can easily execute a model on various GPUs across requests.
Fig. <ref> gives an example of FaaSwap's model swapping.
Host-to-GPU model swapping is performed through PCIe, and requires a model to be pinned at first to enable DMA transfers, which triggers additional memory copy.
FaaSwap therefore shares a pinned memory pool across models to reduce memory copy with only a small pinned memory footprint.
FaaSwap also supports GPU-to-GPU model swapping when a GPU is too busy to serve its models.
It can swap a model to other idle GPUs via fast NVLinks, and schedule its requests to the corresponding GPU executors, which ensures fast function switching across GPUs for load balancing and achieves low request latency.
FaaSwap performs model eviction when GPU memory is insufficient.
Like cache eviction, we simply invalidate the GPU memory region of a model without needing to swap it back to host, which holds a copy of each model.
This avoids the swapping overhead with only an insignificant cost.
We defer detailed model swapping and eviction policies, e.g., when and where to swap models, to <ref>.
However, model swapping can pose two challenges.
First, it requires carefully managing and translating GPU memory addresses.
As functions are agnostic to the underlying model swapping, model addresses of their CUDA calls remain the same across various requests, even when they run on different GPUs.
Therefore, FaaSwap automatically tracks memory addresses for model swapping and updates the relevant memory accesses in each CUDA API call accordingly, which is transparent to functions.
We defer the details of memory management to <ref>.
Second, model swapping can incur extra delay to end-to-end inference latency.
To improve overall performance, FaaSwap exploits model pipeline to hide the overhead of model swapping, which we describe below.
Optimize model swapping via pipeline.
Note that model inference executes only a forward pass and is generally performed layer by layer.
This allows us to overlap the transmission of next layers and the computation of previous layers in GPUs, thus enabling pipeline execution <cit.>.
We exploit two characteristics of to design its pipeline execution.
First, FaaSwap's GPU server executes a model and keeps track of its parameters at the CUDA API level (<ref>), by which we perform CUDA-level model pipeline.
In particular, it loads parameters of the requested model following their access order obtained at the first run;
during inference execution, it checks if model parameters required by each CUDA API are loaded in GPU; otherwise it waits until they become ready.
In this way, the executor concurrently swaps model parameters and executes CUDA APIs on those loaded to reduce overall latency.
Such pipeline execution can apply to both host-to-GPU and GPU-to-GPU model swapping.
Second, CUDA-level model pipeline can require frequent synchronizations between host and GPUs to ensure each CUDA API call get issued only when its data are loaded.
FaaSwap reduces such synchronization overhead by group-level model pipeline.
It swaps multiple consecutive parameters as a group, and performs synchronization once an entire group is loaded into a GPU.
Determining how parameters are grouped poses a tradeoff between the swapping performance and pipeline efficiency: grouping more parameters incurs less synchronizations with reduced swapping overhead, but leads to less overlap between model transmission and computation.
FaaSwap's model pipeline needs a “good” group size that can balance the tradeoff and be generally applied to various models.
We notice that grouping too many parameters can have little improvement on swapping performance, as synchronizations therein cause only negligible overhead.
Therefore, we profile the performance of transmitting various-size data, and choose a knee point as a desired group size, which can achieve good swapping performance without impacting pipeline efficiency too much.
Such group size can be directly applied to different models,
and only depends on hardware configurations, e.g., PCIe bandwidth.
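The sketch below illustrates group-level pipelined loading in PyTorch-style code, under the assumption that host tensors are pinned and that the caller provides a dedicated copy stream; it is an illustration of the idea, not FaaSwap's internal code, which works at the CUDA API level.

# Illustrative sketch of group-level pipelined swapping: parameter groups are
# copied on a side stream while computation proceeds on groups already loaded.
import torch

def pipelined_inference(groups, run_layer_group, copy_stream):
    """groups: list of (pinned_host_tensors, gpu_buffers) in access order;
    run_layer_group(i, gpu_tensors, prev_out) runs the i-th layer group."""
    events = []
    for host_ts, gpu_ts in groups:
        ev = torch.cuda.Event()
        with torch.cuda.stream(copy_stream):
            for h, g in zip(host_ts, gpu_ts):
                g.copy_(h, non_blocking=True)        # async H2D copy over PCIe
            ev.record(copy_stream)
        events.append(ev)

    out = None
    for i, (_, gpu_ts) in enumerate(groups):
        # Synchronize once per group: compute on loaded groups overlaps with
        # the copies of later groups still in flight on copy_stream.
        torch.cuda.current_stream().wait_event(events[i])
        out = run_layer_group(i, gpu_ts, out)
    return out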
We show the performance gain of FaaSwap's pipeline execution in Table <ref> (<ref>).
Compared with separate model swapping and inference execution, i.e., non-pipeline, FaaSwap's pipeline execution achieves better end-to-end performance, reducing latency by about 50%.
Model pipeline through high-speed NVLink further improves performance due to reduced swapping overhead, which can be comparable to inference execution alone (“Remote Async.”).
In addition, FaaSwap overlaps pageable-to-pinned memory copy and host-to-GPU model transmission (Fig. <ref>) to further optimize swapping.
This enables the pipeline to span across both inference execution and the two stages of model swapping.
Since memory copy (i.e., the first stage) can be faster than host-to-GPU transmission through PCIe (i.e., the second stage), it incurs only negligible overhead during model pipeline according to our measurement.
§.§ Memory Management
GPUs have a memory management system that provides similar functionalities with CPUs, e.g., memory allocation, and also supports unified memory to transparently handle data movement between host and GPU memory.
However, native GPU memory management is designed for general-purpose workloads and cannot be directly applied to 's model swapping.
There are mainly two challenges.
First, model swapping in FaaSwap not only requires data transmission, but also involves CUDA-level pipeline executions that can run on multiple GPUs across requests (<ref>).
This requires FaaSwap to hide the memory details across various GPUs and to perform fine-grained synchronizations, which is not natively supported by GPUs.
Second, native GPU memory allocation incurs considerable overhead, while FaaSwap may need frequent model loading and eviction, significantly degrading overall swapping and inference performance.
To address the above challenges, we design a GPU memory management system for FaaSwap's model swapping.
Memory address management.
When model swapping occurs, either host-to-GPU or GPU-to-GPU, the actual memory addresses of model parameters can differ from the original ones, which requires FaaSwap to carefully manage the memory layout and automatically translate each memory access in CUDA execution.
FaaSwap exploits the memory layout of ML frameworks to facilitate address management.
ML frameworks, such as PyTorch and TensorFlow, typically organize data into blocks for ease-of-management, where each GPU memory block can contain a number of parameters.
This hence allows FaaSwap to perform memory mapping at the block level.
In particular, FaaSwap tracks memory blocks for each function and maintains a mapping to their actual physical addresses after model swapping.
The internal data layout in each block, e.g., the offsets of its parameters, remains the same, such that FaaSwap can easily obtain the physical address of a particular parameter from its containing block address and the corresponding offset.
Therefore, FaaSwap need not maintain much metadata for individual data pointers, effectively handling address translation without high management overhead.
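The following sketch captures this block-level translation idea; the structure and method names are illustrative, and a production implementation would use an interval index rather than a linear scan.

# Sketch of block-level address translation after a swap: a pointer seen by
# the function maps to (block, offset); only the block's physical base
# changes when the model moves, so one update per block suffices.
class AddressTable:
    def __init__(self):
        self.block_range = {}   # block_id -> (original start addr, size)
        self.block_base = {}    # block_id -> current physical base after swaps

    def register_block(self, block_id, orig_start, size, phys_base):
        self.block_range[block_id] = (orig_start, size)
        self.block_base[block_id] = phys_base

    def on_swap(self, block_id, new_phys_base):
        # One update per block when the model moves; per-parameter offsets
        # inside the block are unchanged.
        self.block_base[block_id] = new_phys_base

    def translate(self, ptr):
        # Linear scan for clarity; an interval tree would make this O(log n).
        for block_id, (start, size) in self.block_range.items():
            if start <= ptr < start + size:
                return self.block_base[block_id] + (ptr - start)
        raise KeyError("pointer not in any tracked block")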
Memory allocation and block management.
FaaSwap needs to allocate and free memory blocks during model swapping and evictions.
Using native GPU memory management for block allocation can cause high overhead, e.g., tens to hundreds of milliseconds for a single model (Fig. <ref>), which impairs overall swapping performance.
FaaSwap hence pre-allocates all GPU memory and internally manages all blocks to avoid using native APIs for block allocation.
However, this poses a challenge for block management: the size of blocks and their popularity can vary across models, requiring FaaSwap to efficiently manage blocks and avoid memory fragmentation; otherwise it needs to frequently release existing blocks for reallocation, leading to high overhead.
Buddy memory allocation <cit.> is a classic approach to reduce memory fragments, which divides and merges idle blocks based on power-of-two multiples.
We revise this approach by exploiting two characteristics of FaaSwap.
First, we leverage block usage patterns of ML frameworks for reduced memory fragments.
For example, PyTorch by default uses fixed-size blocks to host small- and moderate-size data, and thus these block sizes have high popularity across various models, e.g., 20 MB.
FaaSwap therefore divides blocks into two categories based on their sizes, i.e., regular fixed-size blocks and irregular ones, and manages them separately.
In particular, FaaSwap divides all GPU memory into a number of memory partitions at bootstrap.
Memory partitions are created via the native CUDA allocation API and have the same size, each hosting either category of blocks.
FaaSwap performs Buddy-based management in each memory partition to allocate and reclaim blocks.
For partitions hosting regular blocks, FaaSwap adopts a revised policy, e.g., directly dividing memory into fixed-size blocks rather than using native Buddy allocation, which further reduces fragmentation.
Once all blocks of a partition are reclaimed, it becomes idle and can be later used by any block category.
Second, in FaaSwap all blocks of a model are expected to be accessed entirely during model swapping and execution, and also to be reclaimed together after model eviction.
Motivated by this observation, we can pack blocks from the same model as tightly as possible, e.g., collocating them in a single memory partition, such that model eviction can easily free entire partitions and make them available for future block allocation.
FaaSwap can also periodically consolidate blocks to reduce fragmentation.
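A simplified sketch of this partitioned block management is given below; the partition size, block size, and method names are assumptions made for illustration, and the real system manages raw GPU memory rather than Python objects.

# Compact sketch of partitioned block management: GPU memory is pre-carved
# into equal-size partitions; each partition serves either fixed-size
# "regular" blocks or variable-size "irregular" blocks.
PARTITION_SIZE = 512 << 20        # e.g., 512 MB partitions, allocated once at startup
REGULAR_BLOCK  = 20 << 20         # a common fixed block size used by the ML framework

class Partition:
    def __init__(self):
        self.kind = None                          # "regular" or "irregular"
        self.free = PARTITION_SIZE                # bytes still unassigned
        self.blocks = []                          # (model_id, size) carved from here

class BlockManager:
    def __init__(self, num_partitions):
        self.idle = [Partition() for _ in range(num_partitions)]
        self.active = {"regular": [], "irregular": []}

    def alloc(self, size, model_id):
        kind = "regular" if size == REGULAR_BLOCK else "irregular"
        # Prefer a partition already holding blocks of this model, so that
        # eviction can free whole partitions at once (see text).
        for p in self.active[kind]:
            if p.free >= size and any(m == model_id for m, _ in p.blocks):
                return self._carve(p, size, model_id)
        for p in self.active[kind]:
            if p.free >= size:
                return self._carve(p, size, model_id)
        if not self.idle:
            raise MemoryError("no idle partition; trigger model eviction (Sec. 5.4)")
        p = self.idle.pop()                       # repurpose an idle partition
        p.kind = kind
        self.active[kind].append(p)
        return self._carve(p, size, model_id)

    def _carve(self, p, size, model_id):
        p.free -= size
        p.blocks.append((model_id, size))
        return p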
§.§ Isolation and Fault Handling
We next discuss how FaaSwap handles resource and fault isolation across function instances.
Resource isolation
FaaSwap provides container-level isolation for CPU and memory resources [FaaSwap makes no assumption about sandboxes and can also support microVMs <cit.>.], similar to existing serverless platforms.
For GPUs, FaaSwap performs software-based isolation at its GPU server, which ensures that GPU compute resources, e.g., SMs, are exclusively allocated to function instances at a request granularity.
It also isolates GPU memory by prohibiting functions from accessing the memory regions of others, which can be easily achieved via GPU memory virtualization (<ref>).
Fault handling and isolation
FaaSwap sustains various component failures.
In case function instances fail, FaaSwap simply restarts them to resume execution, which has no side effect due to the stateless nature of model inference.
For executor failures at the GPU server, FaaSwap migrates affected models to other active executors via swapping, and restarts the failed ones.
The GPU server can also persist runtime states in local storage, e.g., models and metadata, such as to allow fast recovery from the failure of an entire server.
Therefore, FaaSwap can effectively handle faults occurring in function execution, and isolate them across functions.
At the cluster level, FaaSwap persists the metadata of individual nodes in a database, and thus the cluster manager can easily retain these states and recover from failures.
It also keeps periodic health checks with the router of each worker node, and handles node failures by launching a new node and migrating all relevant functions.
§ POLICY DESIGN
In this section, we present how FaaSwap achieves the desired properties, i.e., SLO-awareness and resource efficiency (Table <ref>), with its policy designs.
We start with the design overview, followed by individual policies.
§.§ Design Overview
Objectives.
The overall objective of FaaSwap is to meet latency SLOs for all inference functions while minimizing resource cost.
We define a function as complying with SLOs if its tail request latency is less than a user-specified deadline, and measure resource cost by the number of worker nodes.
Key to achieving this goal is to maximize the number of SLO-compliant functions at each worker, such that FaaSwap can efficiently exploit per-worker GPU resources to host as many functions as possible, which in turn reduces the total number of workers required.
Challenges.
Maximizing the number of SLO-compliant functions at each node poses three challenges.
First, model swapping allows FaaSwap to host many more functions on a worker node, whose dynamic request patterns can cause short load bursts, overload its GPUs, and result in request queueing.
This requires FaaSwap to carefully determine how requests should be prioritized in the queue so as to meet latency SLOs for as many functions as possible.
Second, FaaSwap schedules requests to GPUs when they arrive at the GPU server (2 in Figure <ref>), which accordingly determines where to swap the target models.
However, the performance of model swapping can be impaired and hard to predict due to bandwidth contention, which makes it challenging for request scheduling to ensure low-latency inference.
For example, Table <ref> shows the latency of model pipeline when concurrently swapping other models through PCIe, which leads to diminished performance especially for large models.
Third, FaaSwap needs to determine how to evict models and which models should be kept in GPUs so as to reduce model swapping and improve overall inference performance.
Achieving so in model eviction is non-trivial due to dynamic, hard-to-predict request arrival patterns.
We note that it is fundamentally impossible to jointly find optimal solutions to these challenges, as both future request arrival patterns and swapping performance are unpredictable in our setting.
Even with perfect future knowledge and predictable performance, the problem is still NP-hard and does not apply to online inference <cit.>.
We therefore address the challenges separately with heuristic solutions, which we describe below.
We summarize how FaaSwap leverages these policies to achieve the overall objective in <ref>.
§.§ Request Queueing
At each worker, the intra-node router queues and dispatches requests (Figure <ref>), aiming to satisfy latency SLOs for as many functions as possible.
Intuitively, FaaSwap can prioritize functions according to their possibility of complying with SLOs, such that requests to functions with higher possibilities get executed first for reduced queueing delay.
Following this insight, we can divide all functions at a node into two sets based on whether they can potentially satisfy SLOs, and maintain two queues for their requests, i.e., high- and low-priority.
When there are available GPUs, FaaSwap first executes requests from the high-priority queue, and only dispatches low-priority requests if the former is empty.
Functions can be moved between the two sets at runtime according to their possibility of complying with SLOs.
However, enabling this requires addressing two key challenges: (1) how can we quantify the SLO-compliance possibility of a function, and (2) how should we partition functions into high- and low-priority sets.
Metric for function prioritization.
A desired metric should capture “how much effort” is required to satisfy SLOs; functions requiring less effort are easier and thus have more chance to achieve it.
We therefore propose required request count (RRC) as the metric, which measures the expected number of future requests that must be served within deadlines in order to meet SLOs.
Let n be the current number of requests for a function, and m be the number of requests served within deadlines out of total n requests.
The RRC of the function can be defined as (pn - m)/(1 - p), where p is the tail percentile specified in the SLOs, e.g., 98%.
This is simply derived from the equation (m + RRC)/(n + RRC) = p.
RRCs of various functions can be normalized by average request latency.
Functions with negative RRCs have already satisfied their SLOs so far, while the larger a function's RRC is, the lower its chance of meeting its SLO.
This allows us to prioritize requests to functions with smaller RRCs.
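As a worked example, the RRC computation and two sample values are shown below (p = 98%):

# Worked example of the required request count (RRC) metric: with SLO tail
# percentile p, n requests seen so far, and m of them served within the
# deadline, RRC is the number of additional on-time requests needed so that
# (m + RRC) / (n + RRC) = p.
def rrc(n, m, p=0.98):
    return (p * n - m) / (1.0 - p)

# Example: a function has served 100 requests, 97 of them within the deadline.
# rrc(100, 97) -> (0.98*100 - 97) / 0.02 = 50 more on-time requests needed.
# With m = 99, rrc(100, 99) = -50: the function already meets its SLO.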
Divide functions into two priority sets.
With RRCs, we can divide functions into high- and low-priority sets and determine request execution order in each function set.
Intuitively, we can put functions with small RRCs (i.e., both small positive and negative) in the high-priority set, and the rest in the low-priority set, which sustains existing SLO-compliant functions and increases their number whenever possible.
Determining the RRC boundary between the two sets can be challenging: having too many (few) high-priority functions can be too aggressive (conservative) to enable more SLO-compliant functions.
In FaaSwap we use a threshold α∈ [0, 1] to indicate the boundary and determine how many functions should be prioritized:
we can prioritize more functions by increasing α, and aggressively put all functions in the high-priority set when α is 1.
Consider a node with N functions sorted by RRCs, and let RRC_i be the RRC of function i.
We put the first k functions in the high-priority set, where k is the largest integer such that ∑_j=1^kmax (RRC_j, 0) ≤α·∑_i=1^Nmax (RRC_i, 0).
FaaSwap can automatically configure α at runtime based on the overall load and function SLOs.
When there is a short load surge and the number of SLO-compliant functions decreases, FaaSwap becomes conservative with a small α; otherwise α can increase to prioritize more functions.
We defer the detailed algorithm of α auto-configuration to the Appendix <ref>.
FaaSwap also periodically adjusts the two sets of functions according to their up-to-date RRCs and α.
The request execution order in each priority queue can also be determined with function RRCs.
In the high-priority queue, requests are prioritized in reverse order of RRCs, such that functions with small positive RRCs are favored and can easily become SLO-compliant.
In contrast, requests in low-priority queue should follow the order of RRCs, as the smaller a low-priority function's RRC is, the more chance it can have to be prioritized and satisfy SLOs.
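The set-partitioning rule can be summarized by the following sketch, which sorts functions by RRC and takes the largest prefix whose positive RRCs fit within the α budget; the function representation is illustrative.

# Sketch of splitting functions into high- and low-priority sets with
# threshold alpha, following the prefix rule described above.
def split_by_priority(functions, alpha):
    """functions: iterable of (func_id, rrc). Returns (high, low) lists."""
    ordered = sorted(functions, key=lambda f: f[1])            # ascending RRC
    budget = alpha * sum(max(r, 0.0) for _, r in ordered)
    acc = 0.0
    for i, (_, r) in enumerate(ordered):
        if acc + max(r, 0.0) > budget:
            return ordered[:i], ordered[i:]                    # prefix is high-priority
        acc += max(r, 0.0)
    return ordered, []                                         # alpha = 1: all high-priority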
§.§ Scheduling and Model Swapping
We next describe how FaaSwap schedules requests to GPUs and swaps models during request execution.
Once FaaSwap's GPU server receives a request (2 in Figure <ref>), its controller determines which GPU (executor) should load the target model and process the request.
Each GPU executes a request at one time to ensure the resource isolation (<ref>).
The objective of request scheduling is to minimize per-request inference latency, which however can be challenging due to accompanying model swapping.
Bandwidth contention in model swapping.
While the latency of model execution is often stable <cit.>, FaaSwap's model swapping can incur unpredictable overhead due to PCIe bandwidth contention across GPUs <cit.>.
Fig. <ref> shows the topology of a worker node, where each pair of GPUs shares a PCIe switch and GPUs are inter-connected via NVLinks with various bandwidths, e.g., faster NVLinks delivering 2× higher throughput than slower ones.
We measure the performance of concurrent model pipeline execution on a pair of GPUs, as shown in Table <ref>.
The performance slowdown caused by bandwidth contention can vary in models, and is generally more significant for large models that require more data transmission, e.g., Bert-qa.
Having said that, we observe that swapping with light models can lead to reduced contention and lower latency compared with bandwidth-intensive models, e.g., ResNet-152 with DenseNet-169/Bert-qa.
Therefore, we propose to leverage this characteristic to reduce interference in model swapping for improved overall performance.
Interference-aware scheduling.
For each request, FaaSwap aims to minimize its interference with concurrent workloads, which in turn reduces inference latency.
We propose two designs to achieve this.
First, FaaSwap avoids concurrent swapping of bandwidth-intensive models whenever possible.
It divides models into two categories based on their bandwidth intensiveness during swapping, i.e., heavy and light models, which can be easily done via simple model profiling: if model pipeline significantly slows down inference execution, data transmission is the bottleneck and thus the model is heavy (see Table <ref>).
We do not need accurate swapping performance under concurrency, which in fact is hard to obtain.
Second, FaaSwap exploits direct NVLink connections between GPUs to reduce PCIe contention.
FaaSwap prioritizes GPU-to-GPU over host-to-GPU model swapping so as to enable faster model transmission and avoid interference with concurrent PCIe traffic.
Algorithm <ref> shows FaaSwap's scheduling and swapping policy.
For a request, FaaSwap first checks whether the target model is loaded on an available GPU, and if so, directly executes it without swapping overhead (line 8).
If the model is hosted by busy GPUs, FaaSwap schedules the request to perform GPU-to-GPU swapping, where the source and target GPUs should have the fastest NVLink connection (line 11).
Otherwise, FaaSwap resorts to host-to-GPU swapping and prioritizes target GPUs whose neighbors are idle or running light models to reduce PCIe contention (line 18).
In a nutshell, FaaSwap minimizes the interference and overhead of model swapping for each request, and thus provides low inference latency.
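The per-request scheduling decision can be summarized by the following sketch, which mirrors the three cases above; the GPU state objects and helper methods (is_idle, nvlink_bw, pcie_neighbor, running_heavy_model) are illustrative assumptions rather than the actual Algorithm 1 implementation.

# Sketch of interference-aware scheduling for one request.
def schedule(request, gpus):
    model = request.model
    idle = [g for g in gpus if g.is_idle()]
    if not idle:
        return None, "queue"                      # no free GPU; request waits

    # Case 1: model already resident on an idle GPU -> execute without swapping.
    for g in idle:
        if model in g.loaded:
            return g, "no-swap"

    # Case 2: model resident only on busy GPUs -> GPU-to-GPU swap over the
    # fastest NVLink pair.
    sources = [g for g in gpus if model in g.loaded]
    if sources:
        src, dst = max(((s, d) for s in sources for d in idle),
                       key=lambda pair: pair[0].nvlink_bw(pair[1]))
        return dst, ("swap-nvlink", src)

    # Case 3: swap from host over PCIe; prefer a target whose PCIe-switch
    # neighbor is idle or running only light models, to limit contention.
    def contention(g):
        n = g.pcie_neighbor()
        return 0 if n.is_idle() or not n.running_heavy_model() else 1
    return min(idle, key=contention), "swap-pcie"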
§.§ Model Eviction
We next describe FaaSwap's model eviction policy.
FaaSwap can cache models in GPUs for future requests, and determines how to evict inactive models from a GPU when its memory is fully occupied and cannot load other activated models.
Unlike traditional cache eviction, model eviction in FaaSwap aims to improve overall inference performance by reducing swapping overhead.
Swapping performance directly impacts end-to-end latency, more so than cache hit rate, and thus model eviction should be made aware of the swapping overhead of various models, i.e., heavy or light.
Similar to interference-aware scheduling (<ref>), we perform model eviction to minimize swapping overhead for future requests.
We notice that swapping light models leads to negligible overhead in end-to-end performance, while heavy models can cause considerably long latency under pipeline execution due to model transmission and the accompanying bandwidth contention (Table <ref> and <ref>).
Therefore, we tend to evict models that have almost no impact on performance when swapped, so as to cache more heavy models in GPUs for reduced host-to-GPU data transmission and interference.
Following this insight, we divide models into two priority sets: heavy models hosted by only a single GPU should be prioritized, and the rest are low-priority and can be evicted earlier, including light models and heavy ones with multiple copies in various GPUs.
As a result, FaaSwap only needs to frequently perform host-to-GPU and cross-GPU swapping for light and heavy models respectively, both of which lead to no PCIe bandwidth contention and thus achieve low swapping overhead.
In each priority set, we adopt a common Least-Recently-Used (LRU) policy to determine the eviction order for its models.
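A compact sketch of this two-tier eviction choice is shown below; the bookkeeping structures and attribute names are illustrative.

# Sketch of the two-tier eviction policy: light models and duplicated heavy
# models are evicted first (LRU within the tier); a heavy model's last GPU
# copy is evicted only as a last resort.
def pick_victim(gpu, copies_in_cluster):
    """gpu.resident: list of (model, last_used_ts) for inactive models on this GPU."""
    low, high = [], []
    for model, last_used in gpu.resident:
        last_copy = copies_in_cluster[model] == 1
        if model.is_heavy and last_copy:
            high.append((last_used, model))       # keep sole heavy copies longest
        else:
            low.append((last_used, model))        # light, or duplicated heavy
    pool = low if low else high
    # LRU within the chosen tier; eviction only invalidates the GPU copy,
    # since the host always retains a copy (Sec. 4.3).
    return min(pool, key=lambda t: t[0])[1]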
§.§ Put All Together
Finally, we describe how FaaSwap achieves the desired properties at the cluster level, i.e., SLO-awareness and resource efficiency, with the above policies.
FaaSwap's cluster manager monitors node-level request load and function SLOs, and can eliminate potential SLO violations through load migration and node scaling.
Since FaaSwap aims to maximize the number of SLO-compliant functions at each node (<ref>), it can by design discard popular functions (i.e., those with high request load) when a node is overloaded.
Therefore, FaaSwap can migrate these functions onto other nodes with available resources and provision new nodes when needed, which effectively meets SLOs for all functions at low resource cost.
§ IMPLEMENTATION
We have implemented FaaSwap atop FC.
FaaSwap's GPU server and GPU client are implemented in 4k and 1.5k lines of C++ code, respectively.
The intra-node router and cluster manager are implemented directly atop the relevant components in FC.
We also implement a demo router in 530 lines of Python code to provide basic functionalities for single-node tests.
FaaSwap's cluster manager can maintain and track a resource pool of GPU nodes to ensure fast node allocation and scaling, which is a common practice in FC.
We provide a container image as a function template based on PyTorch, where the original CUDA libraries are replaced by GPU clients to enable GPU remoting.
This requires no modification to the PyTorch framework.
§ EVALUATION
In this section, we evaluate FaaSwap using production traces from FC. Our evaluation answers the following questions:
* Can FaaSwap enable efficient GPU remoting and model swapping (Table <ref>)?
* How much benefit does FaaSwap's model swapping bring in terms of overall performance (<ref>)?
* Can FaaSwap maximize the number of SLO-compliant functions at a node, and how do its individual design policies contribute to the overall performance gain (<ref>)?
* Can FaaSwap satisfy per-function latency SLOs and improve resource utilization at the cluster level (<ref>)?
Settings.
We deploy FaaSwap in a cluster following production environments.
FaaSwap runs on a cluster with up to 6 workers.
Each worker node has 48 vCPU cores, 384 GB memory, and 4 NVIDIA V100 GPUs, each with 32 GB memory.
We use 8 popular ML models in evaluation, as shown in Table <ref>, and distribute them across inference functions in a round-robin manner.
Table <ref> also shows the performance of GPU remoting and model pipeline, which we discuss in <ref> and <ref>, respectively.
We warm up all functions before running test workloads to exclude cold starts.
Metrics
We focus on the ratio of functions meeting SLOs and GPU load in evaluation.
For a function, it complies with SLOs only when its tail request latency is less than a deadline.
By default, we use 98^th tail latency, and set the deadlines for CV models and Bert-qa to 80 ms and 200 ms, respectively.
The load is measured by the proportion of duration when the GPU processes inference requests.
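For clarity, the per-function SLO-compliance check used in the evaluation can be expressed as the following sketch; helper names are illustrative.

# Sketch of the SLO-compliance check: a function complies if its 98th-percentile
# request latency is below its deadline.
import numpy as np

def slo_compliant(latencies_ms, deadline_ms, percentile=98):
    return np.percentile(latencies_ms, percentile) <= deadline_ms

def slo_ratio(per_function_latencies, deadlines):
    """per_function_latencies: dict fn -> list of ms; deadlines: dict fn -> ms."""
    ok = sum(slo_compliant(lat, deadlines[fn])
             for fn, lat in per_function_latencies.items())
    return ok / len(per_function_latencies)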
§.§ FaaSwap's Model Swapping
We first discuss the benefits of FaaSwap's model swapping.
Host-to-GPU model swapping.
FaaSwap performs host-to-GPU model swapping to reduce GPU memory footprint for high resource efficiency.
Fig. <ref> compares FaaSwap with native execution (Native), which simply keeps functions in GPUs without swapping and shares a GPU across requests.
Native can only host a small number of functions due to limited GPU memory, and thus leads to poor aggregate throughput under low or medium request rates.
In contrast, FaaSwap can enable many more functions with sufficient host memory, e.g., over 10× under 10 r/m, which significantly improves throughput and leads to high GPU utilization.
Only under a high request rate, e.g., 120 r/m per function, do FaaSwap and Native have similar throughput.
Note that 97% of functions are requested less than once per second in the production cluster (Fig. <ref> left).
Hence keeping models in host memory can dramatically improve GPU memory efficiency and effectively support more functions per GPU.
In addition, FaaSwap achieves performance comparable with Native due to efficient model swapping.
For example, the tail latency under 10 r/m only increases to about 50 ms with FaaSwap's model swapping, which can effectively meet latency SLOs.
In FaaSwap, increasing the request rate can reduce model swapping as fewer models are involved, and thus lead to lower latency.
GPU-to-GPU model swapping.
Fig. <ref> compares FaaSwap with Native on a 4-GPU worker.
In Native, functions are bound to specific GPUs, which can easily cause GPU hot spots.
In contrast, FaaSwap enables GPU-to-GPU swapping and can effectively migrate models for load balancing.
Fig. <ref> (left) shows per-GPU load normalized to the maximum, where FaaSwap leads to less variance across the 4 GPUs compared with Native.
Moreover, load imbalance in Native can greatly impair the performance of model inference due to severe request queueing.
Fig. <ref> (right) shows the 98^th tail latency of requests executed on each GPU, where Native leads to extremely long tail latency, e.g., over seconds.
Unlike Native, FaaSwap can distribute load across GPUs via efficient GPU-to-GPU model swapping, which consistently achieves fast model inference and cuts the tail latency to around 35 ms for all GPUs.
§.§ FaaSwap at a Node
We next evaluate the performance of FaaSwap at a single node.
We evaluate FaaSwap using real-world workloads sampled from production traces (Fig. <ref> left).
The function request rates range from 5 to 30 r/m, which we believe is representative of GPU inference functions, as discussed in <ref>.
FaaSwap with its policies.
To understand the benefits of FaaSwap's policies, we use four baselines that disable individual policies in FaaSwap.
(1) FaaSwap-FIFO uses a FIFO policy in request queueing compared with our SLO-aware policy (<ref>).
(2) FaaSwap-Random disables interference-aware scheduling (<ref>); it randomly schedules a request to an idle GPU if the target model is not loaded, and triggers model swapping through PCIe.
(3) FaaSwap-LRU directly adopts an LRU policy in model eviction rather than prioritizing models according to swapping overhead (<ref>).
(4) FaaSwap-Block disables the block management policy (<ref>); it simply caches released memory blocks of various sizes in a single pool.
When model loading requires a new block, it directly returns a cached one from the pool if the requested size can be satisfied; otherwise it frees existing idle blocks until the required memory space is available.
Fig. <ref> shows the ratio of SLO-compliant functions using FaaSwap and the four baselines.
Compared with FaaSwap, all the baselines suffer from various limitations and fail to effectively support a large number of functions.
In particular, FaaSwap-FIFO is oblivious to SLOs and unable to properly prioritize functions in request queueing, which leads to serious SLO violations under many functions; for example, over 50% of the 560 functions cannot satisfy their SLOs.
FaaSwap-Block cannot reuse various-size blocks and forces frequent memory allocation via native CUDA APIs, which incurs long delays in block allocation and significantly harms overall performance, as we will show in Fig. <ref>.
FaaSwap-LRU can easily evict heavy models and cause PCIe bandwidth contention during host-to-GPU swapping, whose high overhead also degrades inference performance.
Therefore, the function ratios quickly drop to 0 for both FaaSwap-Block and FaaSwap-LRU.
FaaSwap-Random leads to the worst performance due to its inefficient scheduling and swapping policy, which does not exploit NVLink across GPUs and can cause more serious PCIe bandwidth contention than FaaSwap-LRU.
Consequently, it easily violates SLOs even under 320 functions.
Compared with the baselines, FaaSwap can successfully support over 80% of the 560 functions, effectively maximizing the number of SLO-compliant functions.
Policy behaviors.
Fig. <ref> further shows the behaviors of 's block management and model eviction policies.
In particular, we compare the latency of per-model block allocation under -Block and (Fig. <ref> left).
incurs only negligible overhead due to efficient block sharing (<ref>), which requires no native GPU memory allocation.
In contrast, -Block can easily trigger many CUDA allocation calls when swapping models and thus cause a long delay, e.g., up to hundreds of milliseconds.
Fig. <ref> (right) breaks down the proportion of three swapping cases under -LRU and , which include non-swapping and host-to-GPU (Swap PCIe) and GPU-to-GPU (Swap NVLink) swapping.
's eviction policy tends to keep heavy models in GPUs so as to reduce overall swapping overhead.
For example, over 90% of requests to heavy models incur no host-to-GPU swapping, which effectively avoids PCIe bandwidth contention.
While swapping through PCIe is required by most requests to light models, this leads to negligible impact on overall performance (Table <ref>).
On the other hand, -LRU is oblivious to model knowledge and thus both light and heavy models observe similar patterns.
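The contrast with LRU can be illustrated with a small sketch of swapping-cost-aware eviction: instead of evicting the least recently used model, the victim is the model that is cheapest to re-load over PCIe, so heavy models stay resident. The cost model, bandwidth constant, and names below are our own assumptions for illustration.

```python
# Sketch of swapping-cost-aware eviction: when GPU memory is needed, evict
# the model with the lowest estimated re-swap cost (roughly, its size divided
# by PCIe bandwidth) rather than simply the least recently used one.

PCIE_GBPS = 16.0  # assumed effective host-to-GPU bandwidth

def reswap_cost_ms(model):
    return model["size_gb"] / PCIE_GBPS * 1000.0

def pick_victim(resident_models):
    # Prefer evicting light models; heavy models stay cached on the GPU.
    return min(resident_models, key=reswap_cost_ms)

resident = [
    {"name": "resnet18",  "size_gb": 0.05},
    {"name": "bert-qa",   "size_gb": 1.3},
    {"name": "resnet152", "size_gb": 0.24},
]
print(pick_victim(resident)["name"])  # evicts resnet18, keeps the heavy model
```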
SLO-aware request queueing.
We next evaluate 's SLO-aware request queueing policy (<ref>).
We compare with -FIFO under 560 ResNet-152 functions and vary their deadlines from 60 ms to 80 ms.
Fig. <ref> shows the ratio of SLO-compliance functions using -FIFO and .
's policy is designed to satisfy various user-specified SLOs, and thus we vary its target deadline accordingly, adjusting the request execution order to support as many functions as possible.
In particular, SA-60, SA-70, and SA-80 can achieve the best performance when setting deadlines to 60 ms, 70 ms, and 80 ms, respectively.
All of them significantly outperform -FIFO at every deadline.
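To make the contrast with FIFO concrete, the sketch below orders queued requests by their absolute deadlines rather than arrival order. It only illustrates the general idea of deadline-aware queueing; the platform's actual policy additionally uses the queueing parameter α discussed in the appendix, and the class and method names here are our own.

```python
import heapq, time

# Minimal sketch of deadline-aware request ordering (in contrast to FIFO):
# requests are popped in order of their absolute deadline so that functions
# with tighter remaining slack run first.

class SLOQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker to keep ordering stable

    def push(self, request, deadline_ms):
        arrival = time.monotonic() * 1000.0
        heapq.heappush(self._heap, (arrival + deadline_ms, self._seq, request))
        self._seq += 1

    def pop(self):
        _, _, request = heapq.heappop(self._heap)
        return request

q = SLOQueue()
q.push("resnet152-req", deadline_ms=60)
q.push("bert-qa-req", deadline_ms=250)
print(q.pop())  # the tighter-deadline ResNet request is served first
```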
§.§ at Cluster
We next evaluate on a cluster deployment with 6 GPU workers.
As running on a cluster incurs additional system overhead, we relax the SLOs and set deadlines for CV models and Bert-qa to 150 ms and 250 ms, respectively.
Baselines.
We compare with three baselines.
(1) Native uses native GPU containers running on specific GPUs, which is a common practice in .
Each GPU worker can only host a fixed, small number of functions due to limited GPU memory (<ref>).
We perform request-level GPU sharing in Native, where requests to a GPU are sequentially executed in a FIFO manner.
(2) NonSwap allows GPU remoting similar to , but disables model swapping.
Compared with Native, NonSwap can share GPU runtime across functions, and thus reduces memory footprint and enables more models per GPU.
(3) SimpleSwap enables model swapping compared with NonSwap.
Unlike , it only supports simple policies discussed in <ref>, such as FIFO request queueing, random scheduling, and LRU model eviction.
Cluster evaluation.
Fig. <ref> compares the performance of and three baselines.
We first compare the ratio of SLO-compliance functions when increasing the function number from 200 to 1200.
As shown in Fig. <ref>, only can consistently satisfy per-function latency SLOs under a large number of functions, e.g., over 1000.
In particular, Native can easily saturate all GPU memory and only supports up to 500 functions, which leads to low GPU utilization.
Compared with Native, NonSwap relaxes the GPU memory constraint and enables more functions.
However, it still fixes the binding between functions and GPUs, which can cause a number of GPUs to be overloaded by requests and lead to long tail latency.
For example, the ratio of SLO-compliance functions under NonSwap dramatically drops from 800 functions.
While SimpleSwap can outperform NonSwap with model swapping, it still suffers from severe SLO violations under 1000 functions.
Since SimpleSwap's model swapping is inefficient and incurs high overhead, it can result in long end-to-end latencies.
Fig. <ref> further compares the behaviors of , SimpleSwap, and NonSwap under 1000 functions.
We show the per-request latency normalized to corresponding deadlines (left).
Under , almost every request can be served within its deadline, leading to a normalized latency below 1.
However, both SimpleSwap and NonSwap suffer from long tail latency, which can be over 4× and 7× of the deadline, respectively.
We also compare per-worker GPU load of the three solutions.
For each node, we normalize loads of its four GPUs to the maximum, and calculate the variance.
Lower variance indicates better load balancing.
Fig. <ref> (right) plots the per-worker load variance under , SimpleSwap, and NonSwap, with 6 workers for each system.
Compared with NonSwap, and SimpleSwap can effectively balance GPU load across workers with model swapping, achieving much less load variance.
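For reference, the balance metric described above can be computed as in the short sketch below; the load numbers are made up purely for illustration, and the exact definition of per-GPU load is as described in the text.

```python
import numpy as np

# Per-node load-balance metric: normalize the loads of a node's GPUs to the
# node's maximum and take the variance (lower variance = better balancing).

def per_worker_load_variance(gpu_loads):
    loads = np.asarray(gpu_loads, dtype=float)
    normalized = loads / loads.max()
    return normalized.var()

balanced   = per_worker_load_variance([950, 1000, 980, 990])
imbalanced = per_worker_load_variance([200, 1000, 300, 250])
print(f"balanced={balanced:.4f}, imbalanced={imbalanced:.4f}")
```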
§ DISCUSSION AND RELATED WORK
Large models.
Swapping large models can incur considerable overhead and long end-to-end latency.
Compared with keeping a full model in GPU memory, caching only part of its parameters can be more efficient when combined with model pipelining.
We plan to explore how a large model can be partially cached in order to navigate the tradeoff between inference performance and GPU memory cost, which we leave as future work.
In addition, increasingly popular large language models can even exceed the memory capacity of a single GPU, which requires careful designs of model parallelism and pipelining <cit.>.
Supporting these models in serverless clouds is even more challenging, which we also leave as future work.
GPU virtualization.
Though in this work we mainly discuss sharing an entire physical GPU across functions, the design of model swapping and request-level sharing can be naturally extended to virtual GPUs.
In fact, allows users to configure a function with a proportion of GPU, which is enabled by GPU virtualization <cit.>.
Such techniques can be integrated into 's GPU server to partition a GPU into multiple virtual instances, each shared across multiple functions at request level.
This enables much finer-grained GPU sharing, which we leave for future work.
Model swapping from local disk.
For functions with extremely low request rates (e.g., a few requests per hour in Fig. <ref>), keeping many models in host memory may still saturate it and lead to resource inefficiency.
Therefore, the platform can further move those models to local disk, trading swapping performance for lower keep-alive cost, which we leave for future work.
GPU function snapshotting.
Models can serve as snapshots of inference functions, so 's model swapping can also be exploited in function scaling.
When a serverless platform requires launching new instances for a running function, it can quickly duplicate the model across GPUs through NVLink, leading to substantially reduced startup latency compared with function cold starts.
We leave this as future work.
Model pipeline.
Recent works have also leveraged model pipeline to reduce inference latency, such as PipeSwitch <cit.> and DeepPlan <cit.>.
These works focus on improving inference performance of individual models, which is orthogonal to and can be applied to further optimize model swapping.
Operator-level optimizations.
By redirecting CUDA API calls to the server, obtains operator-level knowledge when executing an ML model.
Recent works have proposed to optimize the execution of operators and GPU kernels, such as operator fusion <cit.>, which can be exploited by to further speed up model inference.
§ CONCLUSION
We present , a GPU-enabled serverless platform for SLO-aware, resource-efficient model inference.
keeps GPU functions alive in host and supports efficient model swapping, which can easily enable pay-per-use billing and efficient GPU sharing across functions.
can also meet latency SLOs for inference functions at low cost with its scheduling and model management policies.
We have implemented atop , and evaluations show that can effectively comply with per-function latency SLOs and improve GPU efficiency.
§ APPENDIX
§.§ CUDA API
performs asynchronous CUDA API redirection to reduce communication overhead for efficient GPU remoting (see <ref>).
We divide CUDA APIs into two categories, i.e., asynchronous and synchronous APIs, according to whether they require GPU-to-host data transfer and update state in host.
Table <ref> lists the primary CUDA APIs supported in and their categories.
CUDA APIs issued by intermediate steps during model inference are generally asynchronous.
In addition to listed APIs, model inference can also trigger a few other CUDA APIs in our experiments, e.g., .
These APIs do not affect inference execution, so their results can be cached in the GPU clients without repeatedly querying the executor, which further reduces communication.
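The following sketch illustrates the general client-side batching idea behind asynchronous API redirection: calls that do not return GPU state to the host are buffered and shipped in batches, while a synchronous call forces a flush. The call names and the `send_batch` transport are placeholders, not real CUDA bindings or the platform's actual interfaces.

```python
# Sketch of asynchronous API redirection on the client side: asynchronous
# calls are buffered locally (no round trip), and a synchronous call flushes
# the whole batch to the executor in a single communication.

class GPUClientProxy:
    def __init__(self, send_batch):
        self._pending = []
        self._send_batch = send_batch  # transport to the GPU server/executor

    def call_async(self, api_name, *args):
        self._pending.append((api_name, args))        # no round trip yet

    def call_sync(self, api_name, *args):
        self._pending.append((api_name, args))
        results = self._send_batch(self._pending)     # one round trip for the batch
        self._pending = []
        return results[-1]                            # result of the sync call

proxy = GPUClientProxy(send_batch=lambda batch: [None] * len(batch))
proxy.call_async("launch_kernel", "conv1")
proxy.call_async("launch_kernel", "conv2")
proxy.call_sync("copy_output_to_host")  # single communication for three calls
```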
§.§ Auto-configuration in Request Queueing
can automatically configure α based on overall load such as to maximize the number of SLO-compliance functions per node (see <ref>).
Intuitively, when the load is low and an increasing number of functions can satisfy SLOs, α should grow to prioritize more functions to enable more SLO-compliance functions.
On the contrary, should be conservative and prevent functions from violating their SLOs by decreasing α when a node is overloaded.
Therefore, we propose an auto-configuration algorithm for α, which is inspired by TCP congestion control.
Algorithm <ref> shows the pseudo code, where scalar and threshold are two parameters to determine how much and when α should change.
By default, we set scalar to 2 and threshold to 0.04, which properly adjusts α according to our profiling.
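Since the full pseudo code is given in the referenced algorithm, the sketch below is only one plausible reading of the TCP-style control loop: an additive increase when the SLO-compliance ratio improves by more than the threshold and a multiplicative backoff when it degrades. The function names, the monitored `slo_compliance_ratio` signal, and the exact update rule are our own assumptions.

```python
# Illustrative AIMD-style controller for the queueing parameter alpha,
# assuming the behavior described in the text: grow alpha when more functions
# meet their SLOs, back off when the node is overloaded.

SCALAR = 2        # how much alpha changes (default from the paper)
THRESHOLD = 0.04  # when alpha changes (default from the paper)

def update_alpha(alpha, prev_ratio, curr_ratio):
    """Return the new alpha given previous and current SLO-compliance ratios."""
    delta = curr_ratio - prev_ratio
    if delta > THRESHOLD:
        return alpha + SCALAR            # additive increase: prioritize more functions
    elif delta < -THRESHOLD:
        return max(1.0, alpha / SCALAR)  # multiplicative decrease: protect existing SLOs
    return alpha                         # within the threshold band: keep alpha unchanged

# Example control loop (the monitoring interval and ratio source are assumptions).
alpha, prev = 4.0, 0.0
for curr in [0.70, 0.78, 0.80, 0.72, 0.71]:   # observed SLO-compliance ratios
    alpha = update_alpha(alpha, prev, curr)
    prev = curr
    print(f"compliance={curr:.2f} -> alpha={alpha:.1f}")
```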
|
http://arxiv.org/abs/2306.06046v1
|
20230609171427
|
Redeveloping a CLEAN Deconvolution Algorithm for Scatter-Broadened Radio Pulsar Signals
|
[
"Olivia Young",
"Michael Lam"
] |
astro-ph.HE
|
[
"astro-ph.HE"
] |
Olivia Young (ORCID 0000-0002-0883-0688)
School of Physics and Astronomy, Rochester Institute of Technology, Rochester, NY 14623, USA
Laboratory for Multiwavelength Astrophysics, Rochester Institute of Technology, Rochester, NY 14623, USA
Michael T. Lam (ORCID 0000-0003-0721-651X)
SETI Institute, 339 N Bernardo Ave Suite 200, Mountain View, CA 94043, USA
School of Physics and Astronomy, Rochester Institute of Technology, Rochester, NY 14623, USA
Laboratory for Multiwavelength Astrophysics, Rochester Institute of Technology, Rochester, NY 14623, USA
Broadband radio waves emitted from pulsars are distorted and delayed as they propagate toward the Earth due to interactions with the free electrons that compose the interstellar medium, with lower radio frequencies being more impacted than higher frequencies. Multipath propagation in the interstellar medium results in both later times of arrival for the lower frequencies and causes the observed pulse to arrive with a broadened tail described via the pulse broadening function. We employ the CLEAN deconvolution technique to recover both the intrinsic pulse shape and pulse broadening function. This work expands upon previous descriptions of CLEAN deconvolution used in pulse broadening analyses by parameterizing the efficacy on simulated data and developing a suite of tests to establish which of a set of figures of merit lead to an automatic and consistent determination of the scattering timescale and its uncertainty. We compare our algorithm to simulations performed on cyclic spectroscopy estimates of the scattering timescale. We test our improved algorithm on the highly scattered millisecond pulsar J1903+0327, showing the scattering timescale to change over years, consistent with estimates of the refractive timescale of the pulsar.
§ INTRODUCTION
Radio pulsars provide unique probes of the ionized interstellar medium (ISM) and allow us to gain insight into its structure and variability by modeling the effects of the delays and distortions on the emitted radio pulses as observed at the Earth <cit.>. While delays due to dispersion are routinely modeled in pulsar timing experiments <cit.>, distortions due to multipath propagation are not and it can be difficult to do so <cit.>. Determining the distortion level is difficult due to both the intrinsic pulse shape and the underlying geometry and spectrum of the turbulent medium being unknown <cit.>, and the time and path-dependent variations in the observed pulse broadening function <cit.>. Not only can separating these effects yield important insights into the nature of the ionized ISM but also provide proper pulse profile impact mitigation for pulsars used in precision timing experiments such as low-frequency gravitational wave detectors <cit.>.
CLEAN deconvolution, originally developed for radio interferometric imaging <cit.>, was applied to radio pulses in <cit.> to recover both the pulse broadening (scattering) timescale and the intrinsic shape simultaneously via the use of an assumed PBF. Unlike in synthesis imaging where the positions of the array elements are known while the sky brightness distribution is not, neither the analogous PBF nor intrinsic pulse shape, respectively, are known. <cit.> introduced figures of merit to iteratively test trial values of under an assumed PBF, demonstrating variation in the rebuilt intrinsic pulses for PSR J1852+0031 for different PBFs and application to several other pulsars.
We expand upon the CLEAN deconvolution algorithm presented in <cit.> to prepare for automated deployment on data sets of significantly more pulsars. In this work, we primarily focus on the broadening effects of the ISM and recovering with the intention of applying the algorithm to the multi-frequency profiles of pulsars distributed throughout the galaxy to understand both the bulk properties of the turbulence in the ISM and specific unique lines of sight. Understanding these properties inform priors on pulsar timing arrays and other high-precision pulsar timing experiments in which scattering biases estimates of the arrival times <cit.>. This work is the first of several papers on robust method development and deployment on real data from a larger selection of pulsar observations.
In <ref>, we describe the CLEAN deconvolution method as presented and expanded upon the work in <cit.>. In <ref>, we perform systematic tests on simulated data, demonstrating the level of recall in the input values and quantifying our uncertainties in the estimates. We also compare our results with the cyclic spectroscopy (CS) deconvolution technique and discuss the tradeoff of limitations in our method with the extensive computational complexity of the CS method. Finally, we apply our method to PSR J1903+0327 in <ref> and discuss our future directions in <ref>.
§ THE CLEAN DECONVOLUTION ALGORITHM
CLEAN deconvolution for radio pulsars exploits the one-dimensional nature of pulsar profiles and differs from traditional CLEAN approaches where the instrumental response function is known. The analogous function in this work, the PBF, must be assumed from a priori models. <cit.> developed a method that can both determine the pulse broadening timescale and recover the intrinsic pulse from observational pulsar profile data via the employment of a CLEAN deconvolution algorithm and figures of merit (FOMs). CLEAN can be applied using different models of the PBF of the ISM, making it a broadly encompassing method. In this work, we assumed the PBF for the commonly-used thin-screen approximation for the ISM's geometry.
<cit.> described the CLEAN algorithm for use in the deconvolution of radio pulsar pulses, along with the development of five FOMs used to determine the correct broadening timescale from a set of test values. In this section, we discuss the algorithm both as originally described and how the algorithm has been redeveloped for this work.
§.§ Modeling the Observed Pulse Profile
We assumed the observed pulse y(t) to result from the convolution of the intrinsic pulse x(t), the PBF g(t), and the instrumental response function r(t), given by
y(t) = x(t) ⊗ g(t) ⊗ r(t).
We simulated our intrinsic pulse x(t) as a normalized, single-peaked Gaussian shape, which minimizes the asymmetry of the rebuilt pulse and provides a baseline comparison against the use of the FOM Γ discussed later in Section <ref>.
The PBF for the ISM is commonly modeled as a thin screen <cit.> for simplicity. The thin-screen approximation simplifies calculations, separating out the physical turbulent processes from the geometry of the intervening gas, and in the case of the PBF, simplifies the form as well; the thin-screen model works reasonably well for lines of sight with a single overdense region. We used this model in our work, given by
g(t|) = 1/exp(-t/) U(t),
where U(t) is the Heaviside step function.
Lastly, the instrumental response function denoted as r(t) determines the resolution of the observed data. We assumed a delta function[For clarity, we use the digital signal processing definition of the unit-height sample function being δ(t) = 1 if t=0, otherwise 0, which allows us to multiply by a constant as in Eq <ref>.] as an approximation for the instrumental response function with a width of one phase bin.
§.§ CLEAN Deconvolution
CLEAN iteratively subtracts replicated components from an observed pulse until the residual structure falls below the root mean squared (rms) of the off-pulse noise. As we do not know the value of τ_d a priori, this iterative subtraction process is repeated for a range of test values, with the assumed correct value chosen using FOMs. For the purposes of the algorithm, we treat τ_d as being measured in time-bin resolution units across the folded pulse's phase, which has N_ϕ total bins. We step through our CLEAN deconvolution process below.
* CLEAN Component Creation:
We first identify the location of the maximum of the deconstructed pulse after the i-th iteration, t_i ≡*argmax[y_i(t)]; our first iteration begins with the originally observed pulse y_0(t). Each CLEAN component (CC) y_c(t|t_i) starts with a delta function δ(t-t_i) at the location of the maximum of the observed pulse, max[y_i(t)] multiplied by the loop gain value γ, i.e.,
y_c(t|t_i) = γ{max[y_i(t)] }δ(t-t_i) ≡ C_i δ(t-t_i).
Smaller loop gains result in a greater number of iterations before the stopping criterion is met but allow for finer intrinsic features to be resolved <cit.>; in this work, we used γ = 0.05.
* Iterative Subtraction off the Main Pulse: After we construct y_c(t|t_i), we convolve the CC with the instrumental response function r(t) and the PBF with a given test , and then subtract this shape from the i-th iteration pulse. The change in the profile at each iteration is described as
Δ y_i(t) = y_i(t) - { y_c(t|t_i) ⊗[g(t|τ_d) ⊗ r(t)] }
with y_i(t) as the input pulse profile to the i-th iteration. The CCs are then iteratively subtracted off the main pulse, with the resulting subtracted profile becoming the pulse profile for the next CLEAN iteration so that
y_i+1(t) = Δ y_i(t).
* Termination of CLEAN Algorithm: The CLEAN algorithm is terminated when the maximum of the input pulse profile falls below the rms of the off-pulse noise, i.e., max[y_i(t)] ≤σ_ off.
The CLEAN algorithm above provides the list of CCs along with the residual noise. The CCs can be used to reconstruct the intrinsic pulse shape, but for the purposes of this work, our final goal was to determine τ_d. The algorithm can run with any input value of τ_d; therefore, our iterative method is repeated with different trial values, from which we derived FOMs based on the reconstructed intrinsic pulse shape and the residual noise that resulted from each trial τ_d.
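The following compact sketch captures steps 1–3 for a single trial τ_d (in bins), assuming a thin-screen exponential PBF built inline; the function and variable names are ours, and the loop gain default of 0.05 matches the value quoted above.

```python
import numpy as np

# Compact sketch of one CLEAN trial for a single test tau_d: place a scaled
# delta-function component at the profile maximum, convolve it with the trial
# PBF, subtract, and stop when the peak falls below the off-pulse rms.

def clean_trial(y_obs, tau_d, sigma_off, gain=0.05, max_iter=100000):
    n = y_obs.size
    pbf = np.exp(-np.arange(n) / tau_d) / tau_d
    pbf /= pbf.sum()
    residual = y_obs.astype(float).copy()
    components = []                          # list of (bin index t_i, amplitude C_i)
    for _ in range(max_iter):
        t_i = int(np.argmax(residual))
        peak = residual[t_i]
        if peak <= sigma_off:                # termination criterion
            break
        c_i = gain * peak
        components.append((t_i, c_i))
        scattered = np.zeros(n)
        scattered[t_i] = c_i                 # CLEAN component: C_i * delta(t - t_i)
        scattered = np.convolve(scattered, pbf)[:n]
        residual -= scattered                # iterative subtraction
    return components, residual
```

Repeating `clean_trial` over a grid of test τ_d values yields, for each trial, the component list and residual from which the FOMs described next are computed.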
§.§ Figures of Merit
We employed six FOMs as follows: a measure of positivity of the residual noise (f_ r), a measure of skewness of the recovered intrinsic pulse (Γ), a count of the on-pulse-region residual points below the off-pulse noise level (N_ f/N_ϕ), a measure of the ratio of the rms of the residual noise to the off-pulse noise rms (σ_ offc/σ_ off), a measure of the combined positivity and skewness measure (f_ c), and a count of the number of CLEAN components each test uses before the peak of the profile falls below the noise level (N_ iter). All except the last are described in <cit.>. These six FOMs fall into three broad categories: figures based on the rebuilt intrinsic pulse, figures based on the residual noise after the CLEAN algorithm terminates, and a figure based on the number of CLEAN components generated before the algorithm terminates. We describe the FOMs grouped into these three categories in the sections below.
In Figure <ref>, we see the ideal result of the use of six FOMs and the methods for determining the “correct” .
§.§.§ FOM Measuring the Shape of the Rebuilt Intrinsic Pulse
We examine the CC amplitudes C_i and locations t_i found during the CLEAN process (e.g., see Eq. <ref>) to compute the Γ FOM. In our simulations, we created intrinsic pulses that are symmetric Gaussians, and therefore the correct rebuilt pulse should always be a perfectly symmetric Gaussian if the correct is used. In reality, intrinsic pulses may not be perfectly symmetric, and we discuss these implications in <ref>.
The Γ of the rebuilt pulses is calculated for each test by computing the third standardized moment
Γ = <t^3>/< t^2>^3/2,
where <t^n> is
<t^n> = ∑_i = 1^n_c (t_i - t̅)^n C_i/∑_i = 1^n_c C_i
and t̅ is
t̅ = ∑_i = 1^n_c t_i C_i/∑_i = 1^n_c C_i.
The resulting Γ is ideally represented by the example in panel 3 of Figure <ref>, where the sharp fall-off point represents the general location of the correct .
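A minimal sketch of this FOM, taking the (t_i, C_i) pairs returned by a CLEAN trial and computing the amplitude-weighted third standardized moment of the component locations, is given below; the function name is our own.

```python
import numpy as np

# Sketch of the skewness FOM: amplitude-weighted third standardized moment
# of the CLEAN-component locations, following the equations above.

def gamma_fom(components):
    t = np.array([ti for ti, _ in components], dtype=float)
    c = np.array([ci for _, ci in components], dtype=float)
    t_bar = np.sum(t * c) / np.sum(c)
    m2 = np.sum((t - t_bar) ** 2 * c) / np.sum(c)
    m3 = np.sum((t - t_bar) ** 3 * c) / np.sum(c)
    return m3 / m2 ** 1.5
```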
§.§.§ FOMs Based on the Residual Noise
Three of our FOMs are built from measures of the residual noise after the completion of the CLEAN algorithm. We will also discuss a FOM that combines one of these FOMs (positivity) with the Γ FOM discussed previously – this is an important FOM as described in <cit.>.
The residual noise is one of the end products of the CLEAN deconvolution process. A test that is larger than the correct value of results in a progressively larger over-subtraction as shown in Figure <ref>. If the test is smaller than the correct value, it results in an unremoved noise floor in the baseline, again see Figure <ref>.
We can first calculate the rms of the residual noise
σ_ offc = √(1/N_ϕ∑_j = 1^N_ϕ [Δ y_i (t_j|τ_d) ]^2),
in comparison to the rms of the off-pulse region, σ_ off, where this ratio σ_ offc/σ_ off will grow whenever over- or under-subtraction is performed and should otherwise approach a value of 1 for the appropriate subtraction. This is roughly equivalent to the single metric used to automatically determine used in <cit.> for multi-frequency data from 347 pulsars.
Beyond the rms, we can count the total number of residual noise points N_ f within a certain threshold level (we chose 3σ_ off) of the noise that satisfies the condition
|y_i - y_off| ≤ 3σ_off.
As seen in Figure <ref>, for under-subtraction we expect all of the points to satisfy the condition and so the ratio N_ f/N_ϕ = 1, but the ratio will drop as over-subtraction occurs.
Besides these two metrics which measure deviations away from the rms noise, we also wish to enforce non-negativity of the residual profile since we know pulsar signals must be above the baseline noise. A f_ r FOM was defined[<cit.> introduced a multiplicative weight of order unity but did not specify the value. Here we take that weight to be 1 and so ignore introducing it in the main text. They also include a Heaviside step function, U_Δ y. As this only changes the overall normalization of our FOM in our simulation runs, we ignore this in our work.] by <cit.> in terms of a sum over the bins of the residual noise,
f_ r = 1/N_ϕσ_off^2∑_j = 1^N_ϕ [Δ y_i (t_j|τ_d) ]^2 .
If Δ y_i(t) is Gaussian white noise with rms equal to σ_ off, then as with the previous FOM, we would expect f_ r≈ 1, while over-subtraction would force the sum to increase well beyond 1.
<cit.> defined the f_ c FOM, equally weighting the rebuilt intrinsic pulse shape and the residual noise by
f_ c = Γ + f_ r/2,
thus providing higher confidence in test values with favorable values of both skewness and positivity. The typical shape of this FOM is shown in panel 5 of Figure <ref>.
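The residual-based FOMs for a single trial can be gathered as in the sketch below. The normalization of f_r here follows the convention that Gaussian residuals with rms σ_off give f_r ≈ 1; the original reference adopts a slightly different weighting, so this is an illustrative convention rather than the definitive form.

```python
import numpy as np

# Sketch of the residual-noise FOMs for one trial tau_d, given the residual
# profile from CLEAN, the off-pulse rms, and the skewness Gamma.

def residual_foms(residual, sigma_off, gamma, threshold=3.0):
    n_phi = residual.size
    sigma_offc = np.sqrt(np.mean(residual ** 2))       # rms of the residuals
    n_f = np.sum(np.abs(residual) <= threshold * sigma_off)
    f_r = np.mean(residual ** 2) / sigma_off ** 2      # positivity-style measure
    f_c = 0.5 * (gamma + f_r)                          # combined FOM
    return {
        "sigma_ratio": sigma_offc / sigma_off,
        "nf_ratio": n_f / n_phi,
        "f_r": f_r,
        "f_c": f_c,
    }
```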
§.§.§ FOM Measuring the Number of Iterations Performed
We developed this FOM to more directly measure the fit of the re-convolved CCs broadening tails to the broadening of the observed pulse. As the amplitude for re-convolved CCs with larger broadening tails is smaller than those with smaller broadening tails (due to the normalization in Eq. <ref>), we expect a general increase in the number of iterations needed to deconvolve the observed pulse. Similarly, when re-convolved CCs with smaller broadening tails are subtracted from a pulse with a larger true broadening tail, more iterations will be required. However, when the CCs are convolved with the correct value of , neither under nor over-subtraction occurs, resulting in fewer iterations being needed. Therefore, we expect a dip in our FOM around the correct value of .
§.§ Automating the Choice of the Correct Value
These FOMs were originally constructed to pinpoint the correct value of by eye. This approach is impractical for large data sets, so we automated this process. We found that the simple approach of computing the numerical third derivative of each function with respect to and finding the maximum has yielded good results, though the exact recall depends on both the value of and the pulse S/N. More complicated algorithms will be employed in future works but the systematic error introduced by this choice is small in comparison to other noise sources, as shown next, so we opted to use it.
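A minimal sketch of this automation step is shown below: numerically differentiate a FOM curve three times over the test grid and take the grid value at the maximum of the third derivative. The function name is ours.

```python
import numpy as np

# Sketch of the automated choice: third numerical derivative of a FOM curve
# with respect to the test tau_d grid, then argmax.

def pick_tau_from_fom(tau_grid, fom_values):
    d3 = np.gradient(np.gradient(np.gradient(fom_values, tau_grid), tau_grid), tau_grid)
    return tau_grid[int(np.argmax(d3))]
```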
§ AUTOMATED ALGORITHM PERFORMANCE
In this section, we will discuss the performance of our automated CLEAN method.
An in-depth description of our redeveloped CLEAN algorithm in Python, as well as notes on how to use the open source versions available on <https://zenodo.org/badge/latestdoi/524167339>, can be found in <cit.>.
We wished to robustly quantify the “correctness” of our estimates in simulated data so that we could automatically assign uncertainties to our estimates on real data. To that end, we simulated multiple data sets with different input parameters to determine how these will affect the recall. Ideally, as in <cit.> for the cyclic spectroscopy (CS) algorithm, only the S/N and of a profile should affect the recall accuracy of our CLEAN deconvolution, though we tested several other parameters as well.
To quantify the algorithm's performance, we computed a measure of the fractional average error bar. Within this work, the values returned by each FOM were given the same weight when calculating our error bars; each returned value is expressed as the fraction of the returned τ_d to the correct injected τ_d. Our fractional average error bars are defined as
ϵ_ ave = 1 - 1/N_ runs∑_i=1^N_ runs(ϵ_i/ N_ FOM)
where N_ runs is the number of simulations for a given data set, N_ FOM = 6 is the number of FOMs used, and ϵ_i, for readability, is defined as an unweighted sum of the values returned by our FOMs for each run (i = 1 … N_ runs),
ϵ_i = τ_N_ f/N_ϕ + τ_σ_ offc/σ_ off + τ_Γ + τ_ f_ r + τ_f_ c + τ_N_ iter.
§.§ Testing the Impact of S/N and on Recall
We first tested how CLEAN performs based on different injected pulse S/N and combinations. We simulated data sets using several of the characteristics of PSR B1937+21 (the first-known millisecond pulsar and a known scattered source) as follows; these parameters are also shown in Table <ref>.
PSR B1937+21 has a spin period of 1.557 ms and a full-width at half maximum (FWHM) of 38.2 μs <cit.>. To reduce computing time, we used different numbers of phase bins depending on the injected value as shown in Table <ref>; we show in the next subsection that there is minimal impact in the recovery of depending on the phase resolution of the pulses so long as the scattering tails are resolved. For each of our runs, we tested across 100 equally-spaced steps between 0.5 and 1.5 times the injected τ_ d, correct. For each S/N- pair, we simulated and ran CLEAN on 60 pulse shapes.
We choose our S/N- values in the same style as <cit.> to more directly compare our CLEAN deconvolution method with the CS algorithm. While CLEAN works on an averaged pulse profile, CS uses raw voltage data prior to folding to recover the full impulse response function, making the latter more computationally intensive (though assuming no specific PBF). These methods are therefore difficult to directly compare, but we can expect to see improved performance for both methods as either S/N or increase.
Indeed, after running our simulations, we see this expected behavior in Figure <ref>, where darker colors indicate better recovery of , which matches what is seen in <cit.> for CS. The numerical values shown in Figure <ref> are in terms of the percentage of the correct , given by ϵ_ ave.
Table: Automated Algorithm Simulation Parameters
Parameter | Value
Spin period | 1.557 ms
Pulse FWHM | 38.2 μs
N_ϕ for τ_ d = 1, 2, 4 μs | 2048
N_ϕ for τ_ d = 8, 16, 32 μs | 1024
N_ϕ for τ_ d = 64, 128, 256 μs | 512
Test τ_ d range | (0.5τ_ d, correct – 1.5τ_ d, correct)
Number of steps in test τ_ d array | 100
In Figure <ref>, we show how well each individual FOM performs, with each panel showing the recovery over the full range of S/N- pairs.
Smaller dots indicate smaller error bars, and thus a more accurate performance. Poor performance from one FOM will impact our averaged recall, as an unweighted average is currently employed. We see that in general, the performance of each FOM improved with higher S/N or τ_d, like the average, though not all behave equally. For example, the N_ f/N_ϕ and σ_ offc/σ_ off FOMs appear to perform better at somewhat lower τ_d than the other FOMs. While the skewness Γ does not perform as well at the high-S/N, high-τ_d end, it does perform marginally better than the previously mentioned two FOMs otherwise. In future works, we will explore developing weights for each FOM in constructing the average ϵ_ ave to improve the average accuracy of the algorithm.
§.§ Testing Secondary Parameter Contributions to Recall Error
While we assumed the main contributors to the effectiveness of our algorithm to be our primary parameters, S/N and , we wanted to ensure that secondary parameters were not significant contributors to our recall error. We created a small-scale parameterization set via simulation of a base pulse profile with = 256 bins and S/N = 2600. We chose very large values for both and S/N as the method was able to reliably recall the correct for large and S/N values (see Section 3.2). This set was used to determine how the number of bins in our observation, the FWHM of the intrinsic pulse, and the user-defined step size and range of the test array affected the algorithm's performance. Additionally, our previous data set assumed an intrinsic pulse with similar parameters to B1937+21 only. Therefore, an additional motivation for probing these secondary parameters was to determine if we can extrapolate our results to observations of other pulsars with varying FWHMs and numbers of phase bins.
For these parameterization runs we used the base values shown in the second column of Table <ref> and iterated over the values shown in the third column. We run 20 simulations for each variation, which gave insight into these parameters' contribution to our recall and allowed for exploration into the expected larger contributions of and S/N to the recall error.
Table: Secondary Parameters Tested
Parameter | Base Value | Values Tested
Number of phase bins N_ϕ | 256 | {128, 256, 512, 1024, 2048}
FWHM | (1/8)N_ϕ | [1/64, 1/32, 1/16, 1/8, 1/4, 1/2]N_ϕ
Range of test τ_ d | (0.5 τ_ d, correct – 1.5 τ_ d, correct) | (0.1 – 1.0), (0.1 – 2.0), (0.4 – 1.6), (0.5 – 1.5) τ_ d, correct
Number of steps in test τ_ d array | 100 | [10, 20, 50, 100, 200]
As many pulse profiles are recorded with a different number of phase bins <cit.>, we tested to see how the phase resolution of the observation affected our recall. We simulated data sets with ranging from 128 to 2048. In Figure <ref>, we see good agreement between the individual recall values for each run and the averages varying within 10%. Thus, the minor variations in the average recalls could be explained by our limited number of runs resulting in incomplete coverage of the algorithm's performance and we therefore assumed that the number of bins in the observed pulse profile was not a significant contributor to our total recall.
Results of testing how the FWHM of the intrinsic pulse affected our recall are shown in Figure <ref>, and reveal a large, though not unexpected, range in ϵ_ ave between the FWHMs tested. As the FWHM increases, the pulse takes up an increasing fraction of the observation window, thus making the CLEAN cutoff criterion of falling below the off-pulse noise level less effective. This result is corroborated by the findings of <cit.> when they found the CS method to be less effective on wider pulses.
In Figure <ref>, we see the contribution of the number of steps or interchangeably the step size of the test array to our recall error. We included in this analysis a correction factor of Δ/2, the largest base error induced due to large step sizes resulting in the correct not being directly tested. For example, if the correct is 10.5 μs, and our test array only samples every Δ = 1 μs, an error of 0.5 μs will be introduced, thus, we added this factor of Δ/2 to our ϵ_ ave to more conservatively estimate our uncertainties. The fractional average error bars returned vary within 5%, therefore we concluded that the number of steps in the test array was not a large contributor to the overall recall error.
Finally, we parameterized the contribution of the range of test iterated over to our recall. In Figure <ref>, we see that ranges that barely included the correct (second bar) result in poor performance as expected. This results from the shapes of the FOMs not being fully covered over the correct injected . Other ranges that include the correct have recalls within about 5% of each other, even when the ranges iterated over are much larger. Therefore, while more computationally intensive, we recommend running CLEAN over a large range of values to ensure the best estimate is chosen.
With the results of these runs, we see that these secondary effects have some small impact at large S/N and τ_d, but otherwise the most prominent influences on the recall of CLEAN deconvolution are the S/N and τ_d of the data. While there are some variations in the average recall for each of the parameters we tested, the average recalls varied within 10% or less for most tests, with the notable exceptions of large FWHMs and a test range that barely included the correct value of τ_d, both expected. We can also conclude that the results using one simulated pulsar can be extrapolated to both simulations of other pulsars and real observational data.
§ APPLYING CLEAN TO PSR J1903+0327
To demonstrate the efficacy of our algorithm, we tested CLEAN on real data from the pulsar J1903+0327. PSR J1903+0327 is a millisecond pulsar that has been monitored by pulsar timing array collaborations such as the North American Nanohertz Observatory for Gravitational Waves (NANOGrav; ) in the effort to detect low-frequency gravitational waves. While these collaborations self-select for pulsars with low amounts of pulse broadening (narrower pulses have higher timing precision), PSR J1903+0327 has some of the most prominent scattering in these data sets, with the broadening tail visible by eye. With over a decade of timing data on this pulsar, we analyzed the lowest-radio-frequency pulses in the NANOGrav 12.5-year data set over time, where broadening is the strongest, to investigate if variations in are detectable by our algorithm.
We created six summed profiles on which to deploy our CLEAN algorithm, with one profile corresponding to each year from 2012-2017 in our data set. We restricted the frequency band for each observation to 10 MHz centered at 1200 MHz to mitigate the additional broadening that would arise if we frequency-averaged the pulses together to boost S/N. Each summed profile consists of twelve monthly observations summed via cross-correlation. Cross-correlation is used to ensure the peaks of our profiles are properly aligned in time before they are summed, resulting in the highest possible S/N of the summed profile. This process was performed iteratively, with each new profile being cross-correlated with, and then added to, the summed profile.
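A minimal sketch of this iterative alignment-and-sum step is given below; it uses FFT-based circular cross-correlation to find the lag that best aligns each new profile with the running sum. The function name and implementation details are our own and stand in for the actual pipeline.

```python
import numpy as np

# Sketch of iterative profile summation: each new profile is circularly
# shifted to the lag that maximizes its cross-correlation with the running
# sum, then added, so that the peaks stay aligned and S/N accumulates.

def align_and_sum(profiles):
    total = np.array(profiles[0], dtype=float)
    for p in profiles[1:]:
        p = np.asarray(p, dtype=float)
        xcorr = np.fft.irfft(np.fft.rfft(total) * np.conj(np.fft.rfft(p)), n=total.size)
        lag = int(np.argmax(xcorr))          # best-fit circular shift
        total += np.roll(p, lag)
    return total
```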
The refractive timescale of PSR J1903+0327 is estimated to be between 1 and 2 years <cit.>. Therefore, summing across one year of observations is consistent with the PBF remaining unchanged across this time span. To further increase the S/N values for each profile, we used different Savitzky-Golay filters to smooth the resulting summed profile to the desired S/N level. In Figure <ref>, we see an example of this summed and smoothed pulse profile.
For our time series analysis, we used two different filtering techniques, both employing a Savitzky-Golay filter using a polynomial of order zero to fit the samples: using a filter window size necessary to achieve S/N of 70 and using a filter window of 5% of the observation length to achieve a higher S/N. We chose to create time series at two different levels of S/N to showcase the dependence of the algorithm's performance on S/N. We iterated through test values ranging from 100 to 500 bins for each run, with a step size of one bin. We see the results of these runs in Figures <ref> and <ref>, where we converted our returned values into units of microseconds.
We see what is expected in the FOMs for these time series: greater precision and more visible points of change for the high-S/N FOMs. Looking at Figures <ref> and <ref>, we see examples of how higher S/N results in sharper points of change in the FOMs, making the choice of the correct τ_d more precise. While this is true for all FOMs presented here, the difference is most explicit in the σ_ offc/σ_ off FOM (panel 2): at higher S/N there is a noticeable location where its slope begins to increase, whereas at lower S/N the slope increases more gradually, making the correct τ_d more difficult to pinpoint. This increased sharpness of the points of change in our FOMs translated into greater accuracy and better agreement across our FOMs, which can be seen reflected in the tighter clusters around the average returned values in our time series.
We also note some interesting results of this time series analysis, particularly the dip in 2015, followed by a drastic increase the following year. However, coupled with the unusual scattering indices measured in <cit.>, it is clear that an exponential PBF is not supported along this line of sight and a more complex model is necessary (Geiger et al. in prep). Nonetheless, we have shown via this analysis that not only does our CLEAN algorithm perform as expected on observational radio pulsar data given our set of assumptions, but also that employment of this algorithm holds potential for scientific insight into the ever-changing ISM.
§ FUTURE WORK AND CONCLUSIONS
Within this work, we discussed our motivations, introduced CLEAN deconvolution as presented in <cit.>, discussed our results and products of our implementation of CLEAN, our parameterization work, and results on observational data of PSR J1903+0327. Through our parameterization work, we have concluded that our replicated CLEAN algorithm works as expected: the main factors that influence the recall of the algorithm are the S/N and τ_d of the pulse profile, with higher values of both resulting in better recall. We have produced an algorithm that we can confidently deploy on larger sets of observational data. To that end, we have presented a brief analysis of PSR J1903+0327 at two S/N levels and discussed our findings, showing that our methods prove to be effective on observational data from radio pulsars and can thus provide insight into the time dependence of pulse-broadening timescales for many pulsars after automatic deployment.
Moving forward, we aim to further develop our CLEAN algorithm into a broadly applicable tool, focusing on improving upon or removing the need for a number of simplifications used within this methods paper. We will also deploy our algorithm on the data set used in <cit.>, the followup to the original CLEAN method introductory paper, and on additional large-scale data sets <cit.>. Using these data sets, we will use our CLEAN algorithm to provide measurements of across multiple frequencies along many lines of sight. This will give us greater insight into both the composition of the ISM and the intrinsic emission of radio pulsars.
Within this work, we have extensively tested our algorithm's performance on simulated pulses broadened using a thin-screen model of the ISM for our PBF. Future work will entail testing the effects of different pulse broadening functions, namely PBFs based on thick and uniform medium ISM models, on the performance of our algorithm. In addition, while our third derivative method for determining the intrinsic from our FOMs works well given high levels of S/N and large values, this may not hold for low values and low levels of S/N as the FOMs are not as smooth. Therefore, we will work on improving our automation efforts via the implementation of machine learning, thus allowing our recall rates to better reflect the performance of the algorithm.
We have greatly simplified radio pulsar emission by assuming symmetric Gaussian intrinsic pulses. However, perfectly symmetric pulses are uncommon in radio pulsars <cit.>.
Should the intrinsic pulse be non-symmetric, our Γ FOM will either be completely ineffective or lead to incorrect values of being chosen. Therefore, we must further probe the effects of non-symmetry on our FOM, and develop new FOMs that do not rely on assumed symmetry.
We have developed a Python-based CLEAN algorithm that behaves as expected, have parameterized the performance of this rebuilt algorithm on a variety of simulated test data sets, have developed a method to automate the process of choosing the correct from our FOM, and have proved the efficacy of our algorithm on observational data. We have also defined areas of improvement for our algorithm, and look forward to continuing to develop a well-rounded method for probing the ISM using radio pulsars.
OY is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2139292. We graciously acknowledge support received from NSF AAG award number 2009468, and NSF Physics Frontiers Center award number 2020265, which supports the NANOGrav project. We acknowledge Research Computing at the Rochester Institute of Technology for providing computational resources and support that have contributed to the research results reported in this publication.
|
http://arxiv.org/abs/2306.03085v1
|
20230605175641
|
Nonparametric Detection of Gerrymandering in Multiparty Elections
|
[
"Wojciech Słomczyński",
"Dariusz Stolicki",
"Stanisław Szufa"
] |
stat.AP
|
[
"stat.AP",
"91F10 (Primary) 62G08 (Secondary)",
"J.4; G.3"
] |
Nonparametric Detection of Gerrymandering in Multiparty Elections
Wojciech Słomczyński Dariusz Stolicki Stanisław Szufa
=================================================================
§ PRELIMINARIES
Most of the traditional methods developed for detecting gerrymandering in first-past-the-post (FPTP) electoral systems assume that there are only two political parties really contesting the election, or, at least, that the party system is regular in the sense that all parties field candidates in every district. This is certainly a very reasonable assumption in many cases: under a well-known empirical regularity known as Duverger's law, FPTP tends to be correlated with the emergence of two-party systems. Moreover, many of the authors working on gerrymandering detection are motivated by U.S. legislative elections (state and federal), where the regular two-party pattern of competition prevails. However, in many other systems using FPTP we discover significant deviations from such patterns in the form of regional parties, strong independent candidates, minor parties that forgo campaigning in districts where there is no local party organization, etc. In the face of such deviations, many of the traditional methods fail completely, as we shall point out below. Our objective, therefore, is to develop a method of detecting gerrymandering that can be applied to such partially-contested multiparty elections.
§.§ Contribution
Our main contribution consists of the development of a nonparametric method for detecting gerrymandering in partially-contested multiparty elections. By nonparametric we mean that, unlike most of the traditional statistical methods, the proposed method is free of assumptions about the probability distribution from which observed data points are drawn or the latent mechanism through which such data is generated. Instead, we use statistical learning to identify regularities on the basis of the available empirical data.
By a partially-contested multiparty election we mean any FPTP election where at least some candidates are affiliated with one or more political parties (after all, if every district-level election is completely independent and candidates cannot be affiliated into blocks, the very concept of gerrymandering as traditionally defined is meaningless), but for every party there is at most one affiliated candidate in every electoral district (so there is no electoral intra-party competition). For the sake of simplicity, we treat independent (i.e., non-party-affiliated) candidates as singleton parties.
In particular, we permit the following deviations from the two-party pattern of competition:
* the number of parties can differ from two,
* the number of candidates within each district can differ from two,
* a party can run candidates in any number of electoral districts,
* the set of parties contesting the election varies from one district to another.
Another area in which our approach differs from traditional methods for detecting gerrymandering is that it has been tailored towards testing a large ensemble of elections (not necessarily from the same jurisdiction) rather than a single election. For instance, our original scenario was to test for evidence of gerrymandering in close to 2,500 Polish municipal elections. In particular, the proposed methods, like all statistical learning methods, require the researcher who wants to use them to have a large training set of elections that they believe to be sufficiently similar insofar as the translation of votes into seats is concerned. If there is a large ensemble of elections being tested, it might form such a training set itself. There is no requirement that the training set and the tested set be disjoint as long as we can assume that gerrymandering is not ubiquitous in the testing set.
§.§ Prior Work
Among the methods of detecting gerrymandering that focus on the political characteristics of the districting plan (e.g., its impact on seats-votes translation or district-level vote distribution), the earliest focused on measuring how actual election results deviate from a theoretically or empirically determined seats-votes curve. Such functions, first introduced into political science by <cit.> with his rediscovery of the cube law, have been intensely studied from the 1950s to the 1990s (see, e.g., ). There is a somewhat broad consensus in the literature that a two-party seats-votes relation is usually described by a modified power law:
s_i/(1-s_i) = β_i (v_i/(1-v_i))^ρ,
where s_i and v_i are, respectively, the seat- and vote-share of the i-th party, β_i is a party-dependent parameter, and ρ is a constant <cit.>. However, only few authors have considered the case of multiparty elections <cit.>, and their results are mostly heuristic in nature, lacking formal theoretical grounding.
The state-of-the-art approach to detecting gerrymandering is the partisan symmetry method. The general concept was first proposed by <cit.>, who noted that an election should not be regarded as gerrymandered if it deviates from a model seats-votes curve as long as the deviation is the same for each party, i.e., each party has the same seats-votes curve. The main challenge here lies in obtaining that curve from a single realization. The original idea has been to extrapolate therefrom by assuming a uniform partisan swing, i.e., that as the aggregate vote share of a party changes, its district-level vote shares increase or decrease uniformly and independently of their original levels. This assumption, first proposed by <cit.>, has been employed by, inter alia, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and others. However, in light of both theoretical and empirical criticism of the uniform partisan swing assumption <cit.>, a more sophisticated extrapolation method has been developed by <cit.>, see also , and . However, neither of these two methods can account for multiple parties absent some unrealistic assumption that the relevant swing happens only between two parties identified in advance as major, but see an attempt to develop a multi-party variant of the Gelman-King method by .
The third approach is the efficiency gap method proposed by <cit.> and further developed in <cit.>. It is based on the assumption that in an unbiased election all contending parties should waste the same number of votes. While prima facie attractive, this assumption is actually highly problematic because it requires the electoral system to match a very specific seats-votes curve [, ]. In this respect it represents a methodological step backwards, making it again impossible to distinguish asymmetry from responsiveness. The McGhee-Stephanopoulos definition of wasted votes has also been criticized as counterintuitive [see, e.g., , and ]. From our perspective the primary weakness of the efficiency gap method, as well as of its many variants <cit.> is again the lack of accounting for multiple parties. The original efficiency gap criterion is violated in almost every multiparty election.
Finally, there are several methods designed to identify anomalies in the vote distribution indicative of standard gerrymandering techniques like packing and cracking. These include the mean-median difference test proposed by <cit.>, which measures the skewness of the vote distribution; the multimodality test put forward by <cit.>; the declination coefficient introduced by <cit.>, measuring the change in the shape of the cumulative distribution function of vote shares at 1/2; and the lopsided wins method of testing whether the difference between the winners' vote shares in districts won by the first and the second party is statistically significant <cit.>. Again, virtually all of those methods assume a two-party system. For instance, natural marginal vote share distributions in multiparty systems (such as the beta distribution or the log-normal distribution) are necessarily skewed. A similar assumption underlies the declination ratio and the lopsided wins test. The multimodality test, on the other hand, assumes a constant number of competitors.
§.§ Basic Concepts and Notation
Gerrymandering is usually defined as manipulation of electoral district boundaries aimed at achieving a political benefit. Hence, intentionality is inherent in the very concept. However, identical results can also arise non-intentionally. For instance, geographic concentration of one party's electorate in small areas (major cities, regions) can produce similar effects to intentional packing. We use the term `electoral bias' to refer to such `nonintentional gerrymandering'.
Our basic idea is to treat gerrymandering and electoral bias as statistical anomalies in the translation of votes into seats. Identification of such anomalies requires a reference point, either theoretical, such as a theoretical model of district-level vote distribution, or empirical, such as a large set of other elections that can be expected to have come from the same statistical population. The former approach is undoubtedly more elegant, but burdened with the risk that the theoretical model deviates from the empirical reality. Hence, in this paper we focus on the empirical approach.
There are three basic assumptions underlying our methodology. One is that we have a training set of elections that come from the same statistical population as the election we are studying. Another one is that gerrymandering (or any other form of electoral bias) is an exception rather than a rule. Thus, we assume that a substantial majority of the training set elections are free from bias. The third assumption is that while district-level results of individual candidates can be tainted by gerrymandering, aggregate electoral results (e.g., vote shares) never are.
One major limitation of our methodology lies in its inability to distinguish gerrymandering from natural electoral bias. This limitation is shared, however, with virtually all methods in which the evidence for gerrymandering is sought in analyzing voting patterns or any other variables which are ultimately a function of such patterns (e.g., seat shares, wasted votes, etc.). For many applications that may be enough, since for many potential end-users of our method it might not matter whether the bias in the electoral system is artificial or natural. Even for applications where that distinction matters, the proposed methods might still be useful to identify cases requiring more in-depth investigation, which is usually necessary to find evidence of the intent to gerrymander.
Let us introduce some basic notation to be used throughout this paper:
set of districts: We denote the set of districts by D := {1, …, c}.
set of parties: We denote the set of parties by P := {1, …, n}.
set of contested districts: For i ∈ P, we denote the set of districts in which the i-th party runs a candidate by D_i. Let c_i := |D_i|.
set of contesting parties: For k ∈ D, we denote the set of parties that run a candidate in the k-th district by P_k. Let n_k := |P_k|.
district-level vote share: For i ∈ P and k ∈ D, we denote the district-level vote share of the i-th party's candidate in the k-th district by v_i^k. If there was no such candidate, we assume v_i^k = 0.
district-level seat share: For i ∈ P and k ∈ D, let s_i^k equal 1 if the i-th party's candidate in the k-th district won the seat, and 0 otherwise.
district size: For k ∈ D, we denote the number of votes cast in the k-th district by w_k.
aggregate vote share: For i ∈ P, we denote the aggregate vote share of the i-th party by v_i := (∑_k ∈ D_i v_i^k w_k) / (∑_k ∈ D_i w_k).
aggregate seat share: For i ∈ P, we denote the aggregate seat share of the i-th party by s_i := (∑_k ∈ D_i s_i^k) / c_i.
unit simplex: For n ∈ℕ_+, we denote the (n-1)-dimensional unit simplex by Δ_n := {𝐱∈ℝ_+^n : ‖𝐱‖_1 = 1}.
k-th largest / smallest coordinate: For n ∈ℕ_+, 𝐱∈ℝ^n, and k ∈{1, …, n}, we denote the k-th largest coordinate of 𝐱 by x_k^↓, and the k-th smallest coordinate of 𝐱 by x_k^↑.
For any n ∈ℕ_+, a Uniform distribution on the unit simplex Δ_n, denoted by Unif(Δ_n), is the unique absolutely continuous probability distribution supported on Δ_n whose density with respect to the Lebesgue measure on Δ_n is constant.
For any n ∈ℕ_+, a Dirichlet distribution on the unit simplex Δ_n, denoted by Dir(α), is an absolutely continuous probability distribution supported on Δ_n and parametrized by a vector α∈ℝ_+^n whose density with respect to the Lebesgue measure on Δ_n is given by:
f(𝐱) = (1/B(α)) ∏_i=1^n x_i^α_i - 1.
Note that Dir({1}^n) = Unif(Δ_n) for any n ∈ℕ_+.
§ SEATS-VOTES FUNCTIONS
Seats-votes curves are one of the fundamental concepts in the traditional approach to the quantitative study of electoral systems. A seats-votes curve is a function that maps an aggregate vote share to an aggregate seat share. Of course, it is easy to see that in reality, even in two-party elections, a seats-votes curve is not actually real-valued, but probability measure-valued, since the seat share depends on what we call `electoral geography' – the distribution of district-level vote shares. We call this measure-valued function a seats-votes function, while reserving the name seats-votes curve for a function that maps a vote share to the expectation of its image under the seats-votes function. Note that both gerrymandering and electoral bias manifest themselves by deviation of the seats-votes function applicable to one or more parties from the `model' seats-votes function (however the latter is determined) caused by anomalies of the electoral geography.
In multi-party elections there is another fundamental problem with seats-votes functions: the distribution of seats depends not only on the vote share and the electoral geography, but also on the competition patterns: the number of competitors and the distribution of their votes (or, to be more precise, the distribution of the first order statistic of their votes) <cit.>. If we were to fit a single seats-votes function for all parties without regard to competition patterns, the result would involve another source of randomness besides districting effects, namely the variation in such patterns. Hence, we would be unable to distinguish between a seats-votes function that deviates from the model because of electoral geography and a seats-votes function that also deviates from the model, but because of unusual competition patterns. Thus, we need to account for this effect by considering a seats-votes-competition-pattern function rather than the usual seats-votes function.
Consider seats-votes curves in multi-party elections. If we assume that they are anonymous (i.e., identical for all parties), non-decreasing, and surjective, it turns out perfect proportionality (s = v) is the only seats-votes curve that does not depend on the distribution of competitors' votes <cit.>.
It would be convenient if we were able to describe the competition pattern by a single numerical parameter. Our objective here is to find a measure of the `difficulty' of winning a seat given the number of competitors and the distribution of their vote shares (renormalized so as to sum to 1). A natural choice would be the seat threshold:
Fix i = 1, …, n, and assume that the renormalized vote shares of the competitors of the i-th candidate equal some random variable 𝐙 distributed according to some probability measure on Δ_n,-i. The seat threshold of the i-th candidate is the value t_i ∈ [1/n, 1/2] such that Pr(S_i = 1 | v_i) > 1/2 for every v_i > t_i, i.e., such that the probability of the i-th candidate winning a seat with vote share v_i exceeds 1/2.
It is easy to see that Pr(S_i = 1 | v) = F_Z_1^↓(v / (1-v)), where F_Z_1^↓ is the cumulative distribution function of the renormalized vote share of the largest competitor, since the i-th candidate wins if and only if Z_1^↓ < v / (1-v).
Our next objective is to approximate the seat threshold in cases where we do not have any knowledge of the distribution of the competitors' vote shares, but only a single realization thereof. We therefore need a statistic that is both a stable estimator of the distribution parameters and highly correlated with the value of the largest order statistic. We posit that the best candidates for such statistics are measures of vote diversity among competitors, and use a Monte Carlo simulation to test a number of such measures commonly considered in social sciences.
Let α∼Gamma(1,1), and let 𝐕∼Dir({α}^n), where n = 3, …, 12. On a sample of 2^16 independent realizations of 𝐕 we have calculated Spearman's rank correlation coefficients <cit.> of the following variables:
* α,
* 𝐕_1^↓, i.e., maximum of the coordinates,
* 𝐕_1^↑, i.e., minimum of the coordinates,
* median coordinate V_med,
* Shannon entropy <cit.>, H(𝐕) := -∑_i=1^n V_i log V_i,
* Herfindahl–Hirschman–Simpson index <cit.>, ∑_i=1^n V_i^2,
* Gini coefficient of the coordinates,
* Bhattacharyya angle <cit.> between V and the barycenter of the simplex, arccos∑_i=1^n √(V_i/n).
The results for n = 3, 6, 12 are given in, respectively, <ref>, <ref>, and <ref>.
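As a rough illustration of this comparison, the following sketch (our own, with arbitrary variable names and a smaller sample size than in the text; numpy and scipy are assumed to be available) draws symmetric Dirichlet vectors with a Gamma-distributed concentration parameter and reports the Spearman correlations of the Herfindahl–Hirschman–Simpson index with α and with the largest coordinate:

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n, n_samples = 6, 2**14                     # smaller sample than the 2^16 used above
alpha = rng.gamma(shape=1.0, scale=1.0, size=n_samples)      # alpha ~ Gamma(1,1)
alpha = np.maximum(alpha, 1e-2)             # floor to avoid numerical underflow
V = np.array([rng.dirichlet(np.full(n, a)) for a in alpha])  # V ~ Dir({alpha}^n)

v_max = V.max(axis=1)                       # largest coordinate
hhs = (V ** 2).sum(axis=1)                  # Herfindahl-Hirschman-Simpson index

print("rho(HHS, alpha) =", spearmanr(hhs, alpha).correlation)
print("rho(HHS, V_max) =", spearmanr(hhs, v_max).correlation)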
We conclude that the Herfindahl–Hirschman–Simpson index is consistently the one that best correlates with the maximal coordinate while also being a reasonably good estimate of the distribution parameters. Accordingly, in our procedure for estimating the seat threshold we use its monotonic transform, i.e., the effective number of competitors <cit.>:
The effective number of competitors of the i-th candidate, i = 1, …, n, is given by:
φ_i := (∑_j=1, j ≠ i^n z_j^2)^-1,
where 𝐳∈Δ_n,-i is the vector of the vote shares of that candidate's competitors multiplied by the constant in ℝ_+ such that ∑_j=1, j ≠ i^n z_j = 1.
We shall see that the vote share, the number of competitors, and the effective number of competitors enable us to classify candidates as winning and losing with quite a small classification error (see <ref> and <ref>).
Clearly, with three candidates, i.e., two competitors, the classifier is exact (modulo ties), as the effective number of competitors uniquely determines the share of the larger one in their aggregate vote share:
max{z_j_1, z_j_2} = 1/2(1 + √(2/φ_i - 1)).
Then the decision boundary (i.e., the curve separating the space of candidates into winning and losing subspaces) is the set of points satisfying:
φ_i = 1 - 2 v_i + v_i^2/1 - 4 v_i + 5 v_i^2.
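For completeness, a minimal sketch of this exact three-candidate classifier (our own illustration, not part of the original implementation):

import math

def max_competitor_share(phi: float) -> float:
    # With two competitors, phi determines the larger renormalized share:
    # z_max = (1 + sqrt(2/phi - 1)) / 2, with phi in (1, 2].
    return 0.5 * (1.0 + math.sqrt(2.0 / phi - 1.0))

def wins_three_candidate_race(v: float, phi: float) -> bool:
    # The candidate with vote share v wins iff v exceeds the largest
    # competitor's actual share (1 - v) * z_max.
    return v > (1.0 - v) * max_competitor_share(phi)

# Example: v = 0.40 against competitors with raw shares 0.35 and 0.25.
z = [0.35, 0.25]
phi = 1.0 / sum((s / sum(z)) ** 2 for s in z)
print(round(phi, 3), wins_three_candidate_race(0.40, phi))   # 0.40 > 0.35, so True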
[Decision Boundary for n > 3]
For n > 3, the decision boundary is determined on the basis of the data using a support vector machine-based classifier <cit.> with a third-order polynomial kernel, and then smoothed by approximating it with a strictly decreasing B-spline of degree 3, with boundary nodes at 1/n and 1/2 and interior nodes fitted using cross-validation.
We refer to the value of the decision boundary for the candidate of the i-th party in the k-th district, ascertained for the empirically given number and effective number of competitors, as the effective seat threshold of the i-th party in the k-th district, and denote it by t_i^k.
An effective seat threshold classifier is a function 𝔰 : [0, 1] ×ℕ× [1, ∞) →𝔹 that maps a triple (v, n, φ) to 0 if the probability of winning a seat with vote share v, n-1 competitors, and φ effective competitors is below 1/2, and to 1 otherwise.
Mean effective seat threshold, t_i := ⟨ t_i^k ⟩_k ∈ D_i, where D_i is the set of districts contested by the i-th party, is our measure of the difficulty of winning a seat.
§ NONPARAMETRIC SEATS-VOTES FUNCTION ESTIMATES
As already noted in Sec. 1, one possible approach to identifying the model seats-votes function is to construct one theoretically. We might start with some probabilistic model of intra-district vote distribution, then use it to calculate the seat threshold, and finally use a probabilistic model of inter-district vote distribution to calculate the probability of district vote share exceeding the seat threshold. Finally, either by convolving binomial distributions (for small values of c) or by the central limit theorem (for large values of c) we obtain the expected seat share, as well as the distribution around the mean.
One unavoidable weakness of any theoretical seats-votes curve lies in the fact that a systematic deviation therefrom might just as easily arise from gerrymandering or any other electoral bias as from incongruities between the theoretical distributional assumptions and the empirical reality. To avoid this issue we derive our model seats-votes function solely from the reference election dataset with minimal theoretical assumptions (in particular, we assume that the seat shares are distributed according to some absolutely continuous probability measure supported on [0, 1]) by using the kernel regression method <cit.> to obtain an estimate of the conditional cumulative distribution function of the seat share for a given vote share. Its general idea is to estimate the conditional expectation of a random variable at a point in the condition space by averaging the values of its realizations at neighboring points with distance-decreasing weights. Because the method can be sensitive to the choice of its hyperparameters, we discuss those choices in some detail.
[Locally-Constant Kernel Regression]
Let S ∈ℝ be a random response variable, and let 𝐗∈𝔉, where 𝔉 is some linear feature space and D := dim𝔉, be a vector of predictor variables. Assume we have a vector of N realizations of S, 𝐬, and an N × D matrix of realizations of 𝐗, 𝐱. We denote its j-th row by 𝐱_j. Then the locally-constant kernel regression estimate of the conditional expectation of S given a vector of predictors 𝐱_0∈𝔉 is given by:
𝔼(S | 𝐱_0) = ∑_j=1^N s_j K((𝐱_j - 𝐱_0) 𝐡_𝐱_0, 𝐱_j)/∑_j=1^N K((𝐱_j - 𝐱_0) 𝐡_𝐱_0, 𝐱_j),
where N is the number of observations (in our case – sum of the number of parties over all elections in our set of elections), K is a second-order kernel, and 𝐡_𝐱_0, 𝐱_j∈ℝ_+^D is a bandwidth parameter for the pair (𝐱_0, 𝐱_j). In other words, we average the values of s over all parties with weights determined by the value of the kernel at (𝐱_j - 𝐱_0) 𝐡_𝐱_0, 𝐱_j.
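As an illustration, a minimal locally-constant (Nadaraya-Watson) estimator with a Gaussian kernel might look as follows; for simplicity it uses a fixed bandwidth vector rather than the adaptive nearest-neighbor bandwidth discussed below, and the synthetic data merely stand in for the election datasets:

import numpy as np

def nadaraya_watson(x0, x, s, h):
    # Locally-constant kernel regression estimate of E[S | x0].
    # x: (N, D) predictors, s: (N,) responses, x0: (D,) query, h: (D,) bandwidths.
    u = (x - x0) / h                            # componentwise scaled differences
    w = np.exp(-0.5 * (u ** 2).sum(axis=1))     # Gaussian kernel weights
    return float((w * s).sum() / w.sum())

# Toy usage: predictors are (vote share, mean effective seat threshold).
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=(500, 2))
s = np.clip(x[:, 0] + 0.1 * rng.normal(size=500), 0, 1)
print(nadaraya_watson(np.array([0.45, 0.30]), x, s, h=np.array([0.05, 0.05])))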
Choice of Kernel.
A kernel of order k, where k ∈ℕ and k ≥ 2, is any function ℝ^D→ [0, ∞) of class C^k satisfying:
* ∫_ℝ^D K(𝐱) d𝐱 = 1,
* ∫_ℝ^D𝐱 K(𝐱) d𝐱 = 0,
* ∫_ℝ^D𝐱^k K(𝐱) d𝐱∈ℝ^D.
Prima facie it would seem that the appropriate choice of the kernel is fundamental in fitting a kernel regression model. However, this is not actually the case: most of the commonly used kernels, including the Gaussian, the Epanechnikov (parabolic), and even the uniform kernel, yield similar estimation errors. See, e.g., <cit.> and <cit.>. In our case, we choose the Gaussian kernel, i.e., the density of the standard D-variate normal distribution.
Choice of Bandwidth.
Unlike the choice of kernel, the choice of bandwidth is of key importance in kernel regression (see generally ). Initial kernel regression models treated the bandwidth parameter as scalar and constant over all observations in the dataset <cit.>, and this is still the dominant approach <cit.>. However, it leads to a significant bias if the density of any feature is highly nonuniform, and for multidimensional feature spaces it requires prior standardization of the feature scales.
The two most popular alternative approaches are the generalized nearest-neighbor bandwidth <cit.> and the adaptive nearest-neighbor bandwidth <cit.>:
For 𝐱_0, 𝐱_j ∈𝔉, the generalized nearest-neighbor bandwidth is given by:
𝐡_𝐱_0, 𝐱_j = h_0 d(𝐱_0, 𝐱_N_k(𝐱_0)),
where d: 𝔉×𝔉→ℝ_≥ 0 is a metric, N_k(𝐱) is the index of the k-th nearest neighbor of 𝐱 under d, and h_0 ∈ℝ_+ is a scaling constant.
For 𝐱_0, 𝐱_j ∈𝔉, the adaptive nearest-neighbor bandwidth is given by:
𝐡_𝐱_0, 𝐱_j = h_0 d(𝐱_0, 𝐱_N_k(𝐱_j)),
where d: 𝔉×𝔉→ℝ_≥ 0 is a metric, N_k(𝐱) is the index of the k-th nearest neighbor of 𝐱 under d, and h_0 ∈ℝ_+ is a scaling constant.
Generalized NN is more computationally efficient, since the nearest-neighbor search needs to be performed only once per point of estimation, while in adaptive NN it has to be performed for every realization of the predictor variables. On the other hand, adaptive NN yields smoother estimators – generalized NN can result in non-differentiable peaks of the regression function. Motivated by the latter consideration, we use the adaptive NN bandwidth, albeit in a modified multivariate version which enables us to have different bandwidths for different dimensions of the feature space:
For 𝐱_0, 𝐱_j ∈𝔉, the i-th coordinate of the multivariate adaptive nearest-neighbor bandwidth, i = 1, …, D, is given by:
(h_𝐱_0, 𝐱_j)_i = h_0,i |x_0,i - x_N_k^i(𝐱_j),i|,
where N_k^i(𝐱) is the index of the k-th nearest neighbor of 𝐱 along the i-th dimension of the feature space under the absolute difference metric, and 𝐡_0 ∈ℝ_+^D is a scaling vector.
The choice of a nearest-neighbor bandwidth requires us to choose additional hyperparameters of the model: the scaling vector 𝐡_0 and the nearest-neighbor parameter k. This is usually done by leave-one-out cross-validation <cit.> with the objective function defined either as an L_1 or L_2 distance between the predicted and actual value vectors <cit.>, or as the Kullback-Leibler divergence between the former and the latter. We use the latter variant together with an optimization algorithm by <cit.> which penalizes high-variance bandwidths (with variance measured as the trace of the parameter matrix) in a manner similar to the well-known Akaike information criterion <cit.>.
§ MEASURING DEVIATION FROM THE SEATS-VOTES FUNCTION
By this point, we have estimated a party's expected seat share given its aggregate vote share and the competition patterns in the districts it contests. But what we actually need is a measure of how much the actual seat share deviates from that expectation. A natural choice would be the difference of the two. It is, however, inappropriate for two reasons:
First, seat shares only assume values within a bounded interval [0,1]. Thus, in particular, if the expected seat share is different from 1/2, the maximum deviations upwards and downwards differ.
Second, there is no reason to expect seat share distributions to be even approximately symmetric around the mean, so a deviation of ε upwards might be significantly more or less probable than an identical deviation downwards.
We therefore use another measure of deviation: the probability that a seat share deviating from the median more than the empirical seat share could have occurred randomly. Note how this quantity is analogous in definition to the p-value used in statistical hypothesis testing:
Electoral Bias p-Value
Let s_i be an empirical seat share and let μ be the conditional distribution of the aggregate seat share given the empirical aggregate vote share and the empirical mean effective seat threshold, i.e., the value of the seats-votes function. Then the electoral bias p-value is given by:
π_i = min{μ((0, s_i)), μ((s_i, 1))} = min{F(s_i), 1 - F(s_i)},
where F is the cumulative distribution function of μ.
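Once an estimate of the conditional distribution of the seat share is available (the kernel-based construction is discussed next), computing the index is straightforward. A sketch (ours), using the empirical distribution of seat shares of comparable parties as a crude stand-in for the kernel estimate:

import numpy as np

def electoral_bias_p_value(s_obs, seat_share_samples):
    # min{F(s), 1 - F(s)} with F replaced by an empirical CDF built from
    # seat shares of comparable parties (a stand-in for the kernel estimator).
    samples = np.sort(np.asarray(seat_share_samples, dtype=float))
    F = np.searchsorted(samples, s_obs, side="right") / samples.size
    return min(F, 1.0 - F)

# A party holding 80% of seats while comparable parties cluster around 50%:
print(electoral_bias_p_value(0.80, np.random.default_rng(2).beta(8, 8, 2000)))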
We thus need not a regression estimator, but a conditional cumulative distribution function estimator. One approach would be to estimate the conditional density of S_i <cit.> and integrate it numerically. This method, however, is prone to potential numerical errors. We therefore use another approach, relying on the fact that a conditional cumulative distribution function is defined in terms of the conditional expectation, and therefore the problem of estimating it can be treated as a special case of the kernel regression problem.
There remains one final problem: when comparing parties contesting different number of districts, we need an adjustment for the fact that the probability of getting an extreme value depends on that number (decreasing exponentially as the number of contested districts increases). In particular, except for very rare electoral ties, single-district parties always obtain extreme results. Thus, if for a party contesting k districts we include parties contesting fewer districts in the training set, we overestimate the probability of obtaining an extreme seat share. To avoid that problem, the kernel model for parties with exactly k districts, k ∈ℕ, is trained only on parties with as many or fewer contested districts. If the distribution of the number of contested districts has a tail, it is optimal to adopt a cutoff point k_0 such that for the set P_c ≥ k_0 of parties contesting k_0 or more districts each party is compared with a model trained on all parties in P_c ≥ k_0.
§ OUTLIER-BASED APPROACH
Our second approach to seats-votes-based gerrymandering detection avoids the problem of modeling the seats-votes function entirely by recognizing that gerrymandering detection is less about the central tendency of the seats-votes relation than about the outliers. And while identification of outliers by measuring their deviation from the model distribution is certainly one of the standard approaches, so are nonparametric methods of outlier identification <cit.>. We develop a method that modifies standard depth-based <cit.> outlier identification algorithm to account for the obvious asymmetry in our data structure: we are only interested in outliers that deviate from the data in the direction orthogonal to the s=v diagonal, i.e., exhibit large seat share under weak electoral performance or vice versa.
§.§ Calculating Normalized Vote Shares
Clearly, being an outlier in the seats-votes feature space is by itself insufficient to support any credible claim of electoral bias. A party can obtain unusually high seat share under an unusually low vote share because it faced an exceptionally large number of competitors whose votes were particularly evenly distributed. Similarly, it can obtain an extraordinarily low seat share under a rather high vote share if it faced concentrated opposition. One way of accounting for that effect is to consider a higher-dimensional feature space, but this is susceptible to the `curse of dimensionality' <cit.>: the number of points remains constant, but a linear increase in dimension of the feature space yields an exponential increase in the surface area of the manifold corresponding to the `cloud' of points and a likewise exponential increase in the number of outliers. We instead choose another way – transforming vote shares into another quantity, one so normalized that its relationship with seat shares would not be affected by the competition structure, operationalized through the effective seat threshold. In this subsection, we introduce such a quantity.
Let t_k^* denote the effective seat threshold of the winning candidate in the k-th district and let μ denote the distribution of such thresholds.
Let (I_1, …, I_m) be a partition of (0, 1/2) into m intervals of equal measure μ, i.e., such that μ(I_i) = μ(I_j) for every i, j ∈{1, …, m}. As the distribution of effective seat thresholds usually has an atom at 1/2, corresponding to two-candidate districts, we also define I_m+1 := {1/2}.
We fix m := ⌊√(|{k ∈ D: t_k^* < 1/2}|)⌋, but this is a completely arbitrary choice.
Let 𝒟_i := {k ∈ D : t_k^*∈ I_i}, i = 1, …, m+1, be the set of such districts that the effective seat threshold of the winning candidate is contained in I_i, and let F_D_i be the empirical cumulative distribution function of the vote share of the winning candidate in districts in D_i, except that each observation is weighted by the proportion of districts in which that candidate's party fielded candidates.
Normalized District Vote Share.
Fix any party j ∈ P and any district k ∈ D_j. The normalized district vote share of party j in the k-th district is given by:
λ_j^k := F_D_i(v_j),
where i = 1, …, m+1 is such that t_j^k ∈ I_i and v_j is the aggregate vote share of the j-th party.
Normalized Vote Share
The normalized vote share of party j is given by:
w_j := ⟨λ_j^k⟩_k ∈ D_j.
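A sketch of this normalization (the notation and function signature are ours; it assumes each threshold bin is non-empty):

import numpy as np

def normalized_vote_share(v_j, thresholds_j, winner_v, winner_t, winner_w, m):
    # v_j          : aggregate vote share of party j
    # thresholds_j : effective seat thresholds t_j^k in the districts j contests
    # winner_v     : aggregate vote shares of the district winners' parties
    # winner_t     : effective seat thresholds of the district winners
    # winner_w     : weights (share of districts contested by the winner's party)
    # m            : number of equal-measure bins for thresholds below 1/2
    winner_v, winner_t, winner_w = map(np.asarray, (winner_v, winner_t, winner_w))
    edges = np.quantile(winner_t[winner_t < 0.5], np.linspace(0, 1, m + 1))
    lam = []
    for t in thresholds_j:
        if t >= 0.5:                          # the atom I_{m+1} = {1/2}
            mask = winner_t >= 0.5
        else:
            i = int(np.clip(np.searchsorted(edges, t, side="right") - 1, 0, m - 1))
            mask = (winner_t >= edges[i]) & (winner_t <= edges[i + 1]) & (winner_t < 0.5)
        v, w = winner_v[mask], winner_w[mask]
        lam.append(np.sum(w * (v <= v_j)) / np.sum(w))   # weighted empirical CDF at v_j
    return float(np.mean(lam))                # w_j = mean of lambda_j^k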
Figure: Vote share v. normalized vote share as a function of I_i
The intuition underlying the foregoing definitions is as follows: We know that the distribution of aggregate vote shares depends on district-level effective seat thresholds. We would like to obtain a quantity w_j that is a function of the vote share v_j, but independent of such thresholds. If the effective seat thresholds for all party candidates were equal (let us denote them by t_j), a natural way to define w_j would be to take an image of v_j under the conditional cumulative distribution function of the winner's aggregate vote share given the effective seat threshold t_j. We approximate that distribution by the empirical distribution of the winner's aggregate vote share in districts where their effective seat thresholds are approximately equal, i.e., they are contained in the same interval I_j as t_j.
Of course, in practice it is rare for the effective seat threshold to be equal across districts. However, we can run the procedure described above for each district separately, only using t_j^k instead of t_j, and then average over districts.
To test whether the normalized vote share satisfies our postulate of independence from effective seat thresholds, we fitted a kernel regression curve s(λ) and then tested whether the residuals s_i - s(λ_i) depend on the mean effective seat threshold. The correlation coefficient between the two equals -0.197, as compared with -0.427 for v_i - s(v_i). Thus, while the normalized vote share does not fully satisfy our postulates, it goes further in satisfying them than the non-normalized vote share.
§.§ Identification of Outliers
As noted above, we are only interested in outliers that deviate from the data in the direction orthogonal to the s=v diagonal. A natural approach would be to consider the following index:
Outlier Score – Preliminary Version
Let 𝒟 denote the the union of all parties over the training set of elections. For any i ∈𝒟:
* let u_i^+ := |{j ∈𝒟 : s_j ≥ s_i, λ_j ≤λ_i, i ≠ j}|, i.e., the number of points (parties) other than i that have obtained at least as large a seat share with at most as large a normalized vote share;
* let u_i^- := |{j ∈𝒟 : s_j ≤ s_i, λ_j ≥λ_i, i ≠ j}|, i.e., the number of points (parties) other than i that have obtained at most as large a seat share with at least as large a normalized vote share;
* let u_i := min{u_i^+, u_i^-} be the outlier score of the i-th point.
We would expect low values of u_i^+ to be associated with the beneficiaries of electoral bias (if not gerrymandering), who obtain unexpectedly large seat shares given their vote shares. Conversely, low values of u_i^- should be associated with the victims of bias. Taking the minimum of those two indices, we capture both groups.
Figure: outlier score 1
The values of the above score are plotted in Figure ... It illustrates two weaknesses of <ref>. First, since seats are discrete, seat shares can only assume a small number of possible values. This effect is particularly pronounced for committees that field candidates only in a few districts – for a committee contesting only a single district s_i can only attain either 0 or 1; for a committee contesting two districts – also 1/2; for a committee contesting three districts – 0, 1/3, 2/3, and 1; etc. In Figure ... this effect translates to clearly aligned horizontal `lines' of points at values of s equal to a fraction with a small-integer denominator. At the same time, the smaller the number of districts, the greater the probability of an anomalously low or high seat share (since instead of a conjunction of multiple rare random events a conjunction of only one or two is sufficient). Hence, the conditional distribution of Λ_i given c_i = 2 and S_i = 1/2 naturally has a higher variance than the distribution of the same variable given c_i = 15 and S_i = 7/15, wherefore points for s = 0, 1/3, 1/2, 2/3, 1, etc., are more dispersed than others. This affects not only parties with a small number of candidates, but also the larger ones that lie closely below or above such linear sets. Our proposed solution for this problem is to weight all points by the square of the proportion of c_i to the whole number of districts in the respective jurisdiction.
The second problem is that points with λ_i close to 0 or 1 are overrepresented among outliers. They are in some sense anomalous, but their `outlierness' is a matter of unusually low or high normalized vote share rather than unusually low or high seat share. To avoid this effect, we introduce a correction for a distance from λ_i = 0 or λ_i = 1, thus obtaining the final version of our coefficient:
Outlier Score – Final Version
Let 𝒟 denote the the union of all parties over all reference elections. For any i ∈𝒟 let:
U_i^+ := (1/λ_i) ∑_j ∈𝒟: s_j ≥ s_i, λ_j ≤λ_i (c_j/g_j)^2,
U_i^- := (1/(1 - λ_i)) ∑_j ∈𝒟: s_j ≤ s_i, λ_j ≥λ_i (c_j/g_j)^2,
and let U_i := min{U_i^+, U_i^-} be the outlier score of the i-th point.
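A vectorized sketch of the final score (ours; as in the preliminary version, the party itself is excluded from the sums):

import numpy as np

def outlier_scores(s, lam, c, g):
    # s, lam : seat shares and normalized vote shares of the reference parties
    # c      : number of districts contested by each party
    # g      : total number of districts in the respective jurisdiction
    s, lam = np.asarray(s, float), np.asarray(lam, float)
    weight = (np.asarray(c, float) / np.asarray(g, float)) ** 2
    U_plus, U_minus = np.empty(s.size), np.empty(s.size)
    for i in range(s.size):
        dominated = (s >= s[i]) & (lam <= lam[i])
        dominating = (s <= s[i]) & (lam >= lam[i])
        dominated[i] = dominating[i] = False
        U_plus[i] = weight[dominated].sum() / lam[i]
        U_minus[i] = weight[dominating].sum() / (1.0 - lam[i])
    return U_plus, U_minus, np.minimum(U_plus, U_minus)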
Figure: outliers
§ AGGREGATION
The final step is the aggregation of party-level indices into a single election-level index of electoral bias. We would like our aggregation function to: (1) assign greater weight to major parties than to minor parties; (2) be sensitive to very low p-values and less sensitive to even substantial differences in large p-values; and (3) be comparable among elections, i.e., independent of the number of parties and districts. An easy example of such a function is the weighted geometric mean given by:
π := exp(∑_i=1^n w_i logπ_i ),
where w_i is the number of votes cast for the i-th party divided by the number of all valid votes cast in the election (it differs from the aggregate vote share in that the denominator includes votes cast in districts not contested by the i-th party).
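For instance (with our own toy numbers, purely for illustration):

import numpy as np

def election_bias_index(p_values, votes):
    # Weighted geometric mean of party-level p-values; weights are each
    # party's share of all valid votes cast in the election.
    p = np.asarray(p_values, dtype=float)
    w = np.asarray(votes, dtype=float) / np.sum(votes)
    return float(np.exp(np.sum(w * np.log(p))))

# Two major parties with unremarkable p-values and one heavily penalized party:
print(election_bias_index([0.35, 0.30, 0.004], [42000, 40000, 18000]))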
§ EXPERIMENTAL TEST
Before applying our proposed method to empirical data, we wanted to be sure that it really works – both in terms of high precision (low number of false positives) and of high recall (low number of false negatives). But one fundamental problem in testing any method for the detection of gerrymandering, especially outside the familiar two-party pattern, lies in the fact that we have very few certain instances thereof. Therefore we first tested our method on a set of artificial (i.e., simulated) elections, consisting both of `fair' districting plans, drawn at random with a distribution intended to approximate the uniform distribution on the set of all admissible plans, and of `unfair' plans generated algorithmically. In order to ensure that voting patterns matched real-life elections, we used actual precinct-level data from the 2014 municipal election in our home city of Kraków. It was a multi-party election, but with two leading parties that were nearly tied in terms of votes. That allowed us to generate `gerrymandered' plans for both of them, improving the test quality.
§.§ Experimental Setup
Our baseline dataset consisted of a neighborhood graph of 452 electoral precincts, each of which was assigned three parameters: precinct population, w_k ∈ℕ, varying between 398 and 2926 (but with 90% of precincts taking values between 780 and 2420); party p's vote share p_k, varying between 9.1% and 66.7% (but with 90% of precincts taking values between 20.8% and 48.5%); and party q's vote share q_k, varying between 11.5% and 56.7% (but with 90% of precincts taking values between 21.0% and 44.1%). On the aggregate, party p won the election with 33.15% of the vote, but party q was a close runner-up with 32.64% of the vote. There were seven third parties, but none of them had any chance of winning any seats (in particular, none came first in any precinct). In drawing up plans, we fixed the number of districts at 43 (the real-life number of seats in the municipal council) and the permissible population deviation at 25%.
As our training set, we used dataset 𝒟_14, described in the following section.
§.§ Algorithm for Generating Fair Plans
Our sample of fair districting plans consisted of 128 partitions of the precinct graph generated using the Markov Chain Monte Carlo algorithm proposed by <cit.>. It used the Swendsen-Wang algorithm <cit.>, as modified by <cit.>, to randomly walk the graph of solutions. In each iteration, we randomly `disable' some of the edges within each district of the starting districting plan (independently and with a fixed probability); identify connected components adjoining district boundaries; randomly choose R such components (where R is chosen from some fixed discrete distribution on ℝ_+) in such a manner that they do not adjoin one another; identify admissible exchanges; and randomly accept or reject each such exchange using the Metropolis-Hastings criterion. <cit.> have shown that if Pr(R = 0) > 0, the algorithm is ergodic, and <cit.> – that in such a case its stationary distribution is the uniform distribution on the set of admissible districtings. In practice this algorithm has a better rate of convergence than classical Metropolis-Hastings, but obtaining satisfactory performance still required additional heuristic optimizations like simulated annealing <cit.>.
§.§ Algorithm for Generating Unfair Plans
To generate unfair districting plans we used an algorithm by Szufa et al. <cit.> based on integer linear programming. The essential idea is to consider all feasible districts (connected components of the precinct graph with aggregate population within the admissible district population range), K_1, …, K_d, and to solve the following optimization problem for party x = p, q:
maximize over ξ∈𝔹^d the objective
∑_j = 1^d ξ_j (∑_k ∈ K_j x_k w_k - ∑_k ∈ K_j y_k w_k)
subject to
∑_j=1^d ξ_j = s,
∑_j=1^d ξ_j 1_K_j(k) = 1 for every precinct k = 1, …, c,
where y = q if x = p and y = p if x = q.
In other words, we choose such a subset of feasible districts that maximizes the seat share of party x, subject to the constraints that the number of chosen districts equals the number of seats s and every precinct is assigned to exactly one chosen district. Since this is a classical ILP problem, it can be solved using a standard branch and bound algorithm.
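The following toy sketch makes the objective and the exact-cover constraint concrete; it is a brute-force stand-in for the ILP solver, usable only on tiny instances, and not the implementation used in the paper:

from itertools import combinations

def best_partition(districts, margins, n_precincts, n_seats):
    # districts : list of frozensets of precinct indices (feasible districts K_j)
    # margins   : per-district vote margins of the favored party over its rival
    all_precincts = frozenset(range(n_precincts))
    best, best_value = None, float("-inf")
    for combo in combinations(range(len(districts)), n_seats):
        chosen = [districts[j] for j in combo]
        if sum(len(d) for d in chosen) != n_precincts:
            continue
        if frozenset().union(*chosen) != all_precincts:
            continue                      # not an exact cover of the precincts
        value = sum(margins[j] for j in combo)
        if value > best_value:
            best, best_value = combo, value
    return best, best_value

# Toy instance: 4 precincts, 2 seats, three feasible districts.
districts = [frozenset({0, 1}), frozenset({2, 3}), frozenset({1, 2})]
margins = [120, -40, 80]
print(best_partition(districts, margins, n_precincts=4, n_seats=2))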
In practice, it is infeasible to enumerate all possible districts with hundreds of precincts. We therefore first artificially combine leaf nodes, small precincts, and similar precincts until the number of precincts is reduced below 200. Only then do we run the ILP algorithm and recover the full solution by replacing combined precincts with their original elements. Since the combining process can lead to suboptimality, we then run a local neighborhood search algorithm to find a local maximum.
The process as described finds an optimal ex post gerrymander: in our case, we get a 36-seat districting plan for party p and a 37-seat plan for party q. However, we can obtain less extreme instances of gerrymandering by including in our algorithm an additional constraint on the required margin of victory.
§.§ Results
Our sample of fair districting plans yielded a distribution of electoral results varying from 26 to 17 seats for party p, with the median at 21. Those results corresponded to aggregate p-values between .18 and .37. Accordingly, none of the fair plans was classified as gerrymandered at the significance level .05, giving us perfect precision 1.
Gerrymandered plans varied from a 37-seat to a 29-seat plan for party q, corresponding to aggregate p-values from .006 to .064, and from a 36-seat to a 29-seat plan for party p, corresponding to aggregate p-values from .016 to .096. In total, 24 out of 28 gerrymandered plans were classified as such at the significance level .05, yielding recall .857. Note, however, that all plans that we failed to recognize as gerrymandered were highly inefficient ones in terms of the seat benefit to the party doing gerrymandering.
§ EMPIRICAL TEST
We have tested our method on data from four training sets of elections:
* 𝒟_14, 2014 Polish municipal elections (this has been the case that originally motivated us to develop the method described in this paper) (2412 elections, 15,848 parties, 37,842 districts, 131,799 candidates),
* 𝒟_18, 2018 Polish municipal elections (2145 elections, 10,302 parties, 32,173 districts, 86,479 candidates),
* 𝒟_U, U.S. House of Representatives elections from the 1900-2016 period, where the election within each state is treated as a single election (2848 elections, 13,188 parties, 23,390 districts, 71,314 candidates),
* 𝒟_N, national legislative elections from 15 countries (206 elections, 53,721 parties, 52,321 districts, 237,331 candidates).
We do not need to track party identity beyond any individual election, wherefore, for instance, the Republican Party running in, say, the 1994 House election in Pennsylvania and in the 1994 House election in New York (or in the 1996 House election in Pennsylvania) counts as different parties. Hence the large number of parties in the U.S. election dataset.
The following countries were included in the 𝒟_N dataset:
* United Kingdom – all general elections from 1832 (47 cases); multi-member districts and Speakers running for reelection were dropped from the dataset;
* Canada – all general elections from 1867 (42 cases);
* Denmark – all general elections held under the FPTP system, i.e., those from 1849 to 1915 (32 cases);
* New Zealand – all general elections from 1946 until introduction of MMP system in 1994 (17 cases);
* India – all general elections from 1962 until 2014 (14 cases);
* Malaysia – all general elections from 1959 until 2018 (12 cases);
* Philippines – all general elections from 1987 until 1998, as well as single-member-district results from elections held under parallel voting from 1998 to 2013 (9 cases);
* Japan – single-member-district results from elections held under parallel voting from 1996 to 2014 (7 cases);
* Ghana – 2000-2016 elections (4 cases);
* South Africa – 1984-1989 elections (4 cases);
* Poland – upper house elections from 2011 (3 cases);
* Taiwan – single-member-district results from elections held under parallel voting (3 cases);
* Nigeria – 2003 and 2011 elections;
* Kenya – 2002 and 2007 elections.
Most of the national election data has been obtained from the Constituency-Level Election Archive <cit.>.
Those 15 countries were chosen as major countries using the FPTP system that have been categorized as at least partly free under the Freedom House Freedom in the World survey <cit.>. We have chosen elections that, even if not always democratic by modern standards, were at least minimally competitive (some opposition parties were able to field candidates and the results were generally reliable). We note that the countries listed include cases with strong regional parties (UK, Canada) or a very large number of small parties and independent candidates (India), as well as cases with party systems developing or otherwise fluid (19th century elections, developing country elections). They also include instances in which allegations of gerrymandering have already appeared in the literature <cit.>. For those reasons, we believe they constitute a good testing set.
Two Polish local election datasets were included as examples of extremely irregular competition patterns, especially the 2014 election, which was the first one held under FPTP. By the 2018 election local party systems had somewhat settled, as evidenced by the smaller average number and effective number of candidates. The U.S. elections, on the other hand, were included to test whether the proposed method works well with regular two-party elections.
§.§ Results by Training Set
In this subsection, we report the raw results for our training sets and test whether the method agrees with classical measures of gerrymandering for two-party U.S. elections.
Figure: nonparametric S-V density
The incidence of electoral bias in the four datasets under consideration was as follows:
Finding an appropriate baseline for comparison, however, is quite difficult. If we were to assume that party p-values behave like statistical p-values, i.e., follow the uniform distribution on (0,1), and that they are independent, we would expect the election p-values to follow an absolutely continuous distribution with density given by:
f(x) = (-2log x)^n-1/2x(n-1)!
for x ∈ (0, 1). However, while the first of those assumptions is realistic, the second is decidedly not: electoral bias is a zero-sum phenomenon and if an election is biased in favor of some set of parties, it must necessarily be biased against another set.
But even if we were able to easily model the expected scale of random electoral bias, it would still be impossible to determine whether deviation from it is caused by the deficiencies of our method or by actual instances of gerrymandering. Hence, a more appropriate test would be to analyze whether our measure agrees with other methods for detecting gerrymandering used in the literature. Of course, we can do this test only for two-party elections, such as those from the U.S. elections dataset. As many modern methods tend to assume the absence of large-scale malapportionment, we have dropped the pre-1970 observations. We have compared our coefficient with absolute values of four classical indices: Gelman-King partisan bias, efficiency gap, mean-median difference, and declination coefficient. Scores for those methods were obtained from PlanScore <cit.>. Expected association is negative, since for all classical methods high values of the index are indicative of gerrymandering, while for our method values close to 0 indicate electoral bias.
For other datasets, we are at present left with a qualitative analysis of the most biased cases. For the full U.S. dataset D_U these were: several Missouri elections from the 1900s to 1920s (especially those of 1926, 1916, 1906, and 1902) and the 1934 Indiana and New Jersey elections. Missouri and Indiana were at the time highly malapportioned <cit.>, but the case of New Jersey requires further study. Nevertheless, an inspection of the results suggests that something was definitely amiss in all of those elections: while two-party vote shares were very close to 1/2, the discrepancy of seat shares (i.e., the partisan bias) was quite high, e.g., .75 to .25 in Missouri in 1926.
For the non-U.S. national elections dataset the most biased instances include the 1874 U.K. general election (a famous electoral inversion where the Liberal Party decisively lost in terms of seats – 242 to 350 – despite winning a plurality of the popular vote), the 1873 Danish Folketing election (held in highly malapportioned districts), the 1882 Canadian federal election, and the 1841 U.K. general election, while among modern elections the most biased are the 2013 Malaysian election, the 2014 Indian general election, the 2005 and 2009 Japanese elections, and the 1983 U.K. general election (with the non-intentional pro-Labour and anti-SDP-Liberal Alliance bias resulting from geographical patterns documented by <cit.>).
|
http://arxiv.org/abs/2306.07315v1
|
20230612180000
|
Heavy Neutral Leptons from Stopped Muons and Pions
|
[
"Yohei Ema",
"Zhen Liu",
"Kun-Feng Lyu",
"Maxim Pospelov"
] |
hep-ph
|
[
"hep-ph",
"hep-ex"
] |
UMN-TH-4217/23
FTPI-MINN-23-10
Heavy Neutral Leptons
from Stopped Muons and Pions
Yohei Ema,^a,b Zhen Liu,^b Kun-Feng Lyu,^b Maxim Pospelov^a,b
^a
William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,
University of Minnesota, Minneapolis, MN 55455, USA
^b
School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA
Stopped muons, which are generic in pion-at-rest experiments, can shed light on heavy neutral leptons (HNLs)
in unexplored parameter spaces.
If the HNL is lighter than the muon, the HNL can be produced from decays of muons and pions.
The HNL will travel from the production location and decay into visible Standard Model (SM) modes,
leaving signals inside downstream detectors.
We find that, in the case where the HNL dominantly mixes with muon neutrinos, the LSND constraint on the mixing angle squared is stronger than all previous constraints
by more than an order of magnitude. This constraint is obtained by recasting the LSND measurement of ν-e scattering.
Future experiments such as PIP2-BD could further improve the sensitivity, provided they can distinguish the HNL events from backgrounds induced by the SM neutrinos.
§ INTRODUCTION
The discovery of neutrino oscillation stands out as clear evidence of physics beyond the
Standard Model (SM) <cit.>.
Neutrinos are exactly massless within the SM, and hence non-zero neutrino masses require an extension of the SM in the neutrino sector.
Among possible extensions, the introduction of heavy neutral leptons (HNLs) is particularly interesting due to its minimality
(see e.g., Refs <cit.> for representative reviews of the HNL).
The HNL can couple to the SM lepton and Higgs doublets at the renormalizable level.
After the electroweak symmetry breaking, the HNL interacts with the SM particles through mixing with the SM neutrinos.
The phenomenology of the HNL can be diverse, depending on the mass of the HNL.
If the HNL is light, m_N ≲ 1 eV where m_N is the HNL mass,
the HNL can participate in the neutrino oscillation, which may be relevant for, e.g.,
the anomalies observed at LSND <cit.> and MiniBooNE <cit.>.
Even if the HNL is heavy enough to be irrelevant for the neutrino oscillation,
it can still leave signals in different types of experiments.
In particular, if the HNL is lighter than the muon,
the HNL can be produced from the decay of muons and pions.
Once produced, the HNL can travel downstream and decay into an e^+e^- pair and a neutrino,
leaving signals inside downstream detectors, as long as m_N > 2m_e with m_e being the electron mass.
Typically, one searches for HNLs in high-energy beam-dump experiments as well as at colliders. In the regime of HNL masses around 10-100 MeV, such searches are dominated by HNLs from pion and kaon decays. In this paper, we point out that stopped muons, which are common in pion-at-rest experiments, can provide the leading production channel and enable us to probe new regions of HNL parameter space.
In this paper, we focus on neutrino experiments with stopped muons and stopped pions as the source of the HNL, such as
LSND and PIP2-BD <cit.>.
In the past, the LSND experiment has been used to set constraints on dark photons, dark scalars, and light dark matter (see, e.g., <cit.>).
In the parameter region of our interest, the HNL decay length is far longer than the typical size of the laboratory.
The small velocity of the HNL from muon and pion decay-at-rest, in comparison to decay-in-flight, therefore
enhances the probability of the HNL decaying inside the detectors, strengthening the sensitivity.
Together with the large accumulation of proton-on-target (POT),
we find that the LSND experiment provides an upper limit on the mixing angle
between the muon neutrino and the HNL which is stronger by more than an order of magnitude than
the previous constraints from T2K <cit.> and
μBooNE <cit.>.
Interestingly, our LSND constraint still does not utilize all the properties of the HNL decay, as LSND did not perform a search of the e^+ e^- events and thus the HNL events are compared with the single e events originating from the SM neutrinos.
Future experiments such as PIP2-BD potentially improve the sensitivity significantly, provided that they can distinguish
the HNL-induced e^+ e^- events from backgrounds induced by, e.g., the SM neutrinos. While this part of the parameter space is disfavored by big bang nucleosynthesis (BBN) in the minimal model of HNLs, a trivial model modification, such as dark decay channels can be introduced to render this model safe from the cosmological constraints.
The rest of this paper is organized as follows. In Sec. <ref>, we summarize the production and decay rates
of the HNL. This section provides basic formulas that we use to derive the sensitivity in the subsequent section.
We consider both the muon mixing and electron mixing cases, with an emphasis on the former case.
In Sec. <ref>, we derive the current constraint on the mixing angle from LSND
and the future sensitivity of PIP2-BD. We summarize our results in Sec. <ref>.
§ PRELIMINARY
In this section, we summarize the basic ingredients necessary for our analysis.
For definiteness, we consider one (pseudo-)Dirac HNL which dominantly mixes
with either muon or electron neutrinos.
The relevant part of the Lagrangian is given by
ℒ = N̅(i∂ -m_N)N
- g/√(2)U_lN^*l̅W^- N
- g/(2cosθ_W) U_lN^*ν̅_l Z N
+ (h.c.),
where N is the HNL field with its mass m_N, g is the SU(2) gauge coupling, θ_W is the weak mixing angle,
and U_lN is the mixing angle between the HNL and the SM neutrino.
We mainly focus on the case m_N < m_μ, where m_μ is the muon mass, so that the HNL can be produced by the decay of (stopped) muons and pions.
We consider both the muon mixing case l = μ and the electron mixing case l = e,
with a particular interest in the former case.
Throughout this paper, we ignore the electron and neutrino masses as the energy scale of our interest
is above ∼ 10 MeV.
Throughout this paper, we assume a (pseudo-)Dirac mass m_N. While, in principle, both Dirac and Majorana masses are possible, the (pseudo-)Dirac mass is easier to reconcile with neutrino phenomenology. In the inverse seesaw models, the HNLs are dominated by their Dirac mass and receive a small correction from their Majorana mass terms (hence the name pseudo-Dirac). A single generation with a Majorana mass would imply |U_l N|^2m_N contributions to the active neutrino masses, which are above the neutrino mass limits for the interesting values of these parameters, implying that the Majorana mass m_N would require an additional finely tuned contribution to m_ν. For general multi-generation HNLs with all mixings with different neutrinos turned on, an inverse seesaw-like mass spectrum is generically found to be consistent with the SM neutrino mass considerations; see, e.g., the discussion in the review <cit.>.
§.§ HNL production rate
We first discuss the HNL production rates from the decay of muons and pions.
The muon mixing case and electron mixing case are studied separately.
§.§.§ Muon mixing case
If m_N < m_μ,
the HNL is predominantly produced from the decay of muons
at neutrino experiments with stopped muons such as LSND.
The amplitude is diagrammatically given by
iℳ(μ→ e ν_e N): [Feynman diagram: the incoming muon converts into the outgoing N at the W-emission vertex, and the W produces the e ν_e pair.]
The decay rate is expressed as
Γ(μ→ eν_e N) =
∫_m_N^(m_μ^2+m_N^2)/2m_μdE_N dΓ(μ→ eν_e N)/dE_N,
with the differential decay rate given by
dΓ(μ→ eν_e N)/dE_N =
G_F^2 U_μ N^2/12π^3 (3E_N(m_μ^2 + m_N^2) - 4m_μ E_N^2 - 2m_μ m_N^2 )
√(E_N^2 - m_N^2),
where G_F is the Fermi constant.
If we set m_N = 0, this formula correctly reproduces the well-known muon decay rate
up to the factor | U_μ N|^2.
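As a numerical cross-check (ours; the constants are rounded), the branching ratio of μ→ eν_e N can be obtained by integrating the formula above and normalizing to the standard muon width:

import numpy as np

G_F = 1.166e-5      # Fermi constant in GeV^-2 (rounded)
M_MU = 0.1057       # muon mass in GeV (rounded)

def d_gamma_mu_to_N(E_N, m_N, U2):
    # dGamma(mu -> e nu_e N)/dE_N in the muon-mixing case, U2 = |U_{mu N}|^2.
    kin = 3 * E_N * (M_MU**2 + m_N**2) - 4 * M_MU * E_N**2 - 2 * M_MU * m_N**2
    return G_F**2 * U2 / (12 * np.pi**3) * kin * np.sqrt(np.maximum(E_N**2 - m_N**2, 0.0))

def br_mu_to_N(m_N, U2=1e-6, n=4000):
    # Trapezoidal integration over m_N < E_N < (m_mu^2 + m_N^2)/(2 m_mu),
    # normalized to the standard muon width G_F^2 m_mu^5 / (192 pi^3).
    E = np.linspace(m_N, (M_MU**2 + m_N**2) / (2 * M_MU), n)
    f = d_gamma_mu_to_N(E, m_N, U2)
    gamma = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))
    return gamma / (G_F**2 * M_MU**5 / (192 * np.pi**3))

print(br_mu_to_N(0.0))    # ~1e-6 = |U|^2, recovering the m_N = 0 limit
print(br_mu_to_N(0.050))  # phase-space suppressed for m_N = 50 MeV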
Although subdominant, if m_N < m_π - m_μ with m_π the pion mass,
the HNL can also be produced from the decay of pions,
whose diagram is given by
iℳ(π→μ N): [Feynman diagram: the pion (dashed line) annihilates at a single vertex into the μ N pair.]
In this case, the final-state HNL is monochromatic, and the differential decay rate is given by
dΓ(π→μ N)/dE_N =
G_F^2 f_π^2 V_ud^2 U_μ N^2/8π m_π^3((m_μ^2 + m_N^2) m_π^2 - (m_μ^2 + m_N^2)^2 + 4m_μ^2 m_N^2)
×√(m_π^4 - 2(m_μ^2 + m_N^2)m_π^2 + (m_μ^2 - m_N^2)^2)×δ(E_N - m_π^2 + m_N^2 - m_μ^2/2m_π),
where V_ud is the ud-component of the CKM matrix and f_π is the pion decay constant.
§.§.§ Electron mixing case
As in the muon mixing case, the HNL can be produced from the muon decay if m_N < m_μ,
and the relevant amplitude is given diagrammatically as
iℳ(μ→ e ν_μ N): [Feynman diagram: the muon converts into ν_μ at the W-emission vertex, and the W produces the e N pair.]
The decay rate is given by
Γ(μ→ e ν_μ N) = ∫_m_N^(m_μ^2 + m_N^2)/2m_μ
dE_NdΓ(μ→ e ν_μ N)/dE_N,
with the differential rate given by
dΓ(μ→ e ν_μ N)/dE_N =
G_F^2 U_eN^2/2π^3E_N(m_μ^2 + m_N^2 - 2m_μ E_N)√(E_N^2 - m_N^2).
This formula again correctly reproduces the well-known muon decay rate in the limit m_N = 0.
In the electron mixing case, the pion produces the HNL if m_N < m_π.
The contribution of the pion decay is important in the relatively high-m_N region.
This is in contrast to the muon mixing case, where the decay is kinematically allowed only for m_N < m_π - m_μ.
The decay rate of pion is obtained simply by replacing μ→ e in the muon mixing case as
dΓ(π→ eN)/dE_N =
G_F^2 f_π^2 V_ud^2 U_eN^2 m_N^2/8π m_π^3(m_π^2 - m_N^2)^2
×δ(E_N - m_π^2 + m_N^2/2m_π).
Note that the chirality flip is supplied by the HNL and this rate is not suppressed by the electron mass.
§.§ HNL decay rate
We next discuss the decay rate of the HNL.
We first note that, in the case of our interest, the total decay rate of the HNL Γ_N
into the SM particles is estimated as
Γ_N ∼Γ_μ(m_N/m_μ)^5 U_l N^2,
and thus the lifetime is estimated as
c τ_N ∼ 10^8 m×(m_μ/m_N)^5 (10^-6/|U_l N|^2).
Thus N is sufficiently long-lived on laboratory scales, and only a small fraction of the HNLs decay
within the laboratory,
even after accounting for the velocity of the HNL (smaller than the speed of light c).
Therefore we focus on the partial decay rate Γ(N → e^+ e^- ν)
as the other decay modes with only neutrinos in the final state are simply invisible.
§.§.§ Muon mixing case
In the muon mixing case, the HNL decays via the neutral current.
The relevant diagram is given by
iℳ(N → e^+ e^- ν_μ): [Feynman diagram: the incoming N (momentum p_N) converts into ν_μ (p_ν) at the Z vertex, and the Z produces the e^- (p_2) e^+ (p_1) pair.]
The spin-averaged matrix element squared is computed as
|ℳ(N → e^+ e^- ν_μ)|^2 = 16 G_F^2 |U_μ N|^2
[4 sin^4θ_W (p_2 · p_N) (p_1 · p_ν)
+ (1-2sin^2θ_W)^2 (p_2 · p_ν) (p_1 · p_N)
],
and the decay rate is given by
Γ(N → e^+ e^- ν_μ)
=
1/(2π)^51/64 m_N^3∫_0^m_N^2 dm_2ν^2
∫_0^m_N^2 - m_2ν^2 dm_1ν^2
∫_0^2πdψ∫_-1^1dcosθ∫_0^2πdϕℳ^2.
Here we have defined
m_1ν^2 = (p_1 + p_ν)^2 = m_N^2 - 2 m_N E_-^N,
m_2ν^2 = (p_2 + p_ν)^2 = m_N^2 - 2 m_N E_+^N,
where E_±^N is the energy of e^± in the N-rest frame,
and (ψ, θ, ϕ) are the Euler angles describing the orientation of the final-state particles
(which lie in a single plane) relative to the initial N.
Note that the angle between p⃗_1 and p⃗_2 in the N-rest frame is fixed once
we fix E_±^N due to the energy-momentum conservation as
p⃗_1 ·p⃗_2/|p⃗_1||p⃗_2|
= 1 - m_N/E_+^N E_-^N(E_+^N + E_-^N - m_N/2).
Later we use this expression for the Monte Carlo (MC) simulation of the cut efficiency at the LSND experiment.
The total decay rate is easily obtained from this expression as
Γ(N → e^+ e^- ν_μ) = G_F^2 m_N^5/768π^3U_μ N^2
(1-4sin^2θ_W + 8sin^4θ_W).
This correctly reproduces the parameter dependence of Eq. (<ref>).
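A quick numerical evaluation of the corresponding partial decay length cτ_N→ e^+e^-ν (ours; constants rounded, muon-mixing case only):

import numpy as np

G_F = 1.166e-5        # GeV^-2
HBARC = 1.973e-16     # hbar*c in GeV*m
SIN2_W = 0.231

def ctau_N_to_ee_nu(m_N_GeV, U2):
    # c*tau associated with the partial width N -> e+ e- nu_mu alone, in meters.
    gamma = (G_F**2 * m_N_GeV**5 / (768 * np.pi**3) * U2
             * (1 - 4 * SIN2_W + 8 * SIN2_W**2))
    return HBARC / gamma

print(f"{ctau_N_to_ee_nu(0.050, 1e-6):.2e} m")   # m_N = 50 MeV, |U|^2 = 1e-6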
§.§.§ Electron mixing case
In the electron mixing case, the HNL can decay into e^+e^- ν_e both through the neutral and charged currents.
The amplitude is given by
iℳ(N → e^+ e^- ν_e): [Feynman diagrams: a neutral-current diagram in which N (p_N) converts into ν_e (p_ν) at the Z vertex with the Z producing the e^- (p_2) e^+ (p_1) pair, plus a charged-current diagram in which N converts into e^- (p_2) at the W vertex with the W producing the ν_e (p_ν) e^+ (p_1) pair.]
We note in passing the relative minus sign between the two diagrams, which arises
from the exchange of the positions of ν_e and e^-.
The spin-averaged amplitude squared is computed as
|ℳ(N → e^+ e^- ν_e)|^2 = g^4/(2m_W^4) |U_e N|^2
[4 sin^4θ_W (p_2 · p_N) (p_1 · p_ν)
+ (1+2sin^2θ_W)^2 (p_2 · p_ν) (p_1 · p_N)
].
The rest of the computation proceeds in the same way as the muon mixing case.
The decay rate is given by
Γ(N → e^+ e^- ν_e)
=
1/(2π)^51/64 m_N^3∫_0^m_N^2 dm_2ν^2
∫_0^m_N^2 - m_2ν^2 dm_1ν^2
∫_0^2πdψ∫_-1^1dcosθ∫_0^2πdϕℳ^2,
and after performing all the integrations, it is given by
Γ(N → e^+ e^- ν_e)
= G_F^2 m_N^5/768π^3U_eN^2(1 + 4sin^2θ_W + 8sin^4θ_W).
In the electron mixing case, if m_μ < m_N < m_π,
the HNL produced from the pion decay can also decay as N →μ e ν_e.
However, as we will see, the LSND constraint for this mass range in the electron mixing case
is anyway weaker than the other constraints,
and thus we may ignore this decay channel in the rest of the analysis.
§ NEUTRINO EXPERIMENTS WITH STOPPED MUON
Armed with the basic formulas, we now derive the current constraint and future sensitivity
of neutrino experiments with stopped muons on the HNL mixing parameter U_lN.
Our signal is e^+e^- produced from the HNL decay.
The total event number of such a decay inside a detector is estimated as
N^(i)_ee = N_i ×ϵ_det×1/Γ_i∫ dE_N
dΓ(i → f N )/dE_N×L_det/γβ cτ_N→ eeν,
where i = μ if the HNL is produced from the stopped muon and i = π if produced from the stopped pion,
and f represents SM particles depending on the decay channel of i.
In this expression,
N_i is the total number of i that decays at the HNL production point, ϵ_det is the probability
that a single HNL produced from i passes through the detector,
and Γ_i is the total decay width of i.
The relativistic factor γβ = (E_N^2 - m_N^2)^1/2/m_N with E_N
being the energy of the HNL, L_det denotes the length of the detector,
and τ_N → eeν is the decay lifetime of the HNL to a pair of e^+e^- at rest. The signal rate has no N total width Γ_N dependence so long as the long lifetime limit is valid, as 1/cτ_N× Br(N→ eeν)=1/cτ_N→ eeν.
Note that γβ < m_μ/m_N for the HNL produced from the stopped muon.
This enhances the sensitivity to the HNL compared to neutrino experiments with pions and muons decay-in-flight,
such as the MiniBooNE experiment <cit.>.
Of course, experiments impose cuts on events to eliminate backgrounds, and hence we
compute the efficiency ϵ^(i) that the e^+e^- event from the HNL decay passes through the cuts.
For this purpose, we perform a Monte Carlo simulation to generate the distribution of the energy E_±
and the angle cosθ_± of the e^± in the laboratory frame
(with the angle measured with respect to the direction from the production point to the center of the detector).
More specifically, we generate the HNL energy, E_N, from the differential decay rates
of μ and π computed in Sec. <ref>,
and the energy of e^± in the N-rest frame E_±^N, as well as the Euler angle (ψ, θ, ϕ)
from Eqs. (<ref>) and (<ref>).
We then perform the Lorentz transformation to obtain E_± and cosθ_± in the laboratory frame.
The HNL event number after imposing the cut is given by
N_ee = N_ee^(μ)×ϵ^(μ) + N_ee^(π)×ϵ^(π).
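A minimal sketch of the boost step (ours), assuming massless e^± and that the HNL momentum points along the production-point-to-detector axis:

import numpy as np

def boost_to_lab(E_rest, cos_rest, E_N, m_N):
    # Boost a massless decay product from the N rest frame to the lab frame;
    # cos_rest is measured with respect to the N flight direction.
    gamma = E_N / m_N
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    E_lab = gamma * E_rest * (1.0 + beta * cos_rest)
    p_par = gamma * E_rest * (cos_rest + beta)       # momentum along the axis
    p_perp = E_rest * np.sqrt(1.0 - cos_rest**2)     # unchanged by the boost
    return E_lab, p_par / np.hypot(p_par, p_perp)    # lab energy and cos(theta)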
In the following, we study the current constraint on U_lN from the LSND experiment <cit.>,
and the future sensitivity of the PIP2-BD experiment <cit.>, respectively.
§.§ New constraint: LSND
Here we derive the current constraint on the mixing parameter U_lN from the LSND experiment.
The Liquid Scintillator Neutrino Detector (LSND) experiment <cit.> was located
at the Los Alamos Neutron Science Center (LANSCE).
An 800 MeV proton beam is delivered onto the water target (and other materials such as tungsten),
producing a large flux of pions dumped at the copper beam stop.
Most of π^- are absorbed by the materials and hence the abundance of μ^- is suppressed.
The main source of the neutrino flux is the decay of stopped π^+ and μ^+.
The detector is located ∼ 30 m downstream and is well-shielded.
In the case of our interest, the HNL can be produced from the decay of stopped μ^+ and π^+.
Once produced, a small portion of the HNL travels downstream and decays inside the detector,
providing e^+e^- signatures.
The LSND experiment did not perform any search for e^+e^- events.
Therefore, we may use the single e event search in <cit.> to put a constraint on the HNL.
The number of ν_e that passes through the detector is ≃ 3× 10^19 in total <cit.>.
Since ν_e is dominantly produced from stopped μ^+,
which is dominantly produced from stopped π^+, we can set
N_μ×ϵ_det = N_π×ϵ_det = 3× 10^19.
Therefore we obtain the total number of the HNL decay events inside the detector originating from μ^+ as
N_ee^(μ)≃ 1.5× 10^5(U_μ N^2/10^-6)^2
(m_N/m_μ)^6 (1-m_N/m_μ)^4
(1 + 4m_N/m_μ + m_N^2/m_μ^2)
:μ`-mixing,
8.1× 10^5(U_eN^2/10^-6)^2
(m_N/m_μ)^6 (1-m_N/m_μ)^4
(1 + 4m_N/m_μ + m_N^2/m_μ^2) :e`-mixing,
and that originating from π^+ as
N_ee^(π)≃
8.7× 10^4(U_μ N^2/10^-6)^2 (m_N/m_μ)^6
m_π(
(m_μ^2 + m_N^2)m_π^2 - (m_μ^2 + m_N^2)^2 + 4m_μ^2m_N^2
)/m_μ(m_π^2-m_μ^2)^2 :μ`-mixing,
4.1× 10^5(U_eN^2/10^-6)^2
(m_N/m_μ)^7
m_N m_π(m_π^2 - m_N^2)/(m_π^2 - m_μ^2)^2 :e`-mixing,
where we take L_det = 7.6 m following the fiducial cut specified in <cit.>.
The total number of the HNL decay events is thus sizable, and we can indeed put a stronger
constraint on the mixing parameter than the other experiments for m_N < m_μ,
especially in the muon mixing case, as we will see.
Our signal mimics the single e event in <cit.> if either
(1) one e^± has small energy so that it is not detected, while the other e^∓ has enough energy for the detection,
or (2) e^+ and e^- are collinear enough so that they cannot be separated within the detector resolution.
Note that e^+ and e^- are produced at the same time in our case
so that it is excluded by neither past nor future activity cuts imposed in <cit.>.
We thus impose two alternative cuts on our events as
cut 1: 18 MeV < E_± < 50 MeV, E_∓ < E_th,
cosθ_± > 0.9, cosθ_∓ < 0.9,
cut 2: 18 MeV < E_+ + E_- < 50 MeV, cosθ_± > 0.9.
Cut 1 captures e^+e^- events that mimic single-e events because one of the two leptons is soft, while cut 2 captures events in which the pair is collinear.
The events passing either cut 1 or cut 2 will show up
in the ν-e scattering data sample of the LSND.
Here the cut on the angle originates from the requirement that the observed event
has the direction consistent with the neutrino from the production point,
and E_th is the threshold energy of the e^± detection.
We take E_th = 15 MeV as there are huge backgrounds of gamma rays
in this energy range from, e.g., Carbon excited states and thus
the analysis is expected to be insensitive to this region.
We also show the result with E_th = 10 MeV as an even more conservative choice
of the threshold energy.
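Given a Monte Carlo sample of lab-frame kinematics, the selection itself is a simple boolean combination of the two cuts; a sketch (ours), with energies in MeV:

import numpy as np

def passes_single_e_like_cuts(E_p, E_m, cos_p, cos_m, E_th=15.0):
    # E_p, cos_p: positron energy and cosine w.r.t. the source-detector axis;
    # E_m, cos_m: the same for the electron. Returns a boolean array.
    def cut1(E_a, cos_a, E_b, cos_b):
        # one hard, forward track with a soft, undetected partner
        return (E_a > 18.0) & (E_a < 50.0) & (cos_a > 0.9) & (E_b < E_th) & (cos_b < 0.9)

    cut_1 = cut1(E_p, cos_p, E_m, cos_m) | cut1(E_m, cos_m, E_p, cos_p)
    cut_2 = (E_p + E_m > 18.0) & (E_p + E_m < 50.0) & (cos_p > 0.9) & (cos_m > 0.9)
    return cut_1 | cut_2

# The efficiency epsilon is then simply passes_single_e_like_cuts(...).mean().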
In Fig. <ref>, we show the efficiency of the muon decay channel
ϵ^(μ) computed by our MC simulation.
As one can see, cut 1 dominates for m_N ≳ 20 MeV, while cut 2 is more important in
the lower mass range, as the lighter HNL is more boosted in the forward direction.
To derive the constraint, we may follow the procedure used in <cit.>
to put an upper bound on the neutrino magnetic dipole moments.
The LSND experiment observed in total 242 single e events
(including all the ν_e, ν_μ, and ν̅_μ initiated signals) while 229 events are expected within the SM.
With the systematic error included, we impose at 90% C.L.
N_ee = N_ee^(μ)×ϵ^(μ) + N_ee^(π)×ϵ^(π) < 55.
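Since N_ee scales as (|U|^2)^2, this bound can be inverted by a simple rescaling. The snippet below illustrates only that scaling; the reference yield and the efficiency are placeholders standing in for the m_N-dependent values of the full analysis.

# Rescale the predicted yield at a reference mixing to the 90% C.L. bound N_ee < 55.
def u2_limit(n_ref, eff, n_max=55.0, u2_ref=1e-6):
    """n_ref: raw mu + pi yield at |U|^2 = u2_ref; eff: assumed cut efficiency."""
    # N_ee is proportional to (|U|^2)^2, so the limit scales as the square root.
    return u2_ref * (n_max / (n_ref * eff)) ** 0.5

print(u2_limit(n_ref=4.0e2, eff=0.3))  # placeholder numbers for illustration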
In Fig. <ref>, we show the new constraint derived in this study from the LSND (with the future sensitivity
of PIP2-BD, which we describe in the next section) and the previous constraints from PSI <cit.>,
TRIUMF <cit.>, PIENU <cit.>,
T2K <cit.>, and μBooNE <cit.>.
The solid blue line corresponds to E_th = 15 MeV, while the dashed blue line corresponds to E_th = 10 MeV.
As one can see, in the muon mixing case, our constraint is better than the previous ones by
more than one order of magnitude in terms of | U_μ N|^2
for 35 MeV≲ m_N ≲ 70 MeV.
In the electron mixing case, the current constraint from LSND is weaker than but close to the constraint from PIENU.
Here we comment on the JSNS^2 experiment
at J-PARC <cit.>.
JSNS^2 aims at measuring the neutrino oscillation ν̅_μ→ν̅_e with the neutrinos
produced from stopped pions and muons created by a 3 GeV proton beam,
with a lower duty factor than LSND and a Gd-loaded
liquid scintillator detector. Thus, JSNS^2 provides a direct test of
the LSND neutrino oscillation results <cit.>.
In our case, the HNL decay would be indistinguishable from the single e events (without the final state neutron)
as the JSNS^2 detector is not sensitive to the angular separation of e^+ and e^- <cit.>.
The current data set corresponds to 1.45× 10^22 POT, 13% of the designed total POT <cit.>, compared to 1.8× 10^23 POT at LSND with a larger detector volume.
Still, once enough data are accumulated, it would be interesting to study the sensitivity of JSNS^2 to the HNL. In particular, JSNS^2 can be sensitive to neutrinos from kaon decay-at-rest due to its higher proton beam energy, which may allow us to go beyond the muon and pion mass thresholds.
Before closing this subsection, we briefly comment on cosmological constraints on the mixing parameter.
Indeed, in the parameter region of our interest, the HNL decay rate into the SM particles is small so that
it may decay after BBN, disturbing the consistency between theory and observation <cit.>.
However, this can be avoided if, e.g., the HNL decays into invisible particles before BBN.
As long as the HNL decay length is longer than ∼𝒪(10 m), this additional decay channel
does not affect our analysis.
In this sense, our constraint is independent of the one from BBN.
§.§ Future sensitivity: PIP2-BD
In this subsection, we derive the future sensitivity of
the PIP2-BD experiment <cit.>.
The Proton Improvement Project II (PIP-II) is a major upgrade of the accelerator complex at Fermilab
to meet the requirement of hosting the Deep Underground Neutrino Experiment (DUNE) <cit.>.
On top of its main purpose, PIP-II has the flexibility of supporting multiple experiments,
and a beam dump facility, PIP2-BD, was proposed
as a candidate experiment to study sub-GeV dark sectors.
In the case of our interest, PIP2-BD can probe the HNL in the same way as LSND.
The HNL is produced from stopped pions and muons at the target.
We may assume five years of physics run with the baseline PAR option <cit.>,
which results in an 800 MeV proton beam with 1.2× 10^23 POT.
We take the formation rate of stopped π^+ (and hence μ^+) per proton to be 0.1,
as PIP2-BD is designed to use a lighter target that has a larger formation rate than Hg.
For the mercury target, for instance, the COHERENT experiment reported a formation rate of (9.0± 0.9)× 10^-2
with slightly higher proton beam energy <cit.>.
This fixes the parameters in Eq. (<ref>) as N_π = N_μ = 1.2× 10^22.
We assume that the active volume of the detector is cylindrical in shape with 4.5 m in height and 4.5 m in diameter,
located 18 m away from the HNL production point.
We then estimate from the solid angle coverage that ϵ_det≃ 3.9× 10^-3.
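The quoted acceptance follows from elementary solid-angle arithmetic if one assumes (our simplifying assumption) that the cylinder presents its circular face of radius r = 2.25 m to the production point at distance d = 18 m, so that ϵ_det ≈ πr^2/(4πd^2) = r^2/(4d^2):

r, d = 2.25, 18.0          # detector radius and distance in metres
print(r**2 / (4 * d**2))   # ≈ 3.9e-3, matching the value quoted above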
The total event number of the HNL decay originating from μ^+ is thus given as
N_ee^(μ) ≃
1.4× 10^5 (|U_μN|^2/10^-6)^2 (m_N/m_μ)^6 (1 - m_N/m_μ)^4 (1 + 4 m_N/m_μ + m_N^2/m_μ^2)   : μ-mixing,
7.5× 10^5 (|U_eN|^2/10^-6)^2 (m_N/m_μ)^6 (1 - m_N/m_μ)^4 (1 + 4 m_N/m_μ + m_N^2/m_μ^2)   : e-mixing,
and that originating from π^+ as
N_ee^(π) ≃
8.0× 10^4 (|U_μN|^2/10^-6)^2 (m_N/m_μ)^6 m_π[(m_μ^2 + m_N^2) m_π^2 - (m_μ^2 + m_N^2)^2 + 4 m_μ^2 m_N^2] / [m_μ (m_π^2 - m_μ^2)^2]   : μ-mixing,
3.8× 10^5 (|U_eN|^2/10^-6)^2 (m_N/m_μ)^7 m_N m_π (m_π^2 - m_N^2) / (m_π^2 - m_μ^2)^2   : e-mixing,
where we take L_det = 4.5 m.
Without any dedicated study on possible backgrounds at this moment,
we may simply draw the lines that correspond to 3 and 50 events of the HNL decay inside the detector,
assuming 75 % of event acceptance independent of m_N following <cit.>.
In Fig. <ref>, we show the future sensitivity of PIP2-BD, together with the expected sensitivities of
DUNE <cit.> and PIONEER <cit.>.
The solid orange line corresponds to 3 events while the dashed orange line corresponds to 50 events.
It is clear from the figure that PIP2-BD has the potential of exploring the parameter region
that is not yet and will not be covered by the other experiments, especially in the muon mixing case.
Here the key difference from LSND is the event acceptance, thus it would be essential to distinguish the e^+ e^- events
from backgrounds, in particular, the single e events from the ordinary neutrinos, to realize this level of sensitivity.
§ SUMMARY
In this paper, we studied the sensitivity of the neutrino experiments with stopped muons and pions
as the source, such as LSND and PIP2-BD, on the HNL.
With the help of the small velocity of the produced HNL as well as a large number of POT,
we find that the LSND constraint on the mixing angle squared is stronger than the previous constraints by
more than one order of magnitude below the muon mass, if the HNL dominantly mixes with muon neutrinos.
Since LSND did not perform a dedicated search for e^+ e^- events, the HNL decay events are compared
with the single-e events induced by SM neutrinos in our analysis.
Therefore, if future experiments such as PIP2-BD can distinguish the e^+ e^- events from backgrounds
induced by the SM neutrinos, the sensitivity will be significantly improved.
Although we have focused on PIP2-BD as a future experiment,
there are other near-future experiments that are expected to be sensitive to the HNL.
For instance, the Coherent CAPTAIN-Mills (CCM) experiment aims at probing sterile neutrinos
and sub-GeV dark matters produced at the proton beam dump at LANSCE <cit.>.
Even though the planned total POT and the detector volume are smaller than those of LSND,
CCM may potentially explore a new parameter region of the HNL,
given that the e^± are efficiently resolved by, e.g., the Cherenkov light with their liquid Ar detector.
The COHERENT experiment <cit.> is another possibility
once the detector volume is upgraded to be large enough.
The stopped muon and pion sources have been shown to be sensitive probes of sub-100-MeV physics for dark photons, dark scalars, light dark matter, and now for the HNLs. Their sensitivity to axion-like particles (ALPs) could also be interesting. Our initial estimations show that the LSND detector is capable of setting competitive bounds on ALP-photon couplings, which also means that the sensitivity of PIP2-BD to ALPs must also be investigated.
Acknowledgements
We would like to thank Drs. M. Hostert, P. Huber, W. Louis, T. Maruyama, and R. Tayloe for helpful discussions.
This work is supported in part by the DOE grant DE-SC0011842. Z.L. and K.L. were supported in part by DOE grant DE-SC0022345.
The Feynman diagrams in this paper are generated by <cit.>.
The data associated with the figures in this paper can be accessed via GitHub: https://github.com/ZhenLiuPhys/HNLwStoppedM.
|
http://arxiv.org/abs/2306.08426v1
|
20230614104042
|
Patterns of Patterns II
|
[
"Joseph Corneli",
"Noorah Alhasan",
"Leo Vivier",
"Alex Murphy",
"Raymond S. Puzio",
"Abby Tabor",
"Charles J. Danoff",
"Mary Tedeschi",
"Manvinder Singh"
] |
cs.SI
|
[
"cs.SI",
"D.2.10; I.6.0"
] |
Corresponding author, [email protected].
[email protected]
1234-5678-9012
Oxford Brookes University
Gipsy Lane
Oxford
UK
OX3 0BP
[email protected]
Hyperreal Enterprises Ltd
81 St Clement’s St
Oxford
UK
OX4 1AW
University of the West of England
Faculty of Health and Applied Sciences (HAS), Frenchay Campus, Coldharbour Lane
Bristol
England
UK
BS16 1QY
[email protected]
Mr Danoff’s Teaching Laboratory
PO Box 802738
Chicago
IL
USA
60680
[email protected]
Baruch College
PO Box 802738
New York
NY
USA
60680
[email protected]
We review how our earlier theorization of pattern methods fares in the
wild. The “wild” here included a graduate school classroom in New
York, a workshop at a transdisciplinary conference in Arizona, a
nascent citizen science project in Bristol, and a professional
development day for a university in Oxford. We encountered unexpected
challenges such as working with students in a HyFlex classroom,
getting conference attendees to feel comfortable evaluating the
conference they were presently attending, and adapting our plans on
the fly when leading workshops with surprising attendee responses. We
describe and refine patterns specifications that will help other
practitioners of patterns in their own forays into the wild.
[500]Social and professional topics
[300]Software and its engineering Designing software
[300]Software and its engineering Open source model
[300]Applied computing Operations research
[100]Computing methodologies Modeling and simulation
Patterns of Patterns II: Discourse on Implementation
Manvinder Singh
====================================================
§ INTRODUCTION
The previous instalment in this series presented a high-level
methodological synthesis of three techniques from design, futures
studies, and elite training
in the form of a high-level design pattern
called PLACARD <cit.>. We saw this high-level
pattern as really getting to the heart of what design patterns are.
To back up this claim, we presented a theoretical analysis, and a case
study. During the two years which have elapsed since then, we have
had opportunities to deploy and further develop these methods in
various contexts. We will describe some of these applications in the
four case studies below. We have distilled this experience into a
collection of practical patterns which augment the earlier high-level
pattern. This fully-fledged collection of patterns of patterns can
help you organise your work with Design Pattern Language methods.
§ RECAP OF “PATTERNS OF PATTERNS”
We introduced a synthesis of methods that operationalise the
“sensory”, “cognitive” and “motor” systems from psychology in the context of social intelligence. The particular methods we outlined were certainly not the only way to implement these system features. What drew our attention is that each of the methods we selected comes with a framework or template; each of the methods is, essentially, a design pattern.
* Project Action Review (PAR): a set of five review questions to explore at a project checkpoint.
* Causal Layered Analysis (CLA): a set of four “layers” that can
be used to unpack a problem area of interest.
* Design Pattern Languages (DPL): a three-part
template of context, problem, and solution.
We made the further assertion that these sensory, cognitive, and motor
methods can be hooked together, theorising design patterns as little
pieces of moveable social intelligence. We called the specific method
that combines PAR, CLA, and DPL the “PLACARD” pattern.
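Purely as an illustration of how these three templates slot together (our own shorthand, not a formalisation from the original paper), a PLACARD record can be pictured as a small data structure; the field names below are assumptions, and the five PAR questions themselves are not reproduced.

# Minimal sketch of a PLACARD-style record: PAR responses feed a CLA unpacking,
# which in turn yields one or more design patterns in context/problem/solution form.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DesignPattern:            # DPL: three-part template
    context: str
    problem: str
    solution: str

@dataclass
class CausalLayeredAnalysis:    # CLA: the four layers used later in this paper
    litany: str
    system: str
    worldview: str
    myth: str

@dataclass
class PlacardRecord:
    par_responses: List[str] = field(default_factory=list)  # answers to the five PAR questions
    cla: Optional[CausalLayeredAnalysis] = None
    patterns: List[DesignPattern] = field(default_factory=list)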
We applied these methods to analyse the design Pattern Language
literature and practices, and also developed a case study examining
the way the Emacs Research Group used related methods. We built on
these analyses to outline potential futures for the development of
pattern methods.
All of these potential futures have early indicators attached to them,
in the sense of William Gibson: “the future is already here, it’s just
not very evenly distributed”. But, now with reference to Alan Turing, much
remains to be done.
§ METHODOLOGY
In the current paper we will apply a similar reflective methodology,
examining major events in our work that took place since the
publication of “Patterns of Patterns”. We ran three formal workshops
that were inspired by the original set of methods, and we will
describe how the methods evolved further in those settings. We
also used the aforementioned paper as a focal reading over three
sequential years of a postgraduate course, CIS 9590 “Information
Systems Development Project” at Baruch College, part of the City
University of New York. Together, an analysis of these touchpoints
suggests ways in which the methods can continue to evolve. As before,
we will use a case study in Causal Layered Analysis as a method
for describing that evolution.
We used design patterns directly when developing and running the
workshops. A selection of these patterns are included here. Each of
the patterns is given a marker, (S), (C), or (M), to indicate whether it
plays a primarily sensory, cognitive, or motor role. Some of the
patterns have a complex role, such as designing and building a new
instrument ((C)→(M)→(S)). We also include the
itinerary for each of the workshops to help bring the reader into the
scene, and encapsulate post-workshop reflections as further design
patterns.
§ CASE STUDIES
§.§ Case Study 1: “Going Meta” workshop at Anticipation 2022
This workshop functioned as a further pilot of methods that we had
already trialled in earlier settings (at PLoP 2021, at the Oxford Brookes
Creative Industries Festival, and previously in more nascent forms). Our aims
were to explore the methods in a hands-on way, and provide attendees
with a rapid introduction to peeragogy. We also wanted to try out
some new “pattern cards” to organize the workshop. Our pitch to
Anticipation attendees was that this workshop would help to establish
a position of maximum leverage, exercising our “Critical Anticipatory
Capacities” using “Creativity, Innovation and New Media” (two of the
conference’s themes).
§.§.§ Itinerary
[The workshop itinerary was presented as a graphic in the original.]
§.§.§ Selected Patterns for Case Study 1
§.§ GOING META(C)
Context: In the course of working on a project together.
Problem: We may find a gap between our ideals and our methods.
Solution: Try “going meta”, to explore how the project’s methods can be applied to itself.
Example: In a community that usually focuses on anticipating the future for others, try inviting members of the community to anticipate the future of the community.
§.§ DÉRIVE COMIX(S)
Context: You want to develop some future scenarios to explore with a group.
If: You have a group BUT everyone has their own experiences;
Then: Go for a walk or just look out the window wherever you are, and document what you see. Follow up by preparing your materials to share in a succinct fashion, e.g., as photos, a screenshot, slides, sketches, a zine, a map, or some PostIt® notes.
By itself, looking to the immediate surroundings only gives an
imperfect picture of how to develop a future scenario. Direct
observations might include little to no evidence of, say, top-level
government policy which likely is a major factor in the future. Two
further patterns access more levels of meaning.
§.§ MEANING MAP (C)→(M)→(S)
Context: We have collected images describing people’s worlds (see Dérive Comix).
If: You want to distill shared meaning BUT everyone has their own experience;
Then: Talk together about the problems and opportunities that everyone sees. Maybe some of these will cluster together, or maybe everyone will have their own different perspective: that’s OK. You can use these different viewpoints to get everyone on the same page.
§.§ REINFUSE EXPERTISE
Context: A group wants to build a Meaning Map.
If: Everyone has experience as a citizen BUT they also have expertise;
Then: Begin by removing expertise to get everyone on the same page, and subsequently reinfuse expertise to enable richer and more complex thinking.
§.§ PATTERN LANGUAGE COMPONENTS
Context: In a collaborative setting with people who are new to design patterns.
If: New attendees are being invited to create new patterns BUT the context, problem, solution language brings assumptions that they may not be comfortable with;
Then: Introduce more dynamic keywords such as HOWEVER (to describe a gap or conflict), BECAUSE (to describe a set of operating causes), THEREFORE (to describe a rationale based on related data), and SPECIFICALLY (to describe next steps) in order to help people talk about the different parts of the patterns and build them up piece by piece.
Note that in this workshop we tried aligning the Pattern
Language Components with Functional Roles (see next section).
We later decided to separate the two.
Reflecting on the workshop experience, together with the ‘meta’
context provided by the contemporary anticipation community, led us
to come up with the following proto-pattern:
§.§ INCREASE PARTICIPANT CONTROL
Summary: When organising a collaborative activity,
participants should not remain only an audience, or only deliver
scripted lines. Give them increasing responsibility.
§.§ Case Study 2: Public Space for Public Health
This workshop was commissioned by Abby Tabor as part of her project
“Designing urban environments for human health: from the microbiome to
the metropolis”. The aim was to gather attendees with an interest in
the project themes and work together to envision next
steps. Elaborations of these were developed by participants, and were
organised by facilitators using a software tool based on Org Roam and
Org Roam UI.
§.§.§ Itinerary
[The workshop itinerary was presented as a graphic in the original.]
§.§.§ Selected Patterns for Case Study 2
§.§ CONTEXT SETTING
Context: A workshop or other working context has been convened.
If: The facilitators have ideas that they would like to explore with attendees BUT these ideas are not top of mind for attendees;
Then: Do some context-setting, e.g., showing videos, or giving a short talk about why people have been invited, and describe the hoped-for outcomes.
§.§ FUNCTIONAL ROLES♢
Context: When building a new set of design patterns.
If: You have ideas about the components of a pattern BUT the pattern hasn’t been fully formed yet;
Then: Introduce some different perspectives to critique the pattern as it develops.
Specifically: Time Traveller, Wrinkler, and Analyst are roles that we have found useful.
The superscripted “♢” is used to indicate that this pattern
comes with a small embedded pattern language. The following patterns
are described using a more informal template, outlining the kinds of
questions that people taking on the roles might ask, and further
specifying the function served. They are also presented with a
mnemonic symbol based on the chess set. The list is not intended to
be an exhaustive listing.
§.§ TIME TRAVELLER
Question: What has happened in the past, and what could happen in the future?
Role: To provide historical context and anticipate alternate futures.
§.§ WRINKLER
Question: What could go wrong?
Role: Consider what might derail or counter the proposed solution. Each wrinkle can be assigned a level of perturbation (from low to high).
§.§ ANALYST
Question: What are the moving parts?
Role 1: Consider the current challenge and all the components of the potential solution (actors, resources, institutions). Identify and orchestrate the dynamic network of these components.
Role 2: Consider the other challenges specified beyond the current focus. Identify and orchestrate the integration of these components relevant to the present challenge.
§.§ FACILITATOR ROLES♢♢
Context: Developing a collection of interrelated design patterns.
If: You are getting ideas from participants who play Functional Roles BUT the ideas aren’t all connected with each other in a structured way;
Then: Introduce facilitator roles to help structure the collection.
Specifically: Linkers and Reflectors are two roles that we have found useful.
The superscripted “♢♢” means that this pattern introduces a sub-sub-language; see remarks above.
§.§ LINKERS
Question: How do proposed scenarios build into patterns across layers, and how do they interact within the constellation?
Role: Data wrangling as it comes in, providing visualisation of patterns and interconnections.
§.§ REFLECTORS
Question: How is the scenario evolving?
Role: To appraise each scenario, provide a format for reflection (PAR), and make a decision to continue, reset, or end.
Some examples of the patterns that participants created during the workshop by making use of the Pattern Language Components and Functional Roles are presented below in capsule form.
§.§ CONTESTED SPACE
Summary: So-called public space doesn’t always feel welcoming to all members of the public. It can be overrun with antisocial behaviour. It can feel exclusionary, or uninviting. It can be the site of conflict. However, the need for complex uses of public space does not mean that each space needs to support every use equally.
§.§ FUNDING OF PUBLIC SPACE
Summary: Even though public space is known to increase wellness in the population, well-being priorities that would lead to increased funding for public space aren’t universally adopted. In order to make the benefits of such investment clear, increase transparency around investments in public welfare, e.g., create register of impacts of local social enterprises.
§.§ REBALANCE SOCIAL SERVICES
Summary: Welfare-related services should be supplied in balance with local needs, though they often are not. Can varied expertise be integrated in a similar way to the domain-specific skills practised by Médecins Sans Frontières to address complex local challenges?
§.§ Case Study 3: Open Research Futures
This workshop was developed as an “Away Day” for faculty and staff
members at Oxford Brookes University. The aim of the workshop was to
elaborate the institution’s open research strategy relative to its
existing organisational strategy. Methodologically, this workshop
builds on a pre-seeded Org Roam network of interlinked themes and an
additional activity that enlists attendees in taking concrete actions
on the identified next steps. This itinerary reused the language
“experts to citizens”, “citizens to action” from the previous workshop
(with a slight variation suited to the context). The theoretical content of these phases mirrors the Dérive Comix pattern described above.
§.§.§ Itinerary
[The workshop itinerary was presented as a graphic in the original.]
§.§.§ Selected Patterns for Case Study 3
§.§ DO YOUR RESEARCH
Context: Prior to beginning a formal workshop or other participatory research activity.
If: It looks like it will be possible to do participatory research BUT the participants haven’t begun speaking with each other yet;
Then: Start doing the research in a more centralised way before inviting direct collaboration, in order to give participants something to engage with.
Example: In the current setting, this pre-research included 1-to-1 interviews with about half of the invitees, as well as internet research to find and explore related scenarios developed by others.[<https://royalsociety.org/topics-policy/projects/research-culture/changing-expectations/visions-of-2035/visions-of-2035-materials/>]
§.§ THE FUTURE BEGINS NOW
Context: Having developed possible next steps.
If: It appears that leaving without concrete commitments means concrete actions are less likely to take place;
Then: Introduce early actions within the collaborative setting to create commitment.
Example: One way to build commitment would be to ask people to develop and share a method for a small-scale experiment that they plan to carry out.
§.§ Case Study 4: CIS 9590, Information Systems Development Project
§.§.§ Introduction to the course from the instructor, Mary Tedeschi
CIS 9590, Information Technology Project Design and Management, is the “Computer Information Systems” (CIS) capstone project course for the CIS major, wherein students apply concepts and techniques from prior coursework to design, develop, and create an implementable application for a working information system of an actual business. It also focuses on the design and management of systems to meet the increased need for information within an enterprise. The course exposes students to the fundamentals of IT project management required for the successful implementation of IT-based systems, and presents tools and technologies for project definition, work breakdown, estimating, planning and scheduling resources, as well as monitoring and control of project execution. Students utilize knowledge gained from prior coursework and work in groups to design and manage an Information Technology project. During my first semester teaching the course, Spring 2020, students used whatever development tools they were familiar with; I noticed this to be a problem, so I changed the course to require the use of Intel One API. This change did not get implemented until Fall 2021; I actually taught the course three times before the required software tool was uniformly changed. The course was a 3-hour course, first face-to-face, then synchronous online only; in Fall 2021 we changed to 75 minutes in person and online (hybrid). Students had to self-teach Intel One API with the use of tutorials and a buddy system. Those students seemed to have the necessary skills to learn enough of the software to create an implementable application. This semester, Spring 2023, the students really seemed to lack the coding skills.
§.§.§ Our use of “Patterns of Patterns” within the course
We used the paper “Patterns of Patterns” as a focal text with three
successive cohorts of CIS 9590 students. The course syllabus is
focused on developing group projects with a computer programming
component. Our hope was that the topics in the paper would inspire
them with new ideas about design and collaboration.
Each year, students asked many thoughtful questions about the paper; they also produced their own written response to the paper, engaging the original paper in depth; and in the latest run, we offered some in-class
exercises based on the workshop methods described above.
Reading these written responses showed that the students had not only understood the main ideas of our paper, but added to them. In effect, they created alternative imaginaries for the paper’s history and future. For instance, in their 2022 ‘case study’, they generated a “Recommendation and Implementation Plan” which proposed specific actions which a group could take based on our ideas; and, in 2023, the students produced a slide presentation based upon our paper, exploring its relationship to themes such as “emerging technology”.
In the spirit of these student responses, we will now present our reflections on the experience in the form of a Causal Layered Analysis.
§.§.§ Litany
Initially our paper was introduced as a contemporary reading, relevant to the “CIS” theme. Students would not be able to “cheat” in their reports, because the paper wasn’t described extensively on Sparknotes or similar. Along with this (intentional) challenge, CIS 9590 students encountered a range of more or less predictable problems, e.g., many felt a lack of confidence with coding. The students came to the course with a variety of different backgrounds (e.g., Python vs C++) which contributed to some friction with this course.
§.§.§ System
Whereas in our rounds of earlier participation we were more there for enrichment, in the most recent iteration our contributions were more closely integrated into the main activities of the course. We attended more sessions, including one in which we attempted to run a short version of the workshop with attendees. This gave us the opportunity to interact with the students on an ongoing basis as they designed and implemented their projects. Furthermore, Mary attended at least as many meetings of the Peeragogy project, fostering an exchange of ideas and viewpoints between these two contexts. In the short term, this led to productive synergy and, in the long term, could lead to our pedagogical and peeragogical initiatives becoming more integrated into a larger system. This collaboration continued post-semester, insofar as Mary invited the students to express interest in possible internships in the Peeragogy project.
§.§.§ Worldview
Students were thinking about their future careers. What they wanted to get out of the course (e.g., becoming a well-paid data scientist or business leader) at times had some friction with the practical reality of the course requirements, in which they had to deliver a concrete hands-on working project, without being able to rely on employees. PLACARD wouldn’t be of much direct help with the technical challenges they faced, but we hoped it could help them organise their work in a sensible way. More informally, the ideas underlying PLACARD informed our comments; for example, in a session with Mary in which we ‘workshopped’ CIS 9590 with other peeragogues, discussants suggested adding more touchpoints for peer learning and feedback.
§.§.§ Myth
A deep metaphor within the classroom setting is pedagogy. However, the methods that we brought as guests were more closely linked with our experience of peeragogy. In the new shared context, these two perspectives begin to integrate. Mary as a host exercised the value of xenia by bringing us into her course as guests. The possibility of student internships within the Peeragogy project would create the reciprocal opportunity for further student practice with CIS skills in an applied context, helping to build tools and platforms for peer learning and peer production (including through use of pattern methods). Indeed, the particular combination of peer learning and formal education developed here led us to wonder how far off the Peeragogy project might be from being able to support informal learning of relevant programming concepts (preliminaries to CIS 9590) or applied computing projects (an analogue of CIS 9590).
§.§ Proto-patterns describing the experience, by a CIS 9590 student, Manvinder Singh
§.§ ENGAGEMENT AND GUIDANCE
Summary: The authors of ‘Patterns of Patterns’ actively
participated in our class, to share expertise and create a
collaborative learning environment. Their presence allowed us to gain
deeper insights into the paper's concepts and methodologies, leading
to innovative project approaches. By closely studying the patterns of
patterns identified in their research, I gained a fresh perspective on
project organization and established a logical and coherent structure.
§.§ AVOIDING MISTAKES
Summary: The authors' insights helped me navigate common
project development pitfalls. Through their emphasis on effective
documentation, regular testing, and thorough project planning, I was
able to avoid costly errors. Their guidance ensured a consistent
progress trajectory and maintained the professionalism of my final
project.
§.§ SCALING AND ADAPTABILITY
Summary: ‘Patterns of Patterns’ underscored the importance of
scalability and adaptability in project design. By considering future
technologies and incorporating modular elements, I aim to seamlessly adopt new advancements. In particular, I focused on building a flexible framework that could easily accommodate emerging technologies.
§ DISCUSSION
The first workshop mixed Pattern Language Components with Functional Roles, putting participants in the thick of a
pattern-related dialogue. While this led to interesting
conversations, it was more work to extract any patterns. We did find some useful process
patterns this way, such as Increase Participant Control. We employed what we learned in subsequent
runs. Within the second workshop, a more distinct use
of the Pattern Language Components helped the participants come
up with their own patterns.
There is an interesting interplay between content-level patterns like
these, and process-level patterns. For instance, the workshop is akin
to a public space; further development of the associated tools might
make it even more of a public resource — somewhat like Wikipedia, but endorsing the contribution of original research, not forbidding it. Already, the workshop is a context in which to do a kind of rapid, local, open research.
In order for any pattern-informed research to work well, we should be gathering evidence for or against the salience of the patterns that are elaborated. The Octopus platform mentioned in the itinerary for Case Study 3 uses several data types that follow the rough outline of a scientific paper, viz., Research Problem, Rationale/Hypothesis, Method, Results, Analysis, Interpretation, Real World Application, and Peer Review.
The formulation of an Octopus-like platform for recording and reporting on design patterns would probably need to change somewhat — but the Problem, Rationale, Method, and Results components are reasonably familiar for pattern authors.
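To make the analogy concrete, a record for pattern-informed research on such a platform might carry fields mirroring these data types; the sketch below is our own illustration and does not describe the actual Octopus data model or API.

# Hypothetical schema for recording evidence about a design pattern,
# loosely mirroring the Octopus publication types listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatternEvidenceRecord:
    research_problem: str                  # the problem the pattern addresses
    rationale: str                         # why the pattern is expected to work
    method: str                            # how it was applied (workshop, course, ...)
    results: str                           # what was observed
    analysis: Optional[str] = None
    interpretation: Optional[str] = None
    real_world_application: Optional[str] = None
    peer_review: Optional[str] = None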
Table <ref> summarises the patterns that were described in
this paper, pulling them together from across the separate case studies. The table shows the patterns grouped in a way that elaborates
our use of “PAR”, “CLA”, and “DPL” methods (summarised in Section <ref>) with a more rounded
description of the purposes that these methods serve. Further work
would be needed to fully describe the patterns’
application domains, with evidence of the kinds of results that can be expected, and to describe their interconnections as a pattern language.
§ CONCLUSION
We hoped that running these workshops would help us design the next
steps for our platform and process, and this seems to have been
successful. As an immediate outcome, we developed the “PLACARD
workshop” — now retitled “Open Future Design” — across several
successive runs in different organisational contexts in a way that
makes it more robust. This relative success notwithstanding, it is
worth recalling that our initial intention in Patterns of Patterns
was to support distributed collaboration across contexts.
The informal pattern-based review of our evolving work, presented
here, is a good start. Software development could carry this work
further. A not-so-distant future for Org Roam would allow several
facilitators to make notes in near real-time into a shared map, and
with some more fine tuning of the Emacs interface, a similar workflow
could be used directly by workshop attendees, even across different
contexts. Many rich dialogues might ensue — integrating concepts from
fields as disparate as future studies, health sciences, open research,
and information systems.
Org Roam could be augmented with additional “emerging technologies”.
Articulating domain level patterns which outline potential new
behaviours, and gathering evidence that those behaviours do in fact
work as intended is already an ambitious (but logical) ramification of
the pattern method. It is a further step to articulate the learning
apparatus that underpins such mechanisms in a computationally-coherent
way. The Functional Roles provide an early informal
articulation of the process, at the level of paper prototyping. Looking
ahead to further development, there’s no particular reason to use one
data format for representing complex systems related to domains such
as public health and climate action, and use another for representing
the meta-level. Indeed, the meta-level is just another domain. All
such models should include predictions about the causal connections
between actions and measurements, and should incorporate strategic
intelligence to articulate action.
AI methods could be employed alongside hands-on methods to elaborate
and work with these models — to identify analogies between action
arenas, to highlight the ramifications of complex actions, to show
predicted costs and benefits, and as well as to surface new questions.
Sophisticated models will need to incorporate information from across
disciplines, legal frameworks, national entities, local
administrations, social norms, communities, and individuals, as well
as information about leverage and tipping points that allow the effects
of change to reach across level boundaries.
We have begun (as we mean to carry on) by focusing on the development
and articulation of multi-purpose tools for thought.
|
http://arxiv.org/abs/2306.05295v2
|
20230608153909
|
The Star-forming and Ionizing Properties of Dwarf z~6-9 Galaxies in JADES: Insights on Bursty Star Formation and Ionized Bubble Growth
|
[
"Ryan Endsley",
"Daniel P. Stark",
"Lily Whitler",
"Michael W. Topping",
"Benjamin D. Johnson",
"Brant Robertson",
"Sandro Tacchella",
"Stacey Alberts",
"William M. Baker",
"Rachana Bhatawdekar",
"Kristan Boyett",
"Andrew J. Bunker",
"Alex J. Cameron",
"Stefano Carniani",
"Stéphane Charlot",
"Zuyi Chen",
"Jacopo Chevallard",
"Emma Curtis-Lake",
"A. Lola Danhaive",
"Eiichi Egami",
"Daniel J. Eisenstein",
"Kevin Hainline",
"Jakob M. Helton",
"Zhiyuan Ji",
"Tobias J. Looser",
"Roberto Maiolino",
"Erica Nelson",
"Dávid Puskás",
"George Rieke",
"Marcia Rieke",
"Hans-Walter Rix",
"Lester Sandles",
"Aayush Saxena",
"Charlotte Simmonds",
"Renske Smit",
"Fengwu Sun",
"Christina C. Williams",
"Christopher N. A. Willmer",
"Chris Willott",
"Joris Witstok"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
The Star-forming and Ionizing Properties of Dwarf z~6-9 Galaxies in JADES: Insights on Bursty Star Formation and Ionized Bubble Growth
=======================================================================================================================================
Reionization is thought to be driven by faint star-forming galaxies, but characterizing this population in detail has long remained very challenging. Here we utilize deep nine-band NIRCam imaging from JADES to study the star-forming and ionizing properties of 756 z∼6-9 galaxies, including hundreds of very UV-faint objects (M_UV>-18). The faintest (m∼30) galaxies in our sample typically have stellar masses of M_∗∼(1-3)×10^7 M_⊙ and young light-weighted ages (∼50 Myr), though some show strong Balmer breaks implying much older ages (∼500 Myr). We find no evidence for extremely massive galaxies (>3×10^10 M_⊙) in our sample. We infer a strong (factor >2) decline in the typical [OIII]+Hβ EWs towards very faint z∼6-9 galaxies, yet a weak UV luminosity dependence on the Hα EWs at z∼6. We demonstrate that these EW trends can be explained if fainter galaxies have systematically lower metallicities as well as more recently-declining star formation histories relative to the most UV-luminous galaxies in our sample. Our data provide evidence that the brightest galaxies are frequently experiencing a recent strong upturn in SFR. We also discuss how the EW trends may be influenced by a strong correlation between M_UV and Lyman continuum escape fraction. This alternative explanation has dramatically different implications for the contribution of galaxies along the luminosity function to cosmic reionization, highlighting the need for deep spectroscopic follow-up. Finally, we quantify the photometric overdensities around two z>7 strong Lyα emitters in the JADES footprint. One Lyα emitter lies close to a strong photometric overdensity while the other shows no significant nearby overdensity, perhaps implying that not all strong z>7 Lyα emitters reside in large ionized bubbles.
galaxies: high-redshift – galaxies: evolution – dark ages, reionization, first stars
§ INTRODUCTION
The formation and assembly of galaxies within the first billion years of cosmic history directly influenced the large-scale ionization state of the Universe via the process of hydrogen reionization <cit.>.
Recent Lyα forest measurements from a statistical sample of high-redshift quasars indicate that essentially all hydrogen atoms in the intergalactic medium (IGM) had been reionized by z=5.3 <cit.>, approximately 1.1 Gyr after the Big Bang.
Additional quasar and galaxy spectra at z>6, along with measurements of the cosmic microwave background, imply that reionization was about halfway complete ∼400 Myr earlier at z∼7-8 <cit.>.
The primary agents of hydrogen reionization appear most likely to be star-forming galaxies given that constraints on the z≳6 quasar luminosity function imply that active supermassive black holes were very rare at early times <cit.>.
Prior to JWST, over 1000 Lyman-break galaxies at z>6 had been identified from deep Hubble Space Telescope (HST) imaging <cit.> as well as wide-area ground-based imaging <cit.>.
These studies demonstrated that the z∼6-10 galaxy UV continuum luminosity functions had very steep faint-end slopes (α∼ -2; e.g. ), clearly indicating that very UV-faint sources (M_UV > -18) greatly dominated the galaxy number counts during reionization.
The very high relative abundance of the faintest (M_UV > -18) z≳6 galaxies has long motivated efforts to characterize their physical properties.
Deep observations revealed that these systems often exhibit very blue rest-UV continuum slopes (-2.5 ≲β≲ -2 where f_λ∝λ^β; e.g. ) as well as very compact rest-UV morphologies (effective radii r_e≲200 pc; ).
Based on results from local (z∼0.3) samples, such properties hint that very UV-faint reionization-era galaxies might be efficient at leaking ionizing photons into the IGM <cit.>.
However, detailed statistical studies of the star-forming and ionizing properties of this faint, abundant z≳6 population have been precluded by the lack of deep data probing the rest-frame optical portion of their spectral energy distributions (SEDs).
For much of the past 20 years, the only instrument capable of delivering any constraints on the rest-optical SEDs of high-redshift galaxies was the Infrared Array Camera (IRAC) on board the Spitzer Space Telescope.
Even so, IRAC could only reach a 5σ imaging sensitivity of m∼26.5 (M_UV ≲ -20.5 at z∼7) in the deepest pointings (120 hours; ).
While dedicated observations in lensing fields were able to push this sensitivity to intrinsically fainter UV luminosities <cit.>, statistical constraints on the rest-optical SEDs of the faintest reionization-era galaxies remained very limited.
Consequently, in the lead-up to JWST, much attention was concentrated on understanding the physical properties of relatively bright (M_UV ≲ -20) Lyman-break z≳6 galaxies <cit.>, though even these analyses suffered from IRAC's poor angular resolution (FWHM∼2 arcsec) and SED sampling (only two imaging filters at 3–5μm, both with broad bandpasses).
Within the past year alone, data from JWST have greatly advanced our understanding of reionization-era galaxies.
Rest-optical spectra from the Near-Infrared Spectrograph (NIRSpec; ) have generally revealed very strong nebular line emission (e.g. [OIII], Hα, Hβ) coupled with signatures of hard radiation fields ([OIII]/[OII] > 10) and low gas-phase metallicities (≲0.1 Z_⊙) among z>6 galaxies <cit.>.
This is consistent with expectations from findings prior to JWST <cit.>.
Less expected sub-populations of z>5 galaxies are also emerging from early NIRSpec observations, including sources with (sometimes tentative) contributions from active galactic nuclei (AGN; ) as well as objects that are in a very inactive stage of star formation <cit.>.
The Near-Infrared Camera (NIRCam; ) on-board JWST is also proving to be an invaluable tool for studying the demographics of reionization-era galaxies.
This is not only because of NIRCam's dramatic improvement in sensitivity, angular resolution, and SED sampling over IRAC, but also because Lyman-break z≳6 selections with JWST imaging are very efficient at yielding highly complete samples of objects with faint continua.
Many of the early NIRCam studies of z≳6 galaxies focused on data from the Early Release Science (ERS) CEERS <cit.> and GLASS <cit.> surveys, which immediately pushed down to extremely deep Hubble imaging depths (m_AB∼29 mag at 5σ) across 1–5μm with just ∼3 hours of observations per photometric band.
These data (along with those from other early JWST/NIRCam surveys) have delivered a wealth of insight into faint (M_UV ≲ -19) z≳6 galaxies, including their ages, stellar masses, nebular line strengths, UV slopes, dust attenuation strengths, and morphologies <cit.>.
While these studies have clearly advanced our understanding of reionization-era systems far beyond that possible prior to JWST, a detailed statistical analysis of the properties of very faint (M_UV > -18) z≳6 galaxies has yet to be undertaken.
Such an endeavor is clearly warranted as these very faint objects are often thought to contribute substantially to cosmic reionization, and are also the likely progenitors of more typical galaxies at lower redshifts.
Here we take steps to address this shortcoming by utilizing NIRCam imaging taken as part of the JWST Advanced Deep Extragalactic Survey (JADES; ).
JADES is a collaborative effort of the NIRCam, NIRSpec, and U.S. Mid-Infrared Instrument (MIRI; ) teams utilizing coordinated parallels to maximize science outcomes across the ≈770 hours of allocated observing time.
All data in JADES are being taken over the Great Observatories Origins Deep Survey (GOODS) fields in the northern (GOODS-N) and southern (GOODS-S) hemispheres <cit.> which contain some of the deepest Hubble imaging ever obtained <cit.>.
By adding exceptionally deep (m_AB∼30-31 mag 5σ) NIRCam imaging in several bands from ∼1–5μm (often including two medium-band filters) across >200 arcmin^2 of the GOODS fields, JADES is opening a completely new window on the high-redshift Universe <cit.>.
The primary goal of this paper is to utilize the deep NIRCam imaging from JADES to measure the rest-UV+optical SEDs among a large sample of very faint (M_UV > -18) z∼6-9 galaxies and statistically characterize their physical properties in detail, comparing with more luminous systems.
One of the key conclusions of this paper relates to the star formation histories of reionization-era galaxies, complementing the JADES/NIRCam investigation in <cit.> which utilized broadband SEDs to infer the presence of multiple bursts by considering non-parametric SFHs.
Here, we utilize the two long-wavelength NIRCam medium bands from JADES to focus on insight from nebular line emission about variations in star formation activity on short (∼3 Myr) timescales.
The structure of this paper is as follows.
We begin by describing the imaging data, source extraction, photometric calculations, sample selection, and photo-ionization modelling in <ref>.
Next, we present and discuss the UV luminosities, photometric redshifts, stellar masses, and light-weighted ages among our sample (<ref>).
We then proceed to utilize photometric constraints on strong rest-optical nebular lines ([OIII]+Hβ and Hα) to statistically analyze the star-forming and ionizing properties of relatively bright and very faint reionization-era galaxies (<ref>).
Finally, we quantify the photometric overdensities surrounding two strong Lyα emitters at z>7 within the JADES footprints to improve our understanding of the connection between strong Lyα and ionized bubbles deep in the epoch of reionization (<ref>).
Our main conclusions are summarized in <ref>.
Throughout this paper, we quote all magnitudes in the AB system, assume a <cit.> initial mass function (IMF) with limits of 0.1–300 , and adopt a flat ΛCDM cosmology with parameters h=0.7, Ω_M=0.3, and Ω_Λ=0.7. We provide a catalog listing coordinates, magnitudes, and inferred physical properties among our sample.
§ IMAGING DATA AND SAMPLE SELECTION
In this section, we first describe the JADES/NIRCam imaging over the GOODS-S and GOODS-N fields as well as the complementary Hubble Advanced Camera for Surveys (ACS) imaging that assists with our Lyman-break dropout selection (<ref>).
We next detail our procedure for source extraction and photometric measurements in <ref>, then describe our Lyman-break selections of z∼6 and z∼7-9 galaxies (<ref>).
Lastly, we describe the photo-ionization SED models used throughout this work to infer the physical properties of the galaxies within our sample (<ref>).
§.§ and Imaging
We consider all JADES/NIRCam imaging data taken prior to February 10, 2023 (see ).
These data include imaging in the broad-band F090W, F115W, F150W, F200W, F277W, F356W, and F444W filters[F070W imaging is also available over a single pointing as of Feb. 10, 2023. Because this represents a small fraction of the total JADES/NIRCam imaging at the time, we here ignore the F070W data.] as well as the F410M medium-band filter across the full footprints in both the GOODS-S and GOODS-N fields.
Additional medium-band F335M imaging is available over the majority of the imaging area taken up to Feb. 10, 2023.
Because the medium-band filters are particularly useful for determining whether galaxies show strong nebular line emission or continuum discontinuities (e.g. Balmer breaks), we here restrict our analysis to the ≈90 arcmin^2 of JADES/NIRCam imaging that include data in all nine of the above filters. This imaging is split approximately evenly into the GOODS-S (≈44 arcmin^2) and GOODS-N fields (≈46 arcmin^2).
The NIRCam imaging reduction algorithm applied here largely follows that described in previous works (e.g. ) and is detailed below.
We begin by processing the uncalibrated (*_uncal.fits) NIRCam exposures through the stage 1 step of the Science Calibration Pipeline[<https://jwst-pipeline.readthedocs.io/en/latest/index.html>] (v1.9.4) using the Calibration Reference
Data System (CRDS) context map jwst_1045.pmap.
During this step, we implement a custom snowball <cit.> masking algorithm largely based on the methods described in <cit.>.
From the output *_uncal.fits files, we subtract prominent artifacts (e.g. wisps; ) from the short-wavelength NIRCam imaging data using custom-built templates.
These custom templates were created by combining all F090W, F115W, F150W, and F200W imaging data (as of Feb. 1 2023) from JADES as well as the deep, blank-field public programs CEERS <cit.>, PRIMER (PI J. Dunlop), and NGDEEP <cit.>.
All snowball-masked and artifact-subtracted *_rate.fits files are visually inspected; for a very small fraction of imaging exposures, we implement additional masking of remaining prominent artifacts or remove the exposures from the reduction entirely if the overall data quality appears poor.
From the resulting files, we build custom sky flats in all JADES imaging bands considered here, folding in additional data from CEERS, PRIMER, NGDEEP, and FRESCO <cit.>.
The *_rate.fits files are then processed through the stage 2 step of the Science Calibration Pipeline with our custom sky flats and the photometric zeropoints from <cit.>, which remain the most recent values implemented in CRDS.
From the resulting calibrated *_cal.fits, we then subtract off 1/f noise <cit.> using the sigma-clipped median values along given rows (on an amplifier-by-amplifier basis) and columns.
Next, we subtract off the 2D background using the sep package <cit.>, following the methods described in <cit.>.
The final *_cal.fits files are then processed through the stage 3 step of the Calibration Pipeline to create mosaics with a pixel scale of 30 mas/pixel.
Using the tweakreg software[<https://drizzlepac.readthedocs.io/en/latest/tweakreg.html>], these mosaics are astrometrically matched to the HST imaging over the GOODS fields that has been registered to the Gaia frame (see below).
With the primary intention of improving the astrometric alignment, we fold F444W data from FRESCO (which significantly extends the covered area in each GOODS field) into our reduction, first aligning the F444W mosaics to the HST/WFC3 F160W images and then aligning all other NIRCam mosaics to that of F444W in the respective field.
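For orientation, the three stages described above correspond to the Detector1, Image2, and Image3 pipelines of the jwst package; the sketch below shows only the generic calling pattern, with file names and parameter overrides as placeholders rather than the exact JADES configuration.

# Generic jwst calibration-pipeline calling pattern (placeholder inputs).
from jwst.pipeline import Detector1Pipeline, Image2Pipeline, Image3Pipeline

# Stage 1: ramps to count-rate images (*_uncal.fits -> *_rate.fits)
Detector1Pipeline.call("jw_example_uncal.fits", save_results=True)

# Stage 2: flat fielding and photometric calibration (*_rate.fits -> *_cal.fits)
Image2Pipeline.call("jw_example_rate.fits", save_results=True)

# Stage 3: alignment, outlier rejection, and mosaicking of the *_cal.fits exposures
Image3Pipeline.call("example_association.json", save_results=True,
                    steps={"resample": {"pixel_scale": 0.03}})  # 30 mas/pixel, as in the text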
For our dropout selection of z∼6-9 galaxies, we also utilize imaging from HST taken over the GOODS fields.
The mosaics adopted here come from the Hubble Legacy Field archive (HLF; see and references therein) and are registered to the Gaia frame using the astrometry from the CHArGE images (G. Brammer private communication) as described in <cit.>.
All HLF mosaics used here (v2.0) have the same pixel scale as the final NIRCam mosaics (30 mas/pixel).
Here we focus on using the optical ACS imaging for our dropout selections, which includes data in the F435W, F606W, F775W, F814W, and F850LP bands.
To obtain reliable photometric colors across the 0.4–5μm SEDs, we must account for the fact that the angular resolutions of the ACS and NIRCam bands considered here differ by up to a factor of ≈4.
We therefore convolve all mosaics to the PSF of F444W, which has the poorest angular resolution among these bands (FWHM≈0.15 arcsec).
We refer the interested reader to <cit.> for details on our procedure for constructing the empirical PSFs (one per band and field) and creating the convolution kernels using photutils <cit.>.
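A minimal sketch of this PSF-homogenisation step, assuming empirical PSF images are already in hand (the window choice and the helper function below are ours, not those of the JADES pipeline):

# Convolve a higher-resolution mosaic to the F444W resolution with a matching kernel.
from astropy.convolution import convolve_fft
from photutils.psf.matching import TukeyWindow, create_matching_kernel

def match_to_f444w(mosaic, psf_band, psf_f444w):
    """psf_band and psf_f444w: same-shape, odd-sized, normalised 2D PSF arrays."""
    kernel = create_matching_kernel(psf_band, psf_f444w, window=TukeyWindow(alpha=0.4))
    return convolve_fft(mosaic, kernel, allow_huge=True)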
§.§ Source Extraction and Photometry
To identify objects across the JADES footprints, we run Source Extractor <cit.> on an inverse-variance weighted stack of the PSF-convolved F200W, F277W, F335M, F356W, F410M, and F444W mosaics.
The photometry is computed in Kron <cit.> apertures following commonly-used procedures for high-redshift galaxies <cit.>.
First, we measure the photometry on the PSF-matched mosaics of each band within elliptical apertures with a <cit.> factor of k=1.2, as these apertures have been shown to yield optimal S/N <cit.>.
The photometric errors are determined separately for each object and each band to take into account both the size and shape of the aperture, as well as the local background level from the varying exposure times across the mosaics.
Specifically, the photometric errors are measured as the standard deviation of flux values computed within randomly-placed elliptical apertures (of the size/shape of interest) in nearby empty regions of the mosaic (as determined using sep) with similar background pixel flux variance to the source of interest.
The photometry in the k=1.2 apertures (and the associated error values) are corrected to total flux in two stages.
The fluxes and their associated errors are first multiplied by the ratio of the flux measured in k=2.5 apertures divided by that of the flux in the k=1.2 apertures, with these measurements performed on the inverse-variance weighted stack used for the source extractor detection.
The second stage is correcting for flux outside the k=2.5 apertures by assuming a point-source light profile outside these apertures and adopting our empirical F444W PSFs.
We have verified that our measured photometry for z∼6-9 galaxies located in the current public release region of JADES (see ) is broadly consistent with the photometry released in that associated catalog.
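A simplified sketch of this two-stage Kron-style photometry with the sep package is given below; it is illustrative only and omits the PSF-based correction beyond the k=2.5 aperture as well as the per-band details of the real catalogue.

import numpy as np
import sep

def kron_flux(img, x, y, a, b, theta, k):
    """Flux in an elliptical aperture with semi-axes scaled by the Kron factor k."""
    flux, _, _ = sep.sum_ellipse(img, [x], [y], [a], [b], [theta], r=k, subpix=5)
    return float(flux[0])

def empty_aperture_scatter(img, segmap, a, b, theta, k=1.2, ntrials=200, seed=1):
    """Error estimate: scatter of fluxes in randomly placed, source-free apertures."""
    rng = np.random.default_rng(seed)
    ny, nx = img.shape
    fluxes = []
    while len(fluxes) < ntrials:
        x0, y0 = rng.uniform(5, nx - 5), rng.uniform(5, ny - 5)
        if segmap[int(y0), int(x0)] != 0:  # crude check that the position is empty
            continue
        fluxes.append(kron_flux(img, x0, y0, a, b, theta, k))
    return float(np.std(fluxes))

def total_flux(band_img, stack_img, x, y, a, b, theta):
    """Per-band k=1.2 flux scaled by the k=2.5 / k=1.2 ratio from the detection stack."""
    f_band = kron_flux(band_img, x, y, a, b, theta, k=1.2)
    ratio = (kron_flux(stack_img, x, y, a, b, theta, k=2.5)
             / kron_flux(stack_img, x, y, a, b, theta, k=1.2))
    return f_band * ratio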
Because of NIRCam's sensitivity, a significant number of our identified Lyman-break dropout z∼6-9 galaxies have k=2.5 apertures that contain one or more different objects, artificially boosting their recovered total fluxes.
We therefore utilize the neighbor subtraction algorithm described in <cit.> to subtract off any neighboring object within the k=2.5 aperture (as determined by the source extractor segmentation map) and recompute the photometry on those images.
Moreover, because the morphologies of z≳6 galaxies are sometimes clumpy <cit.>, the galaxies we intend to study can be identified as multiple separate objects by source extractor.
If two or more nearby objects satisfy one of our Lyman-break dropout selections (z∼6 or z∼7-9; see below) and have overlapping k=2.5 apertures, we determine the smallest elliptical aperture that contains all pixels from their combined segmentation maps and recompute the photometry in that aperture, subtracting off other nearby objects when appropriate.
§.§ Selection of Lyman-break z 6 - 9 Galaxies
Our primary goal is to statistically characterize the physical properties of very UV-faint reionization-era galaxies for the first time, and moreover compare their properties to that of the more UV-luminous population.
We therefore restrict our analysis to galaxies at z∼6-9, where the lower bound reflects the epoch at which reionization remains significantly incomplete <cit.>, while the upper bound of z∼9 is chosen such that the NIRCam SEDs remain sensitive to the [OIII]+Hβ emission line set, a key photometric probe of high-redshift galaxy properties <cit.>.
Following the approach of many previous studies <cit.>, we identify z∼6-9 galaxies via a Lyman-break color selection.
These Lyman-break selections rely on the appearance of a sharp spectral discontinuity at λ_rest = 1216 Å that is imposed by very strong absorption blueward of the Lyα line from HI in the intervening IGM (e.g. <cit.>).
Other studies have also selected high-redshift galaxies via photometric redshifts, including the study of <cit.> which aimed at establishing a census of z>8 galaxies within the JADES/NIRCam data set and providing a first discussion of their colors and morphologies.
Such photometric redshift selections have the advantage of folding in all photometric data points in determining the probability that a given object lies in the redshift interval of interest.
However, here we explicitly choose not to select on photometric redshifts as doing so may bias our sample preferentially towards objects with strong rest-optical lines (which imprint unique long-wavelength NIRCam color patterns) and the goal in this work is to characterize reionization-era galaxy physical properties.
Because the Lyα break spans a considerable range in observed wavelength across our targeted z∼6-9 interval, we divide our selection according to two separate sets of criteria, one for z∼6 and one for z∼7-9, which we discuss in sequence below.
At z∼6, the Lyα break is located at ≈0.85μm and thus galaxies at this redshift should appear as strong ACS/F775W dropouts <cit.>.
We therefore begin our z∼6 galaxy selection with the following color cuts:
* F775W - F090W > 1.2
* F090W - F150W < 1.0
* F775W - F090W > F090W - F150W + 1.2.
These criteria enforce the presence of a strong break between the F775W and F090W bands (at least 1.2 mags) as well as a much flatter color between F090W and F150W.
The second color cut above sets an upper redshift limit of z≈6.5 where the Lyα break redshifts well into F090W.
In these cuts, the F775W flux is set to its 1σ upper limit in cases where the S/N<1 in that band, following past Lyman-break color cut selections in the literature (e.g. <cit.>).
Additional analytic cuts are imposed to verify the presence of a strong break.
First, we enforce S/N<2 in ACS/F435W since this band lies fully blueward of the Lyman-continuum break at z>5.5 and all Lyman-continuum photons at these redshifts face extremely strong IGM attenuation.
Second, we ensure that a strong dropout is also seen in ACS/F606W with either F606W - F090W > 2.7, or F606W - F090W > 1.8 if the S/N(F606W)<2, again setting the F606W flux to its 1σ upper limit when S/N<1.
However, we ignore the cut in F435W S/N as well as the F606W dropout criteria for objects with extremely strong Lyα breaks (F775W - F090W > 2.5) which can confidently be identified as z∼6 galaxies even if photometric scatter impacts the F435W or F606W fluxes.
Finally, to ensure each source is real, we enforce the criteria that every z∼6 galaxy in our sample be detected at S/N>5 in at least one NIRCam band, as well as detected at S/N>3 in at least three NIRCam bands in addition to either ACS/F814W or ACS/F850LP.
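A compact illustration of these z∼6 selection criteria is given below. The dictionary-based interface, the nJy zero point (31.4 mag), and the function names are our own illustrative choices rather than the actual selection code.

```python
import numpy as np

NIRCAM = ['F090W', 'F115W', 'F150W', 'F200W', 'F277W',
          'F335M', 'F356W', 'F410M', 'F444W']

def ab_mag(flux, err):
    """AB magnitude from a flux density in nJy (zero point 31.4), replacing the
    flux with its 1-sigma upper limit when S/N < 1."""
    f = err if flux / err < 1.0 else flux
    return -2.5 * np.log10(f) + 31.4

def is_z6_dropout(flux, err):
    """flux, err: dicts of band name -> flux density and 1-sigma error (nJy)."""
    snr = {b: flux[b] / err[b] for b in flux}
    m = {b: ab_mag(flux[b], err[b]) for b in ('F606W', 'F775W', 'F090W', 'F150W')}
    lya_break = m['F775W'] - m['F090W']
    uv_color = m['F090W'] - m['F150W']
    colors_ok = (lya_break > 1.2) and (uv_color < 1.0) and (lya_break > uv_color + 1.2)
    # F435W non-detection and F606W dropout cuts, waived for extreme breaks
    blue_ok = (lya_break > 2.5) or (
        snr['F435W'] < 2.0 and
        ((m['F606W'] - m['F090W'] > 2.7) or
         (snr['F606W'] < 2.0 and m['F606W'] - m['F090W'] > 1.8)))
    # detection significance requirements
    detect_ok = (max(snr[b] for b in NIRCAM) > 5.0 and
                 sum(snr[b] > 3.0 for b in NIRCAM) >= 3 and
                 (snr['F814W'] > 3.0 or snr['F850LP'] > 3.0))
    return colors_ok and blue_ok and detect_ok
```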
Our z∼7-9 Lyman-break selection largely follows the same logic of the z∼6 criteria described above.
Specifically, we begin with the color cuts
* F090W - F115W > 1.5
* F115W - F200W < 1.2
* F090W - F115W > F115W - F200W + 1.5.
These cuts are satisfied by galaxies at z≈7.0-9.0 with blue rest-UV colors (-2.5 ≲β≲ -2.0), as well as extremely red (-1.0 ≲β≲ 0.0) objects at z≈7-8 where the Lyman-alpha break has not yet shifted into F115W.
As with the z∼6 criteria, we also impose the condition that S/N<2 in F435W.
For the z∼7-9 selection, we additionally utilize the χ^2_opt parameter defined in <cit.> as χ^2_opt ≡ ∑ [f / abs(f)] (f/σ)^2, where f and σ represent, respectively, the measured flux density and its error in a given band, and abs(f) is the absolute value of the flux density, so that each term enters the sum with the sign of the measured flux.
After summing over the ACS F435W, F606W, and F775W bands, we enforce χ^2_opt < 5.
However, we ignore the S/N cut in F435W as well as the χ^2_opt cut for objects with extremely strong Lyα breaks (F090W - F115W > 2.5).
Every selected z∼7-9 galaxy must be detected at S/N>5 in at least one NIRCam band redward of F090W, as well as detected at S/N>3 in at least three such bands.
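As with the z∼6 criteria, the z∼7-9 cuts (including the χ^2_opt statistic) can be sketched as follows; again, the structure and names are illustrative, not the actual selection code.

```python
import numpy as np

RED_NIRCAM = ['F115W', 'F150W', 'F200W', 'F277W',
              'F335M', 'F356W', 'F410M', 'F444W']   # NIRCam bands redward of F090W
ACS_BLUE = ['F435W', 'F606W', 'F775W']

def ab_mag(flux, err):                               # as in the z~6 sketch above
    f = err if flux / err < 1.0 else flux
    return -2.5 * np.log10(f) + 31.4

def chi2_opt(flux, err, bands=ACS_BLUE):
    """chi^2_opt = sum over bands of sign(f) * (f / sigma)^2."""
    return sum(np.sign(flux[b]) * (flux[b] / err[b]) ** 2 for b in bands)

def is_z7to9_dropout(flux, err):
    snr = {b: flux[b] / err[b] for b in flux}
    m = {b: ab_mag(flux[b], err[b]) for b in ('F090W', 'F115W', 'F200W')}
    lya_break = m['F090W'] - m['F115W']
    uv_color = m['F115W'] - m['F200W']
    colors_ok = (lya_break > 1.5) and (uv_color < 1.2) and (lya_break > uv_color + 1.5)
    # optical non-detection cuts, waived for extreme breaks
    blue_ok = (lya_break > 2.5) or (snr['F435W'] < 2.0 and chi2_opt(flux, err) < 5.0)
    detect_ok = (max(snr[b] for b in RED_NIRCAM) > 5.0 and
                 sum(snr[b] > 3.0 for b in RED_NIRCAM) >= 3)
    return colors_ok and blue_ok and detect_ok
```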
We enforce a final set of cuts to both the z∼6 and z∼7-9 samples to ensure that the NIRCam data are sufficiently sensitive for rest-optical color measurements (tracing e.g. Balmer breaks or nebular line EWs).
Following the IRAC-based approach of <cit.>, we require that f(FUV) / e(X) > 3 where f(FUV) is the far-UV flux density corresponding to the inferred value from the constant star formation history SED fits (see <ref>) while e(X) is the 1σ uncertainty in the flux density of band X.
Here, this cut is enforced with the four reddest bands in the JADES filter set (i.e. X ∈ {F335M, F356W, F410M, F444W}).
Every selected z∼6-9 object was visually inspected in all HST and NIRCam mosaics (both at original and PSF-matched resolution) to remove spurious sources due to artifacts (mostly diffraction spikes) or diffuse emission from large, low-redshift objects.
We also remove bright point-source objects that show colors consistent with brown dwarfs, which can mimic a strong F090W dropout.
The measured colors are compared to the empirical SPEX 0.8–2.5μm brown dwarf spectral library <cit.>, as well as the Sonora model brown dwarf spectral templates <cit.>.
After performing the final steps of neighbor subtraction and combined aperture photometry for the appropriate set of objects (see <ref>), we end up with a final sample of 280 F775W dropouts (z∼6) and 480 F090W dropouts (z∼7-9) across the ≈90 arcmin^2 area with coverage in ACS F435W, F606W, F775W, F814W, and F850LP as well as NIRCam F090W, F115W, F150W, F200W, F277W, F335M, F356W, F410M, and F444W.
§.§ Photoionization SED Modelling
To infer the physical properties of each Lyman-break z∼6-9 galaxy in our JADES sample, we fit their 14-band ACS+NIRCam photometry with star-forming photoionization models.
One of the primary goals of this work is to quantify the stellar masses implied by the rest-UV+optical SEDs.
As has been discussed previously <cit.>, stellar mass estimates can change significantly depending on the assumed star formation history (SFH).
We therefore fit every galaxy in our sample with four different sets of models to quantify the systemic stellar mass uncertainties.
Each of these four models is described in turn below, though we first note the model assumptions applied in every case.
For all four sets of SED model fits described below, we adopt a <cit.> stellar initial mass function (IMF) with bounds 0.1–300 as well as an SMC dust attenuation curve <cit.>.
Moreover, in each case, we fit with log-uniform priors on stellar mass in the range 5 ≤≤ 12, on V-band dust optical depth in the range -3 ≤ log τ__V ≤ 0.7, on ionization parameter in the range -4 ≤ log U ≤ -1, and on metallicity in the range -2.2 ≤ log(Z/Z_⊙) ≤ -0.3.
The upper limit of ≈50% Z_⊙ is set to avoid unphysically high metallicities for the faint (M_UV ≳ -20) reionization-era galaxies that vastly dominate our sample.
Moreover, the stellar and ISM metallicities are equivalent in all models considered here.
We leave implementations of e.g. a stellar mass to metallicity relation prior or α-enhanced metallicity models for future work.
All F775W and F090W dropouts are fit in the redshift range z=4-8 and z=6-10, respectively, with uniform priors adopting the IGM attenuation model of <cit.>.
We intentionally choose not to make sample cuts based on photometric redshifts as this would likely bias our sample towards z∼6-9 objects with strong nebular lines (and hence young light-weighted ages) given the unique NIRCam color patterns caused by and Hα lines at high redshifts.
beagle Constant SFH models: We first consider models adopting a constant SFH (hereafter CSFH) using the BayEsian Analysis of GaLaxy sEds (beagle) SED-fitting code <cit.>.
beagle fits photometry against the suite of star-forming photoionization SED models described in <cit.> which utilize isochrones computed by the PAdova and
TRieste Stellar Evolution Code (parsec; ).
The Bayesian multinest algorithm <cit.> is implemented in beagle to determine the posterior probability distribution for each free physical parameter in the fit.
In the beagle CSFH fits, the galaxy age is fit between 1 Myr and the age of the Universe at the sampled redshift with a log-uniform prior, where no star formation is assumed to have occurred prior to the fitted age in these models.
Throughout this work, we refer to these beagle CSFH ages as the light-weighted ages of the galaxies.
beagle Two-component SFH models: We allow for more flexible SFH models within context of beagle following the approach of previous works <cit.>.
Specifically, we adopt a two-component SFH (hereafter TcSFH) composed of 1) a delayed τ model component (SFR ∝ t e^-t/τ__SF where t is the time since the onset of star formation) and 2) a constant SFH component.
The onset of the delayed τ model component is assumed to have occurred between 10^1.35 Myr (≈22 Myr) before observation and the age of the Universe at the fitted redshift, while the constant SFH component defines the SFH over the most recent 1–20 Myr (log-uniform priors for both components).
The SFR from the delayed component is not added to the constant SFH component over its fitted recent time interval, allowing for both strong recent increases in the SFR, as well as strong recent drops in the SFR (and anything between).
For the delayed τ component, the τ__SF parameter is fit with a log-uniform prior in the range 1 Myr to 30 Gyr.
The SFR of the constant SFH component is determined via the specific star formation rate (sSFR) over the associated time interval, where we adopt a log-uniform prior of -14 ≤ log(sSFR/Gyr^-1) ≤ -6.
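A schematic implementation of this two-component SFH is shown below; the parameter names are ours rather than beagle's, and the normalisations are placeholders.

```python
import numpy as np

def tcsfh_sfr(t_lookback, t_onset, tau, t_const, sfr_const, norm_delayed=1.0):
    """Schematic SFR(t) for the two-component SFH (units: years and Msun/yr).

    A delayed-tau component, SFR ∝ (t_onset - t_lookback) * exp(-(t_onset -
    t_lookback) / tau), switches on at lookback time t_onset and is replaced
    (not supplemented) by a constant component of rate sfr_const over the most
    recent t_const years."""
    t_lookback = np.asarray(t_lookback, dtype=float)
    t_since_onset = t_onset - t_lookback
    delayed = np.where((t_since_onset > 0.0) & (t_lookback >= t_const),
                       norm_delayed * t_since_onset * np.exp(-t_since_onset / tau),
                       0.0)
    recent = np.where(t_lookback < t_const, sfr_const, 0.0)
    return delayed + recent

# e.g. an SFH with onset 300 Myr ago, tau = 100 Myr, and a 10 Myr constant episode:
t = np.linspace(0.0, 4e8, 500)
sfr = tcsfh_sfr(t, t_onset=3e8, tau=1e8, t_const=1e7, sfr_const=2.0)
```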
prospector continuity SFH prior models: We also consider non-parametric SFH models using the SED-fitting code prospector <cit.>.
prospector adopts the star-forming photoionization models from fsps <cit.> which by default utilizes products from the MESA Isochrones and Stellar Tracks project (mist; ).
Posterior probability distributions on fitted parameters are determined using the dynesty sampling package <cit.>.
We first consider the `continuity' prior implemented in the non-parametric SFH models in prospector.
This continuity SFH prior weights against strong changes in the SFR over adjacent time bins (see for details), thereby preferentially yielding more extended SFHs for galaxies with young light-weighted ages.
The implementation of the non-parametric SFH models in this work largely follows that described in <cit.>.
To summarize, the SFHs are composed of 8 time bins where the SFR in each is a constant value, and the ratios of the SFR in adjacent time bins are fit by prospector.
The earliest time bin extends to a fitted formation redshift in the range z_form = 10-30 (uniform prior), and the two most recent time bins are fixed to 0–3 Myr and 3–10 Myr while the remaining 6 time bins are divided evenly in logarithmic space to the fitted formation redshift.
prospector bursty SFH prior models: Finally, we consider the `bursty' SFH prior <cit.>, a slight modification of the continuity prior described above.
These bursty priors more easily allow for strong deviations in the SFR in adjacent time bins while still permitting very extended star formation histories like the continuity prior.
For each of the four models described above, fiducial values on inferred physical properties for a given galaxy are taken as the median of the posterior probability distribution.
Similarly, the associated ±1σ errors are taken as the inner 68% credible interval of the posterior.
Reported absolute UV magnitudes, M_UV, are computed here from the continuum flux density at rest-frame 1500 Å using the output redshift and SED posteriors.
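The conversion from the observed-frame continuum flux density at rest-frame 1500 Å to M_UV follows the standard distance-modulus relation, illustrated below for an assumed Planck cosmology (the function name and nJy zero point are our illustrative choices).

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

def absolute_uv_mag(fnu_1500_njy, z):
    """Absolute UV magnitude from the observed-frame flux density (nJy) of the
    continuum at rest-frame 1500 Angstrom for a source at redshift z."""
    m_ab = -2.5 * np.log10(fnu_1500_njy) + 31.4            # nJy -> AB magnitude
    d_l_pc = Planck18.luminosity_distance(z).to(u.pc).value
    # m - M = 5 log10(d_L / 10 pc) - 2.5 log10(1 + z) for a flux density
    # measured at fixed rest-frame wavelength
    return m_ab - 5.0 * np.log10(d_l_pc / 10.0) + 2.5 * np.log10(1.0 + z)

# e.g. absolute_uv_mag(50.0, 7.0) gives M_UV ~ -19.8
```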
Prior to running the SED fits, we add a 5% systematic error to all photometric measurements, largely with the intention of being conservative about the precision of current photometric zero points.
In general, we find that these star-forming photoionization models yield acceptable fits to the z∼6-9 Lyman-break galaxies in our sample.
For each of the four models sets described above, the median best-fitting χ^2 value over the full sample ranges between 10–14 (14 photometric data points are fit in every case).
Moreover, 90-98% of the SED fits yield best-fitting χ^2 < 28.
However, there are four objects in our sample with measured photometry that are very poorly fit by these star-forming only models.
These four objects are those with best-fitting χ^2 > 100 with the beagle models and best-fitting χ^2 > 50 for the other three models.
Because the goal in this paper is to study reionization-era galaxies in context of star-forming models, we remove these four objects from our sample and defer a detailed investigation and physical interpretation of their SEDs to an upcoming separate paper (Endsley et al. in prep).
Two of these objects were selected as F775W dropouts while the other two were selected as F090W dropouts.
Therefore, our final sample of Lyman-break z∼6-9 galaxies analyzed below consists of 278 F775W dropouts (z∼6) and 478 F090W dropouts (z∼7-9).
§ THE PROPERTIES OF GALAXIES AT REDSHIFTS 6 - 9
In this section we discuss the overall properties of the sample, beginning with two basic parameters – photometric redshifts and absolute UV magnitudes – and then shifting our focus to the inferred stellar masses (<ref>) and light-weighted ages (<ref>).
We include a detailed discussion of the most massive objects in our sample, as well as those with the youngest and oldest light-weighted ages.
The F775W and F090W dropout subsets are largely comprised of galaxies with photometric redshifts in the range 5.5<z_phot<6.5 and 6.5<z_phot<8.5, respectively (see Fig. <ref>), consistent with expectations given our Lyman-break selection criteria.
The mean photometric redshift is ⟨ z_phot⟩ = 5.9 and 7.3 for the F775W and F090W dropout subsets, respectively.
Considering all 756 of these z∼6-9 JADES/NIRCam galaxies, our sample spans a very broad range in absolute UV magnitude, from M_UV = -22.0 at the bright end to M_UV = -16.4 at the faint end (see Fig. <ref>).
Adopting a characteristic UV luminosity of M^∗_UV = -20.5 <cit.>, the range of our sample corresponds to ≈0.02–4 L^∗_UV, thus covering a factor of ≈200 in UV luminosity.
The average absolute UV magnitude of the F775W dropout (z∼6) subset is ⟨M_UV⟩ = -18.6, slightly brighter than that of the F090W dropout subset (⟨M_UV⟩ = -18.3), given that the sensitivity of our F775W selection is limited more by the HST/ACS depth.
Both dropout subsets have very similar standard deviations in M_UV (0.9–1.0 mag) and each includes a handful of galaxies at very bright UV luminosities (M_UV < -21.25, or L_UV > 2 L^∗_UV) as well as ≈100–200 galaxies at the very faint end (M_UV > -18, or L_UV < 0.1 L^∗_UV; see Fig. <ref>).
§.§ The Stellar Masses of Lyman-break z ∼ 6 - 9 Galaxies
The 756 Lyman-break z∼6-9 galaxies comprising our JADES sample possess a wide range of inferred stellar masses.
From the beagle CSFH models, we infer stellar masses spanning approximately 3.5 orders of magnitude, from ≈2×10^6 M_⊙ up to ≈7×10^9 M_⊙ (see Fig. <ref>a,b).
However, we find that the inferred stellar masses of individual z∼6-9 galaxies can change substantially depending on the assumed SFH, consistent with the results of previous works <cit.>.
In our sample the beagle CSFH stellar masses are, on average, approximately 0.2, 0.1, and 0.5 dex smaller than (i.e. 0.6×, 0.8×, and 0.3×) the stellar masses inferred from the beagle two-component SFH fits, the prospector bursty SFH prior fits, and the prospector continuity SFH prior fits, respectively (see Fig. <ref>e–g).
Therefore, the beagle CSFH fits tend to yield the lowest stellar masses among our sample while the prospector continuity SFH prior fits typically yield the largest mass estimates.
This is because the bulk of galaxies in our sample have young inferred ages in context of a CSFH (∼50 Myr; see <ref>) and in these models no stars are assumed to have formed prior to that inferred age.
On the other hand, the prospector continuity prior imposes a preference for significant star formation extending back to z=10-30 (Δ t ∼ 300-650 Myr for a galaxy at z∼7) by weighting against strong time-variability in the SFH.
For galaxies with the youngest light-weighted ages in our sample (< 10 Myr), the stellar masses inferred with the prospector continuity SFH prior are typically 0.8 dex (i.e. 6×) higher than that inferred with the beagle CSFH setup, though this difference can rise to factors of ∼30–100 in the most extreme cases (see Fig. <ref>g) as found in <cit.> using a sample of very bright (≲ -21) z∼7 galaxies with IRAC coverage.
The offset between constant and continuity prior SFH stellar masses is generally more moderate among galaxies with relatively old light-weighted ages (> 100 Myr), with a median and maximum offset of approximately 0.3 dex (2×) and 0.7 dex (5×), respectively (see Fig. <ref>g), again consistent with the IRAC-based findings of <cit.>.
Consequently, we find that the range of stellar masses for our sample shifts upwards to ≈ 7×10^6 - 2×10^10 M_⊙ with the prospector continuity SFH prior fits (see Fig. <ref>c,d), though we continue to conclude that the faintest (M_UV ∼ -17) and youngest (≲ 30 Myr) galaxies tend to have the lowest stellar masses of ∼10^7 M_⊙.
We quantify the relationship between UV luminosity and stellar mass among our sample by fitting the equation log(M_∗ / M_⊙) = a (M_UV + 19) + b.
This relation is derived at z∼6 and z∼7-9 by fitting the F775W and F090W dropout subsets separately, and we moreover perform the fits using the beagle CSFH and prospector continuity SFH prior models to bracket the systematic uncertainties associated from different SFH assumptions.
In these M_UV–M_∗ fits, we only include objects with relatively bright UV luminosities (M_UV ≤ -18) as we aim to mitigate potentially significant biases at the faintest end, though we acknowledge that future work directly accounting for incompleteness will be required.
The fitted parameters (a and b) and their uncertainties are taken as the median and standard deviation of values obtained from 1000 realizations of randomly sampling stellar masses and UV luminosities of each galaxy from the posteriors of their beagle CSFH outputs, again keeping only those with M_UV ≤ -18.
With this simple approach, we derive the M_UV–M_∗ relations shown in panels a–d of Fig. <ref>.
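A minimal sketch of this posterior-resampling fit is given below; it assumes per-galaxy posterior chains of M_UV and log M_∗ are available as arrays, with names of our own choosing.

```python
import numpy as np

def fit_muv_mstar(muv_post, logm_post, n_real=1000, muv_cut=-18.0, seed=None):
    """Fit log10(M*/Msun) = a * (M_UV + 19) + b by drawing one joint posterior
    sample per galaxy in each of n_real realizations.

    muv_post, logm_post : lists of 1D arrays (one per galaxy) holding posterior
                          draws of M_UV and log10(M*/Msun); the two arrays for a
                          given galaxy are assumed to share the same chain index.
    Returns the median and standard deviation of (a, b) over the realizations."""
    rng = np.random.default_rng(seed)
    slopes, intercepts = [], []
    for _ in range(n_real):
        muv, logm = [], []
        for mu_i, lm_i in zip(muv_post, logm_post):
            j = rng.integers(len(mu_i))        # one joint draw per galaxy
            muv.append(mu_i[j])
            logm.append(lm_i[j])
        muv, logm = np.array(muv), np.array(logm)
        keep = muv <= muv_cut                  # restrict to the brighter objects
        a, b = np.polyfit(muv[keep] + 19.0, logm[keep], 1)
        slopes.append(a)
        intercepts.append(b)
    slopes, intercepts = np.array(slopes), np.array(intercepts)
    return np.median(slopes), slopes.std(), np.median(intercepts), intercepts.std()
```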
When adopting the beagle CSFH models, we infer a typical stellar mass of ≈2.1×10^8 M_⊙ and ≈3.4×10^7 M_⊙ for z∼6 galaxies with M_UV = -20 and M_UV = -18, respectively.
At z∼7-9, the typical inferred CSFH stellar masses are lower by ≈0.15–0.2 dex at these M_UV values, resulting in ≈1.5×10^8 M_⊙ and ≈2.1×10^7 M_⊙, respectively.
This redshift trend is consistent with previous findings that, at fixed M_UV, typical inferred stellar masses decrease slightly with increasing redshift at z∼6-9, implying systematically higher sSFRs at earlier epochs <cit.>.
The normalization of our derived M_UV–M_∗ relation is ≈0.5 dex (i.e. ≈3×) higher when adopting the stellar masses from the prospector continuity SFH prior fits, as expected from our discussion above.
The slope of the M_UV–M_∗ relation we derive in each case (-0.48 ≤ a ≤ -0.40) is consistent with results of several previous works <cit.>.
However, we generally infer considerably (∼0.3–0.5 dex) lower stellar masses at fixed M_UV than found previously with HST and Spitzer data, even when accounting for differences in the assumed SFHs throughout various studies.
We defer a more detailed comparison with literature results to a future work, though we note that the z∼7-9 CSFH M_UV–M_∗ relation derived here is consistent with the early results of <cit.>, perhaps reflecting the much richer information gained from NIRCam on reionization-era galaxy rest-optical SEDs.
§.§.§ The Most Massive Lyman-break z ∼ 6 - 9 Galaxies in JADES
Shortly after the first JWST/NIRCam images were released, there were reports of very high-redshift (z∼7-11) candidates with extremely large stellar masses of ∼10^11 M_⊙ identified over small areas (≈40 arcmin^2; <cit.>), which clearly challenged models of galaxy formation <cit.>.
Since these initial reports[<https://arxiv.org/abs/2207.12446v2>], stellar mass estimates for nearly all the candidates have been lowered to <3×10^10 M_⊙, though one candidate with M_∗ = 10^10.9 M_⊙ remains in the <cit.> sample.
However, due to the limited filter set available in these earliest data, solutions with 10–200× lower stellar masses can be obtained for the most massive candidate in <cit.> by adopting different model assumptions that yield similar (if not considerably better) χ^2 values (see <cit.>).
Here, we utilize the deep 9-band NIRCam imaging of JADES (including two long-wavelength medium bands) to build upon these initial studies by investigating the potential abundance of extremely massive z≳6 galaxies.
There are no galaxies in our sample where the photometric data clearly imply stellar masses of >3×10^10 M_⊙.
Only a small number of galaxies have inferred stellar masses around ∼10^10 M_⊙ when applying the continuity SFH prior with prospector which, as discussed in <ref>, generally yields the maximum stellar mass estimate for a given galaxy (see Fig. <ref>).
Here, we discuss these most massive z∼6-9 candidates in our JADES sample.
There are 13 galaxies with inferred stellar masses of ≥3×10^9 M_⊙ from the prospector continuity SFH prior fits among our JADES sample.
Eleven and two of these thirteen galaxies are in the F775W and F090W dropout subsets, respectively, and all of their SEDs are shown in Figs. <ref> and <ref>.
Unsurprisingly, many of these objects show NIRCam colors consistent with strong Balmer breaks implying that a relatively old (≳300 Myr) stellar population is contributing significantly to the emergent light.
However, a few of these galaxies show colors consistent with relatively weak Balmer breaks and are simply so luminous (-22 ≲≲ -21.5) that a substantial population of relatively old stars can be hidden within the observed SED if outshined by more recently-formed stars.
The galaxy with the highest inferred stellar mass in the sample (JADES-GS+53.03755-27.87491; M_∗ = 10^10.2±0.1 M_⊙) lies at z_phot≈ 6.1, is very luminous in the rest-UV (M_UV = -21.6), and shows a significant Balmer break as well as moderate nebular line emission from [OIII]+Hβ and Hα (Fig. <ref>).
Fortuitously, this galaxy falls within the deep MIRI F770W parallel imaging of JADES (see ) and is clearly detected at high S/N, granting improved constraints on the total amount of stellar mass allowed by the SED by extending measurements into the rest-frame near infrared <cit.>.
With the MIRI measurement included (Alberts et al. in prep), we infer a consistent though slightly (≈0.2 dex) lower stellar mass of M_∗ = 10^10.0±0.1 M_⊙ (see Fig. <ref>).
Because the F770W flux density (largely probing the rest-NIR continuum) is significantly higher than that in F410M (probing the rest-optical continuum), the model posteriors push to higher metallicity solutions (≈0.3 Z_⊙) over that without the MIRI data (≈0.05–0.1 Z_⊙).
Such higher metallicity solutions alter the light-to-mass ratio across the rest-frame ≈0.4-1μm SED, resulting in a slightly lower stellar mass.
A more systematic assessment of how including MIRI data impacts stellar mass inferences among the high-redshift JADES sample will be presented in upcoming works (Florian et al. in prep.; Helton et al. in prep; Ji et al. in prep).
Aside from JADES-GS+53.03755-27.87491, there is only one other galaxy in the z∼6-9 sample considered here with a comparable stellar mass: JADES-GN+189.13794+62.23601, with an estimated M_∗ = 10^10.0±0.1 M_⊙ from the ACS+NIRCam data.
JADES-GN+189.13794+62.23601 does not possess any MIRI constraints in the rest-frame near-infrared so we adopt this estimated mass as fiducial (in context of an extended SFH), though we note that previous work has shown that the addition of MIRI data generally lowers estimated stellar masses of high-redshift galaxies <cit.>.
Overall, we find that the maximum inferred stellar mass (from the continuity prospector fits) among all galaxies in our sample is 1.0×10^10 M_⊙.
We remind the reader that, in this work, we are only considering Lyman-break selected z∼6-9 galaxies with photometry that can be reasonably explained by star-forming photoionization models.
There are four Lyman-break z∼6-9 objects in JADES with photometry that is very poorly matched by star-forming only models and hence have been excluded from the sample considered here (see <ref>).
Notably, these include compact objects with NIRCam colors similar to the most massive candidate reported in <cit.> and others subsequently identified in more recent data sets <cit.>.
In a forthcoming paper (Endsley et al. in prep), we will discuss potential AGN solutions and the impact on stellar mass estimates for these four z∼6-9 objects in JADES that are poorly fit with star-forming only models.
§.§ The Constant SFH Ages of Lyman-break z ∼ 6 - 9 Galaxies
In this sub-section, we discuss the light-weighted ages (defined here as CSFH ages) inferred among our sample of 756 Lyman-break galaxies.
A wide variety of rest UV+optical SED shapes are clearly found across our sample, indicating a large diversity in light-weighted ages among reionization-era galaxies.
The typical galaxy in our sample shows a NIRCam SED consistent with a roughly fixed power-law continuum extending from the rest-UV to rest-optical, implying a relatively weak Balmer break and hence a young light-weighted age (∼ 50 Myr) consistent with previous findings from early data <cit.>.
The SEDs for a subset of these typical, young galaxies are shown in Fig. <ref> including a variety of redshifts and UV luminosities.
Many of the young galaxies show a significant photometric excess in at least one long-wavelength band consistent with strong [OIII]+Hβ and/or Hα emission (EW∼700–1000 Å; see e.g. IDs JADES-GN+189.19124+62.19952 and JADES-GS+53.14154-27.82320 in Fig. <ref>), implying a substantial contribution of hot, massive stars to the SED.
However, a subset of the young (∼ 50 Myr) galaxies have measured photometry consistent with surprisingly weak nebular line emission (e.g. IDs JADES-GN+189.19376+62.29430 and JADES-GS+53.15167-27.80925 in Fig. <ref>); this sub-population is discussed further in <ref>.
In addition to the typical young (∼50 Myr) galaxies, several sources in our sample show long-wavelength photometric excess patterns indicating extremely strong nebular line emission ([OIII]+Hβ EW ∼ 2000-5000 Å or Hα EW ∼ 1000-2500 Å), consistent with exceptionally young light-weighted ages (∼3 Myr).
Other galaxies show strong Balmer breaks with signatures of weak-to-no nebular line emission ([OIII]+Hβ EW ≲ 300 Å), consistent with relatively old light-weighted ages (∼300-1000 Myr).
These sub-populations representing the extreme ends of the light-weighted age distribution are analyzed in greater detail below.
§.§.§ Galaxies with the Strongest Balmer Breaks
The task of identifying reionization-era galaxies with prominent Balmer breaks has long been hindered by the sparse rest-optical SED sampling afforded by Spitzer/IRAC photometry, leading to strong degeneracy with SED solutions of extremely strong nebular line emission (e.g. <cit.>).
This degeneracy could be alleviated for galaxies situated in specific redshift intervals, leading to the early identification of a small number of z∼7-9 galaxies showing colors consistent with strong Balmer breaks <cit.>.
The list of z∼6-9 galaxies reported to exhibit prominent Balmer breaks has steadily grown since the delivery of the first JWST data <cit.>, though in some cases the available NIRCam photometry could not confidently rule out solutions with extremely strong nebular line emission.
Now equipped with deep imaging in two non-overlapping medium bands as well as three broad bands at ≈3–5μm, we identify and characterize the z∼6-9 Lyman-break galaxies in our JADES data that confidently show significant Balmer breaks.
For ease of comparison with existing literature <cit.>, we quantify the strength of the Balmer break as the ratio of continuum flux densities F_ν(4200 Å)/F_ν(3500 Å), where F_ν is the (continuum) flux density at the specified rest-frame wavelength.
These flux densities are taken from the posterior SEDs of the beagle fits and are corrected for dust using the inferred A_V value associated with each step of the nested sampling chain.
Here we focus on the subset of galaxies with inferred break strengths >1.25 at >84% probability from the posteriors of both the CSFH and two-component SFH beagle fits.
These objects are classified as those with confident strong Balmer breaks throughout this work.
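A minimal sketch of this classification is given below, assuming posterior draws of the dust-corrected break-strength ratio are available from each set of fits (the function and argument names are illustrative).

```python
import numpy as np

def confident_strong_break(ratio_csfh, ratio_tcsfh, threshold=1.25, prob=0.84):
    """Return True when both the CSFH and TcSFH posteriors give
    P(Balmer break strength > threshold) > prob.

    ratio_csfh, ratio_tcsfh : 1D arrays of posterior draws of the dust-corrected
    F_nu(4200 A) / F_nu(3500 A) ratio from the two sets of fits."""
    p_csfh = np.mean(np.asarray(ratio_csfh) > threshold)
    p_tcsfh = np.mean(np.asarray(ratio_tcsfh) > threshold)
    return bool(p_csfh > prob and p_tcsfh > prob)
```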
We identify 13 and 9 galaxies with confident strong Balmer breaks (break strengths >1.25) in the F775W and F090W dropout samples, respectively, a subset of which is shown in Fig. <ref>.
Notably, four of these 22 galaxies fall in the very UV-faint regime (-18 < M_UV < -17; see e.g. JADES-GS+53.15472-27.81561 in Fig. <ref>), indicating that Balmer breaks do exist among reionization-era galaxies that were previously only identifiable in very deep imaging.
The full sample of 22 galaxies with confident Balmer breaks spans absolute UV magnitudes of -20.2 ≤ M_UV ≤ -17.1 and redshifts of z_phot = 5.1-7.9.
While we do not identify any objects with confident strong Balmer breaks at the brightest UV luminosities (-22 ≲ M_UV ≲ -21), we emphasize that our sample size is very limited at this end of the luminosity function given the search area considered in this work (≈90 arcmin^2).
IRAC studies covering >deg^2 fields have revealed z∼7-8 galaxy candidates in this luminosity regime with potential strong Balmer breaks <cit.>.
The JADES galaxies with confident strong Balmer breaks are among those with the oldest light-weighted ages in the sample with CSFH ages spanning ≈ 250 - 1000 Myr.
In context of these CSFH models, we may expect to see significant rest-optical emission line signatures in the photometry from O stars that have been produced over the past ∼10 Myr.
Indeed, a subset of galaxies with confident strong Balmer breaks show long-wavelength photometric excesses implying [OIII]+Hβ EWs ≈ 300–400 Å (e.g. IDs JADES-GN+189.16254+62.25824 and JADES-GS+53.17954-27.77444 in Fig. <ref>).
These emission line signatures imply moderate sSFRs over the most recent 10 Myr of sSFR_10 Myr≈ 1-10 Gyr^-1 with all four SED fitting procedures described in <ref>.
However, we also identify a number of galaxies with strong Balmer breaks showing colors consistent with very weak to no rest-optical line emission ([OIII]+Hβ and Hα EWs ≲ 100 Å; e.g. JADES-GN+189.27602+62.19577 and JADES-GS+53.15472-27.81561 in Fig. <ref>).
For these galaxies, the SED fits allowing for more flexible SFHs tend to yield very low recent specific star formation rates (sSFR_10 Myr≲ 0.3 Gyr^-1) due to a declining SFH.
The CSFH fits are forced to sSFR_10 Myr > 1 Gyr^-1 solutions given the age of the Universe at z>6, and thus always include some modest line emission in the models (minimum [OIII]+Hβ EW ∼ 200 Å).
Due to the very minor effect such weak emission lines can have on the observed broadband photometry, we cannot reliably conclude whether any of these objects are in fact experiencing a declining SFH.
However, NIRSpec spectroscopy has proven to be highly effective at confirming the existence of such systems in the early Universe, with two post-starburst/micro-quenched galaxies now known at z=5.2-7.3 <cit.>.
Dedicated follow-up of the candidates identified in this work would help characterize and clarify the abundance of these relatively inactive galaxies during the reionization era.
§.§.§ The Most Extreme Line Emitters
For about a decade, it has been known that z>4 galaxies typically exhibit high EW (≳500 Å) rest-optical nebular emission lines (i.e. [OIII]+Hβ or Hα) from their impact on broadband Spitzer/IRAC colors (e.g. <cit.>).
As more observations were dedicated to deep IRAC imaging, it became clear that a surprisingly large number of bright (M_UV ≲ -20) z∼7-8 Lyman-break galaxies exhibit very high EW [OIII]+Hβ emission (≳1500 Å; <cit.>), implying efficient ionizing photon production and very young light-weighted ages.
In the past year, JWST imaging has proven to be highly efficient at identifying many more of these z≳6 extreme line emitters at fainter continuum luminosities, particularly when the nebular lines are situated in a medium band (e.g. <cit.>).
In this sub-section, we build upon these previous works by utilizing the deep 9-band NIRCam imaging from JADES (including F335M and F410M) to identify and characterize a statistical sample of z∼6-9 galaxies which confidently exhibit extreme rest-optical line emission.
Across our full sample of 756 z∼6-9 Lyman-break GOODS+JADES galaxies, we identify several systems with long-wavelength NIRCam color patterns indicating extremely high EW rest-optical emission lines.
For the purpose of explicitly investigating this population, we restrict our attention to galaxies where the beagle posteriors yield >84% probability of extremely high [OIII]+Hβ EWs from both the constant SFH and two-component SFH fits.
By imposing this probability cut on the results from two SFH models we better ensure that the inferred presence of extreme line emission is likely not being confused with a very strong Balmer break.
There are 20 and 29 galaxies that confidently exhibit extreme [OIII]+Hβ emission (EW>1500 Å) within our F775W and F090W dropout samples, respectively.
We show the SEDs of a subset of these systems in Fig. <ref>, where objects were chosen to reflect a range of UV magnitudes and redshifts.
The confident extreme [OIII]+Hβ emitters in GOODS+JADES have absolute UV magnitudes spanning -21.0 ≤ M_UV ≤ -16.9 and thus encompass nearly the entire range of our full z∼6-9 sample.
Remarkably, this includes galaxies in the very UV-faint regime (M_UV > -18; see e.g. JADES-GS+53.17835-27.77879 and JADES-GS+53.19590-27.79240 in Fig. <ref>).
The vast majority of these extreme emitters show color patterns indicating they lie at redshifts consistent with the targeted windows of our dropout selections (z_phot≈ 5.5 - 8.4).
Only two outlier sources show strong excesses in F277W suggesting z_phot∼ 5 which presumably entered our z∼6 Lyman-break selection due to photometric noise in F775W and/or F090W.
The confident extreme emitters all show SEDs consistent with exceptionally young light-weighted ages ( ≈ 3 Myr) where the emergent light is heavily dominated by both stellar and nebular emission powered by recently-formed O stars.
Several of these galaxies exhibit a significant drop in flux density between the short-wavelength NIRCam bands and at least one of the long-wavelength medium bands implying a significant Balmer jump between the rest-UV and optical continua (see e.g. IDs JADES-GS+53.16900-27.80079 and JADES-GN+189.33577+62.18652 in Fig. <ref>).
Such Balmer jumps are caused by strong nebular continuum emission which is most prominent at very young ages and low metallicities (e.g. <cit.>).
It is only for these objects that the inferred [OIII]+Hβ EWs confidently reach values of >3000 Å (up to ≈5000 Å), as the data give direct evidence for weak rest-optical continuum emission yet extremely strong line emission from excesses in other bands.
The very young light-weighted ages indicate that these extreme emitters have recently experienced a dramatic rise in star formation rate <cit.>.
In context of a constant star formation history, we infer an extremely large typical sSFR_10 Myr∼ 300 Gyr^-1 for these extreme emitters, which is simply the inverse of their typical light-weighted age (∼ 3 Myr).
But even with the prospector continuity SFH prior fits (which generally yield the lowest sSFR estimates; see <ref>), we continue to infer a very high median sSFR averaged over the past 10 Myr for these systems (26 Gyr^-1) with values up to 72 Gyr^-1.
At z∼6, the JADES/NIRCam photometry provides constraints not only on the EW of [OIII]+Hβ emission, but also on that of Hα.
We identify 31 galaxies within our F775W dropout sub-sample that have a >84% posterior probability of Hα EW>800 Å and z<6.5 from both the CSFH and two-component beagle SFH fits.
Here, we have added the z<6.5 criteria as the Hα line begins to redshift out of F444W at higher redshifts.
This sample of confident extreme Hα emitters overlaps significantly with the confident extreme [OIII]+Hβ emitters (20 objects are in both subsets) and thus has similar characteristics.
The confident Hα emitters span absolute UV magnitudes of -21.0 ≤ M_UV ≤ -17.4 and light-weighted ages of ≈ 1–10 Myr.
A subset show colors implying prominent Balmer jumps in their continuum and have inferred Hα EWs confidently reaching values up to ≈2000-3000 Å (see e.g. JADES-GS+53.16900-27.80079 in Fig. <ref>).
Their typical specific star formation rates range from sSFR_10 Myr = 150 Gyr^-1 with the beagle CSFH fits down to sSFR_10 Myr = 14 Gyr^-1 with the prospector continuity prior SFH models.
We place both the extreme [OIII]+Hβ and extreme Hα emitter populations in context of the broader sample in the following section.
§ THE DEPENDENCE OF [OIII]+Hβ AND Hα EQUIVALENT WIDTH ON UV LUMINOSITY
The EWs of rest-optical nebular lines have long provided key insight into the physical properties of reionization-era galaxies <cit.>.
Previous studies have found that relatively bright (M_UV ≲ -20) z∼7 systems commonly show far higher [OIII]+Hβ EWs than galaxies even at z∼2 <cit.>, implying generally less chemically enriched gas and more vigorous star formation at earlier times.
However, the nature of very UV-faint (M_UV ≳ -18; L_UV ≲ 0.1 L^∗_UV) z≳6 galaxies has remained far less clear given the difficulties in obtaining sensitive rest-UV+optical SED measurements for a statistical sample of such systems.
Photometric constraints on the nebular emission line EWs of this relatively numerous galaxy population would provide a powerful first glimpse into their physical properties, as these lines are sensitive to several physical parameters including metallicity, star formation history, and ionizing photon production and escape.
With our JADES/NIRCam sample of 756 z∼6-9 Lyman-break galaxies, we now investigate how the reionization-era [OIII]+Hβ and Hα EW distributions correlate with UV luminosity over a large dynamic range (-22 ≲ M_UV ≲ -16.5).
We focus on investigating how the EWs trend with UV luminosity given that the galaxies with the lowest stellar masses in our sample are biased towards those with the youngest light-weighted ages (see Fig. <ref>), and thus will be weighted towards higher EWs.
For the sake of clarity, this section is divided into sub-sections first describing the methodology for inferring the EW distribution as a function of M_UV (<ref>), followed by the results on the [OIII]+Hβ (<ref>) and Hα (<ref>) EW distributions.
We then discuss what these results may imply for the star-forming and ionizing properties of very early galaxies, considering potential evidence for bursty star formation histories (<ref>) or substantial ionizing photon leakage (<ref>) and what these scenarios imply for the contribution of galaxies along the UV luminosity function to cosmic reionization (<ref>).
§.§ Methodology
Here, we describe how we quantify the UV luminosity dependence of the [OIII]+Hβ and Hα EW distributions in the reionization era.
For this analysis, we divide our JADES/NIRCam sample into three different subsets separated by absolute UV magnitude: a bright (M_UV ≤ -19.5) subset, a faint (-19.5 < M_UV ≤ -18) subset, and a very faint (M_UV > -18) subset.
We also aim to test whether there is any significant redshift evolution in the [OIII]+Hβ EW distribution during the reionization era, and thus infer this distribution for the F775W dropouts (z∼6) and F090W dropouts (z∼7-9) separately.
Because the Hα line redshifts out of the reddest JADES/NIRCam band (F444W) at z>6.6, we restrict our analysis of the Hα EW distribution to the F775W dropout sample.
To infer the EW distribution among a given sub-population of our sample, we follow the Bayesian formalism described in <cit.> (see also <cit.>):
P (θ) ∝∏_i ∫ P_i(EW) P(EW|θ) dEW.
In this equation, θ encapsulates the parameters describing the assumed functional form of the distribution (e.g. the median and variance for a log-normal distribution) while P_i(EW) is the probability distribution function (PDF) on EW for the i^th galaxy in the sub-population of interest.
This formalism has two highly-desired features.
First, the expression in the integral of Eq. <ref> is small when the assumed EW distribution (P(EW|θ)) is inconsistent with the PDF of galaxy i while the expression is large when the assumed distribution and PDF strongly overlap.
Second, galaxies with narrow (i.e. precise) PDFs will have much stronger constraining power on the inferred P (θ) than galaxies with very broad (i.e. largely unconstrained) PDFs.
The PDFs for each galaxy are taken from the posterior of the SED fits, but are explicitly corrected for the prior on EW imposed by the SED fitting approach of interest (i.e. P_i(EW) ≡posterior_i(EW) / prior(EW)).
In doing so, we ensure that our resulting inferred EW distributions are being driven by evidence from the photometry rather than the choice of priors in the SED modelling.
These model priors have a stronger influence on the output posterior probability distribution functions for galaxies with lower signal-to-noise data, and hence would systematically impact the fainter populations more if we did not divide out the priors when computing P_i(EW).
To obtain the EW priors for each SED modelling setup, we fit a single photometric data point in the rest-UV thereby allowing the sampling algorithms to completely explore the allowed parameter space on and Hα EW.
In practice, we treat the integral in Eq. <ref> as a Riemann sum to avoid having to approximate P_i (EW) as some functional form via, e.g., spline interpolation.
Therefore, we rewrite Eq. <ref> as
P (θ) ∝∏_i [ ∑_j P_i,j(EW) P_j(EW|θ) ]
where the index j represents a bin in EW.
For the [OIII]+Hβ EW inference, we divide the EW bins by values of log(EW/Å) = 2.5–3.7 with spacing of 0.05 dex, yielding 26 total bins (i.e. log(EW/Å) = [≤2.50, 2.50–2.55, 2.55–2.60, ..., 3.65–3.70, ≥3.70]).
We chose a lower bound of log(EW/Å) = 2.5 to reflect the fact that the photometric data can only place an effective upper limit on the inferred EWs of ≈300 Å for galaxies with the lowest S/N of ∼3-5 in the long-wavelength NIRCam bands.
The upper bound of log(EW/Å) = 3.7 was chosen given that the maximum inferred [OIII]+Hβ EW in our sample is ≈5000 Å (see <ref>).
Similarly, the Hα EW bins applied in Eq. <ref> are divided by values of log(EW/Å) = 2.5–3.5 with spacing of 0.05 dex, where the upper bound of log(EW/Å) = 3.5 reflects the maximum inferred Hα EW of ≈3000 Å in the sample.
For each galaxy sub-population, we infer the EW distribution assuming a log-normal functional form defined by two parameters: the median of the EW distribution and its standard deviation (in dex).
This assumption of a log-normal EW distribution is motivated by spectroscopic results at lower redshifts <cit.>.
Finally, we infer the EW distributions using both the beagle constant star formation history (CSFH) and two-component star formation history (TcSFH) models.
Due to our findings in <ref> (see Fig. <ref>), we adopt the results from the TcSFH models as fiducial.
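The binned, Riemann-sum form of this inference can be sketched compactly in Python. For brevity the sketch below maximizes the likelihood under flat priors on the two log-normal parameters rather than sampling the full posterior; the input matrix of prior-divided, per-galaxy EW probabilities is assumed to be precomputed from the SED fits.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# log10(EW / A) bin edges: the lowest and highest bins absorb the tails
EDGES = np.concatenate(([-np.inf], np.arange(2.50, 3.7001, 0.05), [np.inf]))

def bin_probs(mu, sigma):
    """P_j(EW | theta) for a log-normal EW distribution with median 10**mu A and
    standard deviation sigma dex, integrated over each of the 26 EW bins."""
    return np.diff(norm.cdf(EDGES, loc=mu, scale=sigma))

def neg_log_like(theta, pdf_matrix):
    """pdf_matrix[i, j] is the prior-divided posterior probability that galaxy i
    lies in EW bin j (each row normalised to sum to one)."""
    mu, sigma = theta
    if not (2.0 < mu < 4.0 and 0.02 < sigma < 1.5):
        return np.inf
    like = pdf_matrix @ bin_probs(mu, sigma)   # Riemann-sum likelihood from above
    return -np.sum(np.log(np.clip(like, 1e-300, None)))

def infer_ew_distribution(pdf_matrix):
    """Best-fitting median EW [A] and sigma [dex] for a galaxy sub-population."""
    res = minimize(neg_log_like, x0=[2.8, 0.3], args=(pdf_matrix,),
                   method='Nelder-Mead')
    mu, sigma = res.x
    return 10.0 ** mu, sigma
```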
§.§ Results on [OIII]+Hβ EW Distributions
We begin by quantifying the inferred EW distribution among the brightest galaxies in our sample, comparing to existing results from the literature.
We then proceed to describe our findings for the very faint population that is now possible with the depth and area of JADES/NIRCam imaging.
From our fiducial TcSFH beagle SED fits, we infer a median [OIII]+Hβ EW of 740^+50_-50 Å among the bright (⟨M_UV⟩ = -20.1) subset at z∼7-9.
This is consistent with the median EWs reported in the literature for similarly UV-luminous, NIRCam-selected galaxy samples at z∼7-8 <cit.>, as well as for samples of very UV-bright (-22.5 ≲ M_UV ≲ -21.5) z∼7 galaxies selected from ground-based imaging <cit.>.
The inferred standard deviation of the [OIII]+Hβ EW distribution among our bright z∼7-9 JADES/NIRCam subset (0.31^+0.03_-0.02 dex) is also consistent with values reported in the literature at similar luminosities <cit.>.
Due to the strong degeneracy between Balmer breaks and nebular line emission in IRAC colors at z∼6, the [OIII]+Hβ EW distribution has never previously been quantified at this epoch.
With the rich (five-band) 3–5μm SED sampling now afforded by JADES/NIRCam, we here infer that the median [OIII]+Hβ EW among bright (⟨M_UV⟩ = -20.0) z∼6 galaxies is 890^+40_-60 Å, ≈0.08 dex (≈1.2×) higher than that inferred from our bright z∼7-9 sub-sample.
The standard deviation of EWs among the bright z∼6 subset is inferred to be very similar to that of the bright z∼7-9 subset (see Table <ref>).
Now equipped with a sample of over 600 Lyman-break z∼6-9 galaxies with ≈2–30× fainter UV luminosities (-19.5 ≤ M_UV ≲ -16.5), we investigate whether the [OIII]+Hβ EW distribution correlates significantly with UV luminosity in the reionization era.
Using the fiducial TcSFH beagle SED fits, we infer median [OIII]+Hβ EWs of 590^+30_-30 Å and 390^+50_-40 Å among the faint (⟨M_UV⟩ = -18.7) and very faint (⟨M_UV⟩ = -17.5) z∼6 subsets, respectively.
This indicates a smooth and strong (factor of ≈2.3) decline in the typical [OIII]+Hβ EW of z∼6 galaxies between the bright and very faint subsets, which are separated by an order of magnitude in average UV luminosity (see Fig. <ref>).
Similarly, at z∼7-9 we infer median [OIII]+Hβ EWs of 520^+10_-20 Å and 300^+30_-20 Å among the faint (⟨M_UV⟩ = -18.6) and very faint (⟨M_UV⟩ = -17.6) subsets, respectively, indicating a ≈2.6× decline in typical EW over ≈1 dex in L_UV (Fig. <ref>).
Therefore, in both the z∼6 and z∼7-9 samples, we infer a strong UV luminosity dependence of the median [OIII]+Hβ EW at z∼6-9 with high confidence (≳5σ).
To understand why the NIRCam SEDs are driving the models to yield a strong decline in [OIII]+Hβ EW with decreasing UV luminosity, we analyze the long-wavelength NIRCam colors sensitive to these lines.
We begin by considering the F775W dropout (z∼6) sample where the large majority of galaxies lie at 5.5<z<6.5 (see Fig. <ref>).
For this sample, the F410M band provides a relatively clean probe of the rest-optical continuum given that it is free of contamination from the strongest rest-optical lines ([OIII], Hβ, and Hα) at z≈5.6-6.7.
Meanwhile, the F356W and F335M bands are contaminated by [OIII]+Hβ at redshifts associated with the majority of the z∼6 subset (z≈5.4-6.9 and z≈5.5-6.1, respectively).
Therefore, we investigate whether the measured photometry of the z∼6 sample supports significant flattening in the F410M-F335M and F410M-F356W colors as we move from the bright to very faint subsets.
In the bright z∼6 subset, the median F410M-F335M and F410M-F356W colors are 0.98 and 0.63 mag, respectively, consistent with typical contamination from strong [OIII]+Hβ emission.
These colors are typically flatter among the very faint sub-population (0.67 and 0.54 mag, respectively), consistent with generally weaker [OIII]+Hβ EWs at lower UV luminosities.
We conduct a similar color analysis with the F090W dropout sample.
Across nearly the entire redshift range of our F090W dropout selection, the rest-optical continuum is probed free of strong lines by either F335M (z≈6.3-8.2) or F356W (z≈7.2-8.8), while the [OIII]+Hβ emission is captured by either F410M (z≈7.0-7.6) or F444W (z≈7.0-9.0).
We therefore investigate whether the measured photometry supports significant flattening in the F335M-F410M, F335M-F444W, F356W-F410M, and F356W-F444W colors as we move from the bright to very faint subsets of our z∼7-9 sample.
In the bright subset, the median values for all four of the above colors are noticeably red (0.49, 0.33, 0.46, and 0.27 mag, respectively), consistent with typical contamination from strong [OIII]+Hβ emission.
On the other hand, in the very faint subset we measure much flatter median colors (0.21, 0.00, 0.22, and -0.01 mag, respectively), suggestive of typically much weaker [OIII]+Hβ EWs.
Because the sample size of each bin is fairly large (N_gal = 55–200; see Table <ref>), we do not expect these median colors to be significantly impacted by noise.
Nonetheless, we employ a Monte Carlo simulation to quantify the likelihood of measuring the median colors of the very faint subsets (accounting for noise) under the assumption that they indeed have true colors represented by their respective bright subset of galaxies.
For this test, we take the measured photometry of all galaxies in the bright subset of a given dropout sample (randomly sampled with replacement), uniformly re-normalize the photometry of each bright galaxy such that it has an M_UV value equivalent to that of a galaxy in the very faint subset of the same dropout sample (again randomly sampled with replacement), and add Gaussian noise to the photometry in each band using the uncertainties of the associated very faint galaxy.
This Monte Carlo simulation is performed 5000 times and, in each iteration, we compute the median colors of interest.
The distributions of median colors from the Monte Carlo simulations are shown in Fig. <ref>, along with the actual measured median colors of the very faint subsets plotted as vertical dashed lines in each panel.
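A minimal sketch of this null test is given below; the data structures, the size of each bootstrap draw, and the function name are our illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def mc_null_median_color(bright_phot, bright_muv, faint_muv, faint_errs,
                         band_a, band_b, n_iter=5000, seed=None):
    """Distribution of median (band_a - band_b) colors expected if the very
    faint subset had the same intrinsic SEDs as the bright subset.

    bright_phot : dict band -> array of fluxes (nJy) for the bright subset
    bright_muv  : array of M_UV values for the bright subset
    faint_muv   : array of M_UV values for the very faint subset
    faint_errs  : dict band -> array of 1-sigma flux errors for the faint subset
    """
    rng = np.random.default_rng(seed)
    n_faint = len(faint_muv)
    medians = np.empty(n_iter)
    for k in range(n_iter):
        ib = rng.integers(len(bright_muv), size=n_faint)   # bright template draw
        jf = rng.integers(n_faint, size=n_faint)           # faint counterpart draw
        scale = 10.0 ** (-0.4 * (faint_muv[jf] - bright_muv[ib]))  # dim to faint M_UV
        fa = bright_phot[band_a][ib] * scale + rng.normal(0.0, faint_errs[band_a][jf])
        fb = bright_phot[band_b][ib] * scale + rng.normal(0.0, faint_errs[band_b][jf])
        good = (fa > 0) & (fb > 0)
        medians[k] = np.median(-2.5 * np.log10(fa[good] / fb[good]))
    return medians
```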
For nearly every color probing [OIII]+Hβ emission, we find that very few (≤0.1%) of the 5,000 Monte Carlo iterations yield median recovered colors as flat as (or flatter than) that measured among the very faint subset.
While the median F410M-F356W Monte Carlo colors in the F775W dropout subset are consistent (at the ≈1σ level) with that measured among the very faint subset, F356W is less sensitive to strong [OIII]+Hβ emission than F335M given its broader bandwidth, and thus the significance of the color difference is expected to be considerably weaker.
Having now validated that the inferred trend in median [OIII]+Hβ EW with UV luminosity is consistent with expectations given the measured long-wavelength colors, we consider whether the data imply any significant changes in the width of the EW distribution towards fainter UV luminosities.
Even though the median [OIII]+Hβ EW of the very faint population declines substantially relative to the brightest bin, we continue to identify a number of very faint galaxies with prominent long-wavelength colors implying extremely high EWs of >1500 Å (see, e.g., IDs JADES-GS+53.17835-27.77879 and JADES-GS+53.19590-27.79240 in Fig. <ref>).
Accordingly, our assumed log-normal [OIII]+Hβ EW distributions are inferred to broaden considerably towards fainter UV luminosities at both z∼6 and z∼7-9, with the standard deviation increasing from ≈0.30±0.03 dex in the bright subsets to ≈0.48±0.04 dex in the very faint subsets (see Table <ref>).
From these distributions, we infer that the fraction of galaxies with extremely high [OIII]+Hβ EWs (>1500 Å) declines significantly towards fainter UV luminosities, going from 23^+3_-3% (17^+3_-3%) in the bright z∼6 (z∼7-9) subset to 12^+2_-2% (7^+2_-1%) in the respective very faint subset.
Conversely, the fraction of galaxies with relatively low [OIII]+Hβ EWs (<300 Å) rises dramatically towards the faint end of the luminosity function, increasing from 6^+2_-1% (10^+3_-2%) in the bright z∼6 (z∼7-9) subset to 40^+5_-4% (49^+3_-3%) in the respective very faint subset.
§.§ Results on Hα EW Distributions
Due to the poor rest-optical SED sampling of IRAC photometry, previous efforts to quantify Hα EWs at high redshifts have primarily been limited to the redshift range z≈4.0-5.0, where several works have inferred typical Hα EWs of ∼400 Å <cit.>.
Most recently, <cit.> stacked the IRAC 5.8μm data of ≈100 z∼8 galaxies and recovered a considerably larger typical Hα EW (≳1000 Å), perhaps implying a significant redshift evolution in the median Hα EW at high redshifts.
With the JADES NIRCam data, we are now able to infer the Hα EW distribution at z∼6 and moreover investigate whether it correlates strongly with UV luminosity, as may be expected given our results on the [OIII]+Hβ EW distribution above.
From the beagle TcSFH SED fits, we infer very similar median Hα EWs among all three UV luminosity bins at z∼6 (≈600–700 Å in each; see Table <ref>).
Combined with the literature results discussed above, this implies that the typical Hα EWs increase by ≈0.2 dex (≈1.6×) between z∼4.5 and z∼6.
But perhaps the most striking result here is that we find no evidence for a strong change in the typical z∼6 Hα EW across 1 dex in UV luminosity, in marked contrast with our findings on the [OIII]+Hβ EW distribution in <ref>.
We again verify that the measured long-wavelength NIRCam colors are consistent with little-to-no change in the median Hα EW between the bright and very faint z∼6 subsets following the same approach as described in <ref>.
At z∼6, the F410M band provides a relatively clean probe of the rest-optical continuum (see <ref>) while the F444W band contains Hα.
The median F410M-F444W color of the bright z∼6 subset (0.33 mag) is nearly identical to that of the very faint subset (0.35 mag), consistent with a weak trend in typical Hα EW with UV luminosity.
The Monte Carlo simulation test described in <ref> also reveals that even accounting for signal-to-noise differences in the bright versus very faint z∼6 subsets often yields nearly identical median F410M-F444W colors between the two populations (see Fig. <ref>).
We thus conclude that the photometry is consistent with little change in typical z∼6 Hα EWs over the UV luminosity range spanned by our sample (-22 ≤ M_UV ≤ -16.4).
While there is moderate evidence (≈3σ) for a slight (≈0.05 dex) increase in the width of the z∼6 Hα EW distribution towards very faint UV luminosities (see Table <ref>), overall the inferred Hα EW distributions for the three UV luminosity bins are nearly indistinguishable (see Fig. <ref>).
If we assume that the Hα EW distribution can be described by a single log-normal form across the entire UV luminosity range of our z∼6 sample (-22 ≤ M_UV ≤ -16.4), we obtain a median Hα EW of 630^+10_-40 Å and a standard deviation of 0.26±0.01 dex.
Given the strong UV luminosity dependence of the [OIII]+Hβ EW inferred in <ref>, it is no surprise that we then infer a strong UV luminosity dependence of the [OIII]λ5007/Hα ratio (hereafter [OIII]/Hα) as well.
In the bright z∼6 subset, we infer a median [OIII]/Hα ≈ 1.6 while in the faint and very faint subsets we infer median [OIII]/Hα ratios of ≈1.3 and ≈0.9, respectively.
§.§ Potential Evidence for Bursty Star Formation
In the previous two sub-sections, we demonstrated that the JADES/NIRCam data indicate a strong decline in [OIII]+Hβ EW with UV luminosity at z∼6-9, yet an approximately constant Hα EW distribution across an order of magnitude in L_UV at z∼6 (Fig. <ref>).
Here, we begin by discussing what may physically be driving the different L_UV trends of the [OIII]+Hβ and Hα EWs, and then explore the results implied by the SED fits, with the ultimate goal of considering potential implications for the nature of very UV-faint reionization-era galaxies.
One factor likely influencing the UV luminosity trends in nebular EWs is metallicity.
Due to the deficit of oxygen atoms at very low metallicity, the strength of [OIII] nebular emission declines substantially at ≲0.2 Z_⊙ (see Fig. <ref>a).
Because fainter, lower-mass galaxies are generally expected to be less chemically enriched, this likely contributes to the much weaker [OIII]+Hβ EWs in our faintest subset.
We consider how much lower the typical metallicity of very faint (⟨M_UV⟩ ≈ -17.5) z∼6-9 galaxies would have to be relative to the bright subset (⟨M_UV⟩ ≈ -20) to completely explain the ≈2.4× decline in median [OIII]+Hβ EW inferred in <ref>.
For this test, we utilize the <cit.> models employed in beagle which we note assume equal stellar and gas-phase metallicities (i.e. no α-enhancement).
From the <cit.> models, we find that the metallicity would need to drop by ≈1 dex to solely explain this substantial decline in [OIII]+Hβ EW (see Fig. <ref>a).
Such a dramatic (≈1 dex) decline in metallicity would have a significant impact on the Hα EWs.
Due to changes in stellar opacity, models with lower metallicity lead to more efficient ionizing photon production and hence larger Hα EWs.
From the <cit.> star-forming models employed in our beagle fits, we expect that lowering the metallicity by 1 dex from 0.2 Z_⊙ to 0.02 Z_⊙ would result in a ≈50% increase in Hα EWs.
Therefore, while lower metallicities at fainter UV luminosities may partially explain the decreasing EWs, something else must also be changing between the bright and very faint populations to explain why the Hα EW distribution remains nearly constant at z∼6.
One potential way to offset the expected increase in Hα EW at fainter L_UV due to lower metallicity is by invoking differences in the recent star formation histories of bright versus very faint reionization-era galaxies (we discuss an alternative solution with differing ionizing photon escape fractions below[Other mechanisms may also be responsible for decreasing the [OIII] and Balmer line EWs at fainter UV luminosities, including changes in IMF, contributions from low-luminosity AGN, or α-enhanced metallicities.]).
Because the Balmer and [OIII] nebular lines are primarily powered by hot, massive O stars that have formed in the most recent ≈3 Myr, a strong drop in the SFR over this timescale results in weaker emission lines, decreasing the EWs of Hα and [OIII]+Hβ (see Fig. <ref>b).
If UV-faint galaxies are more often caught in a recent downturn of star formation (relative to the UV-bright population), this may explain why their Hα EWs remain comparable to that of the more UV luminous population.
Such a change in recent SFHs would also contribute to the reduced EWs in the very UV-faint population (Fig. <ref>), working in tandem with the shift toward lower metallicities.
To better assess whether the photometric data support systematic changes in metallicity and recent star formation histories across the wide UV luminosity range spanned by our sample (-22 ≲≲ -16.5), we utilize the results of our two-component star formation history (TcSFH) beagle fits.
For each galaxy, we compute the inferred ratio of the star formation rate averaged over the past 3 Myr (SFR_3 Myr) to that over the past 50 Myr (SFR_50 Myr) as a proxy for whether their SFH has recently declined or risen on timescales most relevant for nebular line emission.
The 50 Myr timescale adopted in the denominator of this ratio was chosen to reflect the typical light-weighted age of the sample, though we note that our findings below are qualitatively unchanged if we instead adopt a 30 Myr or 100 Myr timescale.
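For concreteness, the short sketch below shows one way such a ratio could be computed from a tabulated star formation history; the SFH values here are hypothetical stand-ins rather than an actual beagle posterior draw.

```python
import numpy as np

def sfr_ratio(ages_myr, sfr, t1=3.0, t2=50.0):
    """Ratio of SFR averaged over the past t1 Myr to that over the past t2 Myr.

    ages_myr : lookback times of the tabulated SFH (Myr), ascending
    sfr      : star formation rate at each lookback time (Msun/yr)
    """
    grid = np.linspace(0.0, t2, 5001)          # fine lookback-time grid
    sfr_grid = np.interp(grid, ages_myr, sfr)  # linear interpolation of the SFH
    recent = np.trapz(sfr_grid[grid <= t1], grid[grid <= t1]) / t1
    longer = np.trapz(sfr_grid, grid) / t2
    return recent / longer

# Example: a burst in the last 3 Myr on top of a low-level older component.
ages = np.array([0.0, 3.0, 3.001, 50.0])
sfr  = np.array([5.0, 5.0, 0.5,   0.5])
print(f"SFR_3Myr / SFR_50Myr = {sfr_ratio(ages, sfr):.1f}")
```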
We quantify the distributions of SFR_3 Myr / SFR_50 Myr and metallicity in the same three UV luminosity bins as considered for the EW distribution analysis (see <ref>), again adopting a log-normal distribution for each bin and applying Eq. <ref>.
Here, we focus our analysis solely on the z∼6 sample given the lack of Hα information for the large majority of the z∼7-9 sample.
Because the SFR_3 Myr / SFR_50 Myr ratio is by definition restricted to values ≤50/3 (this maximum value occurs when no stars are assumed to form between 3–50 Myr ago), we truncate the log-normal distributions at this upper limit (i.e. P(SFR_3 Myr / SFR_50 Myr > 50/3 | θ) = 0).
Similarly, the log-normal distributions for metallicity are truncated outside the fitted parameter space in our SED fits (-2.2 ≤ log(Z/Z_⊙) ≤ -0.3).
When quoting median inferred values for metallicity and SFR_3 Myr / SFR_50 Myr below we account for this truncation, though the standard deviations are quoted directly from the assumed log-normal functional form.
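To illustrate how the truncation enters the quoted medians, the following sketch (with placeholder distribution parameters, not fitted values) computes the median of a log-normal SFR_3 Myr / SFR_50 Myr distribution truncated at the maximum allowed ratio of 50/3.

```python
import numpy as np
from scipy import stats

# Log-normal distribution in x = SFR_3Myr / SFR_50Myr, truncated at x <= 50/3
# (the maximum ratio, reached when no stars form between 3 and 50 Myr ago).
mu, sigma = 0.0, 0.6          # placeholder log10-median and width [dex]
x_max = 50.0 / 3.0

# Work in log10(x); the truncation corresponds to log10(x) <= log10(x_max).
dist = stats.norm(loc=mu, scale=sigma)
cdf_max = dist.cdf(np.log10(x_max))

# Median of the truncated distribution: the point where the renormalised CDF = 0.5.
log_median = dist.ppf(0.5 * cdf_max)
print(f"truncated median SFR_3/SFR_50 = {10**log_median:.2f}")
```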
The TcSFH beagle outputs suggest a significant anti-correlation between UV luminosity and metallicity at z∼6 (Fig. <ref>a), with a median inferred metallicity of 0.20^+0.02_-0.02 Z_⊙, 0.11^+0.01_-0.01 Z_⊙, and 0.06^+0.01_-0.00 Z_⊙ in the bright (⟨M_UV⟩ = -20.0), faint (⟨M_UV⟩ = -18.7), and very faint (⟨M_UV⟩ = -17.5) subsets, respectively.
Therefore, in context of these models, changes in metallicity contribute significantly to the different UV luminosity trends with and Hα EWs.
We check whether these inferred metallicities are consistent with predictions from simulations under very simple assumptions on the connection between and stellar mass and assuming that our inferred metallicities largely reflect the gas-phase metallicity.
In our z∼6 sample, the typical stellar mass at M_UV ≈ -20 is ∼4×10^8 M_⊙ while it is ∼2×10^7 M_⊙ at M_UV ≈ -17.5 (see Fig. <ref>).
The inferred metallicities above from the TcSFH models are reasonably consistent (factor of ≲2 difference) with the predicted z∼6 gas-phase metallicities at the respective stellar masses from the illustris tng <cit.>, FirstLight <cit.>, and astreaus <cit.> simulations, though systematically ≈5× higher than that predicted from the fire simulations <cit.>.
We now investigate whether the TcSFH beagle fit outputs also imply differences in the recent SFHs of bright vs. very faint z∼6 galaxies.
Using Eq. <ref>, we infer a median SFR_3 Myr / SFR_50 Myr ratio of 1.4^+0.2_-0.1, 0.8^+0.1_-0.1, and 0.4^+0.1_-0.1 in the bright, faint, and very faint subsets, respectively.
Therefore, under context of the adopted SED models, we do find evidence that brighter z∼6 galaxies tend to have more very-recently rising SFHs than the faintest sources (Fig. <ref>b).
Notably, not all bright (very faint) galaxies are inferred to have very-recently rising (declining) SFHs as we recover substantial scatter in the SFR_3 Myr / SFR_50 Myr ratio at fixed UV luminosity.
The standard deviation of the assumed log-normal distribution is inferred to be 0.50^+0.05_-0.05 dex, 0.61^+0.05_-0.04 dex, 0.98^+0.23_-0.14 dex in the bright, faint, and very faint subsets, respectively, implying generally more variation in recent SFHs at fainter luminosities (and hence lower stellar masses).
There are two particular subsets of our sample that provide evidence in favor of bursty SFHs among reionization-era galaxies.
The first subset consists of galaxies showing extremely strong nebular line emission.
The very high EWs (∼2000-5000 Å) confidently inferred among ≈50 galaxies in our sample (see <ref>) imply that a considerable fraction of Lyman-break z∼6-9 galaxies have recently experienced a rapid strong upturn in SFR, yielding SEDs that are completely dominated by hot, massive O stars.
We plot the inferred star formation histories of two of the confident z∼6 extreme emission line galaxies in the top panels of Fig. <ref>, where both are inferred to have substantially (≳5×) higher SFRs over the most recent 3 Myr relative to that over the past 10–50 Myr (see also e.g. <cit.>).
Such a dramatic recent increase in SFR is consistent with a scenario in which these extreme emission line galaxies are undergoing a burst of star formation.
We note that a variety of recent papers are building empirical evidence for bursty SFHs among z≳6 galaxies using different methods <cit.>.
Bursty star formation in the early Universe is also predicted in several simulations and models of galactic evolution <cit.>.
Another subset of galaxies in our JADES sample may be caught during a relatively inactive phase of star formation.
We identify z∼6 galaxies exhibiting NIRCam SEDs consistent with relatively weak Balmer breaks (thus implying young light-weighted ages) yet also weak [OIII]+Hβ and Hα emission.
The SEDs of two of these galaxies are shown in Fig. <ref> for illustrative purposes (we discuss the abundance of similar objects within our full z∼6 JADES sample below).
In the left panels of Fig. <ref>, we show the fitted SEDs from the beagle CSFH models which clearly struggle to reproduce the observed photometry.
For both galaxies, the CSFH models are forced to extremely low-metallicity solutions (≈0.01 Z_⊙) to simultaneously explain the weak Balmer breaks[While the CSFH model fits to these two galaxies yield fairly high light-weighted ages (∼100 Myr), the resulting Balmer breaks in the models remain weak due to strong nebular continuum emission. As has been discussed elsewhere (e.g. ), nebular continuum emission is stronger at lower metallicities and the fractional contribution of nebular emission to the total (stellar+nebular) continuum is larger immediately blueward of the Balmer break than redward of this break.] (≈ 1 as implied by the fairly flat F200W-F277W colors) as well as the low EWs (≈300–400 Å as implied by the weak photometric excesses in F335M and F356W).
The extremely low-metallicity solutions in turn cause the models to drastically over-predict the Hα EWs relative to that implied by the observed F410M-F444W colors, suggesting that the SEDs of these galaxies cannot be well explained by our fiducial CSFH models.
When allowing for more flexible star formation histories, we obtain SED models that yield much better fits to the photometry of these two illustrative z∼6 galaxies.
As shown in the middle panels of Fig. <ref>, the beagle TcSFH models can reproduce the NIRCam photometry in all nine bands with moderately low-metallicity solutions (≈0.05–0.1 Z_⊙) for these two z∼6 galaxies.
In these TcSFH solutions, the two galaxies are currently experiencing a strong decline in SFR that has followed a burst of star formation which occurred ∼5–50 Myr ago (see bottom panels of Fig. <ref>).
Such star formation histories yield relatively few O stars that are still alive to power the [OIII] and Balmer lines, while the B and A stars that formed ≳10 Myr ago continue to dominate the rest UV+optical SEDs.
It is thus possible that these galaxies were in an extreme emission line phase just ∼10–30 Myr earlier, and are now caught in a relatively passive phase of star formation yielding weak nebular lines yet also young SEDs from the continua.
We note that a significant population of reionization-era galaxies with young light-weighted ages and weak [OIII]+Hβ emission was first tentatively identified in <cit.> from early data, though the lack of Hα constraints for the z∼6.5-8 sample considered therein left much uncertainty on the physical origin of the weak line emission (see also <cit.>).
With the improved imaging sensitivity and SED sampling of JADES and GOODS, we have now extended this investigation to a z∼6 sample and find that our fiducial CSFH models cannot adequately explain the photometry of at least some of these very early galaxies.
In addition to the two illustrative objects shown in Fig. <ref>, we identify a few other z∼6 galaxies that confidently show SEDs consistent with young light-weighted ages and weak line emission, perhaps indicating that they have recently experienced a strong downturn in SFR.
To formally quantify the number of such candidate objects in our JADES sample, we begin by selecting z∼6 galaxies with >84% probability of having both [OIII]+Hβ and Hα EWs of <800 Å from the TcSFH posterior outputs.
We then apply the color-color cut F200W-F277W < F150W-F200W + 0.3 to ensure that the selected galaxies do not show a strong Balmer break.
While we could apply cuts on the inferred Balmer break strength from the TcSFH fits directly, we find that such an approach is not ideal given that the Balmer break strength can increase dramatically on short (∼10 Myr) timescales when allowing for strong drops in SFR.
Finally, because our goal is to identify galaxies that are relatively poorly fit with the CSFH models yet considerably better fit by the TcSFH models, we only include those with a best-fitting χ^2_CSFH > 14 (we are fitting 14 photometric data points) and χ^2_CSFH - χ^2_TcSFH > 10.
This rather strict selection results in a total of seven z∼6 galaxies, including the two illustrative examples shown in Fig. <ref>.
The majority of these seven galaxies have inferred SFR_3 Myr / SFR_50 Myr ratios of <0.2 with >84% confidence from the TcSFH posterior outputs.
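To make this selection concrete, a minimal sketch of how such cuts could be applied to a catalogue is given below; the array names and values are hypothetical placeholders, not the actual JADES catalogue columns.

```python
import numpy as np

# Hypothetical catalogue arrays (placeholders); one entry per z~6 candidate.
p_weak_lines = np.array([0.90, 0.60])   # P(O3Hb EW < 800 A and Ha EW < 800 A) from the TcSFH posterior
F150W  = np.array([28.1, 27.5])
F200W  = np.array([28.0, 27.4])
F277W  = np.array([28.0, 27.6])
chi2_csfh  = np.array([25.0, 12.0])
chi2_tcsfh = np.array([10.0, 11.0])

weak_lines      = p_weak_lines > 0.84
no_strong_break = (F200W - F277W) < (F150W - F200W) + 0.3
poor_csfh_fit   = (chi2_csfh > 14) & (chi2_csfh - chi2_tcsfh > 10)

candidates = np.where(weak_lines & no_strong_break & poor_csfh_fit)[0]
print(f"{candidates.size} candidate(s) with young SEDs yet weak nebular lines")
```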
We emphasize that the seven objects described above are not an exhaustive census of z∼6 galaxies in our sample showing potential evidence of a recent strong downturn in SFR.
They are simply the most confident cases and hence are limited to galaxies with the highest signal-to-noise photometry (S/N(F277W)≳30 before adding the 5% systematic error).
The inferred SFR_3 Myr / SFR_50 Myr ratio distributions shown in Fig. <ref> demonstrate the collective evidence in our sample for a significant population of z∼6 galaxies (particularly at the UV-faint end) with recently-declining SFHs.
This statistical, photometric evidence for strongly declining SFHs among some reionization-era galaxies is further supported by the spectroscopic confirmation of a z=7.3 `(mini-)quenched' galaxy from recent observations (<cit.>; for similar examples at z∼5 see also <cit.>).
The TcSFH models potentially provide new insight into the nature of reionization-era galaxies along the UV luminosity function.
In context of these models, galaxies at the bright end are weighted more towards systems that have very recently experienced an increase in star formation rate (SFR_3 Myr / SFR_50 Myr > 1).
Objects with such star formation histories will have a greater fraction of emergent light arising from hot and exceptionally luminous O stars, resulting in a temporary boost to their light-to-mass ratio.
Therefore, UV-luminous reionization-era galaxies may more often consist of relatively low-mass objects that have been up-scattered in luminosity due to their SFH, particularly when considering the exponential decline of the mass function at high masses.
However, given the large systematic uncertainties in stellar mass estimates among z∼6-9 galaxies (see <ref>), drawing such a conclusion directly from the data is quite challenging.
If early galaxies do indeed have very bursty SFHs, this would help explain the apparent abundance of bright z>10 galaxies recently identified <cit.>, as one would then expect a higher fraction of low-mass galaxies to be up-scattered above the detection threshold <cit.>.
In contrast, galaxies much further down the UV luminosity function would be less weighted towards systems with recent upturns in their SFR in context of the TcSFH models.
Nonetheless, the substantial scatter in the SFR_3 Myr / SFR_50 Myr ratio inferred among our very UV-faint z∼6 subset (≈1 dex) does imply considerable diversity in the recent SFHs of this population.
This is directly demonstrated by the presence of galaxies with extremely strong [OIII]+Hβ EWs (≳1500 Å) within the very UV-faint bin (e.g. JADES-GS+53.17835-27.77879 in Fig. <ref> and the M_UV = -16.7 galaxy at z = 7.28 described in <cit.>), indicating a strong rise in their SFHs on ∼3 Myr timescales.
It may be the case that galaxies considerably fainter than the exponential cut-off of the UV luminosity (and mass) function are roughly as likely to be a relatively low-mass object experiencing a recent strong rise in SFR (resulting in a high L_UV/M_⋆ ratio) as to be a relatively high-mass object experiencing a recent strong decline in SFR (yielding a relatively low L_UV/M_⋆ ratio), thereby yielding the broader SFR_3 Myr / SFR_50 Myr distribution at fainter M_UV.
§.§ An Alternative Explanation: Efficient LyC Escape
Above, we have demonstrated how bursty SFHs (when coupled with lower metallicities at fainter M_UV) can explain the z∼6 [OIII]+Hβ and Hα EW trends with UV luminosity found in <ref>–<ref>.
In this sub-section, we discuss how these EW trends can alternatively be explained by allowing for a large fraction of ionizing photons to escape the reionization-era galaxies in our sample.
Because the [OIII] and Balmer nebular lines are powered by ionizing photons interacting with dense gas in galaxies, removing a substantial fraction of these ionizing photons will result in fewer recombinations and hence weaker lines.
We estimate the Lyman-continuum (LyC; λ_rest < 912 Å) photon escape fractions required to explain the photometric data (and resulting [OIII]+Hβ and Hα EW trends) under the assumption of a constant SFH by using the `picket-fence' model within beagle.
In these picket fence models <cit.>, a given fraction (hereafter f_esc) of LyC photons are assumed to escape the HII regions without ionizing the gas while still assuming ionization-bounded conditions.
Therefore, all nebular line and continuum emission (independent of wavelength) is reduced by a factor of 1-f_esc relative to the base CSFH beagle models (see Fig. <ref>c).
In the picket-fence beagle SED fits, we adopt a log-uniform prior on f_esc between 0.001 and 0.8 with all other priors equivalent to the nominal CSFH fits.
We then infer log-normal distributions for both metallicity and f_esc using Eq. <ref> in the three different UV luminosity bins for our z∼6 sample.
While density-bounded photoionization models may be more appropriate at the highest f_esc values (≳0.5), our goal here is simply to gain an initial sense of the escape fractions required to explain the photometric data (and the implied EW distributions) under the assumption of constant star formation histories.
Adopting density-bounded models <cit.> would generally result in a slower decline of the [OIII] EWs with increasing f_esc relative to the Balmer line EWs, given that the high-ionization zones are the last to be significantly impacted by ionizing photon escape in this context.
Nonetheless, the difference in EWs between the ionization-bounded and density-bounded beagle models is ≲10% at f_esc < 0.8.
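As a trivial numerical illustration of the picket-fence scaling described above (with a placeholder line luminosity, not a measured value):

```python
# All nebular line (and nebular continuum) luminosities scale as (1 - fesc)
# relative to the fesc = 0 model in the picket-fence prescription.
L_line_fesc0 = 1.0e42  # placeholder [OIII]5007 line luminosity at fesc = 0 [erg/s]
for fesc in (0.0, 0.2, 0.5, 0.8):
    print(f"fesc = {fesc:.1f}: L_line = {L_line_fesc0 * (1 - fesc):.1e} erg/s")
```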
The picket-fence CSFH beagle fits also yield an anti-correlation between UV luminosity and metallicity at z∼6 (see Fig. <ref>d) as physically expected, with median values of 0.26^+0.02_-0.02 Z_⊙, 0.15^+0.01_-0.01 Z_⊙, and 0.05^+0.01_-0.01 Z_⊙ among the bright (⟨M_UV⟩ = -20.0), faint (⟨M_UV⟩ = -18.7), and very faint (⟨M_UV⟩ = -17.5) z∼6 subsets, respectively.
However, the inferred distribution of metallicities in a given UV luminosity bin is considerably different in shape from that inferred from the TcSFH beagle fits adopting f_esc = 0 (c.f. panel (a) in Fig. <ref>).
With the picket-fence models, both the bright and faint subsets have inferred metallicity distributions that peak at fairly high values (≳0.3 Z_⊙) while the metallicity distribution for the very faint bin is exceptionally broad with an inferred standard deviation of >0.8 dex resulting in a fairly uniform distribution across the -2.2 ≤ log(Z/Z_⊙) ≤ -0.3 parameter space allowed in our SED fits.
It is unclear whether such a distribution in z∼6 galaxy metallicity is physical at such faint UV luminosities and low stellar masses (maximum M_⋆ = 10^9 M_⊙; see Fig. <ref>).
Nonetheless, we proceed with investigating what correlation between UV luminosity and f_esc is required in context of these models to explain the photometry of the z∼6 sample.
Under the assumption of constant star formation histories, the trends in [OIII]+Hβ and Hα EW found in <ref>–<ref> imply a strong increase in f_esc towards very faint UV luminosities at z∼6 (Fig. <ref>d).
With the picket-fence beagle SED models, we infer median f_esc values of 0.08^+0.03_-0.03, 0.22^+0.02_-0.02, and 0.53^+0.04_-0.04 in the bright, faint, and very faint subsets, respectively, when adopting log-normal distributions for f_esc in each luminosity bin.
Moreover, the fraction of galaxies with substantial ionizing photon leakage (f_esc > 20%) is inferred to be 30^+5_-5%, 53^+4_-4%, and 78^+6_-6% in each of the respective bins.
As can be seen in Fig. <ref>, the inferred distribution of f_esc simultaneously narrows and peaks closer to the allowed upper bound of f_esc = 0.8 towards fainter UV luminosity.
The generally bluer UV slopes, smaller rest-UV sizes, and lower inferred dust attenuation among fainter z≳6 galaxies <cit.> do imply that these galaxies would typically be more efficient leakers of LyC photons based on local constraints <cit.>.
However, it is unclear whether a dramatic (factor of ∼5) change in median f_esc between M_UV ≈ -20 and M_UV ≈ -17.5 at z∼6 would be expected.
No evidence yet exists among local samples that the LyC escape fraction strongly correlates with far-UV luminosity nor stellar mass <cit.>.
We consider whether the picket-fence CSFH models can explain the photometry of the seven z∼6 galaxies confidently showing SEDs consistent with young light-weighted ages as well as relatively weak nebular lines discussed in <ref>.
Five of these seven galaxies are considerably better fit with the picket-fence models relative to standard CSFH models (Δχ^2 > 10) and, in each of these five cases, the inferred LyC escape fraction is >50% with >84% probability from the output posterior.
We show the picket-fence CSFH model SEDs for the two illustrative example galaxies in the rightmost panels of Fig. <ref>.
For one of these objects, the best-fitting picket-fence CSFH model has χ^2 = 19.5 across the 14 fitted data points, implying a somewhat poorer fit relative to the best-fitting TcSFH model (χ^2 = 9.8).
However, in four of the seven galaxies under consideration here, there is little-to-no preference in the data for the TcSFH models over the picket-fence CSFH models (Δχ^2 < 5).
As discussed in the following sub-section, these two different scenarios (bursty SFH or substantial LyC escape) have very different implications for reionization, and deep spectroscopic observations will ultimately be required to better determine the physical origin of very early galaxies showing young light-weighted ages and relatively weak line emission.
§.§ Potential Implications for Reionization
Above, we have demonstrated that the z∼6 [OIII]+Hβ and Hα EW trends found in <ref>–<ref> can be explained by a combination of decreasing metallicity with UV luminosity, in addition to either 1) systematically more very-recently rising SFHs among brighter systems with small LyC leakage at all luminosities, or 2) systematically much greater LyC leakage among fainter systems with constant SFHs at all luminosities (Fig. <ref>).
To better understand the potential implications for the ionizing efficiency of bright vs. very faint galaxies in these two limiting cases of model assumptions (i.e. bursty SFH with f_esc ≈ 0, or CSFH with 0 < f_esc < 0.8), we infer the distribution of ionizing photon production efficiencies (ξ_ion) as a function of UV magnitude in each case.
Here, we are considering ξ_ion defined as the production rate of LyC photons divided by the observed luminosity at rest-frame 1500 Å (i.e. no correction is made to L_1500 for dust attenuation or nebular continuum emission).
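A short numerical example of this definition is given below; the photon rate and luminosity values are placeholders chosen only to give a representative result.

```python
import numpy as np

N_ion  = 1.0e54   # LyC photon production rate [s^-1] (placeholder)
L_1500 = 2.0e28   # observed luminosity density at rest 1500 A [erg s^-1 Hz^-1] (placeholder)

log_xi_ion = np.log10(N_ion / L_1500)
print(f"log10(xi_ion / [Hz erg^-1]) = {log_xi_ion:.2f}")   # ~25.7 for these values
```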
With our TcSFH beagle models that adopt f_esc = 0, we infer median ionizing photon production efficiencies of log(ξ_ion / [Hz erg^-1]) = 25.65^+0.02_-0.02, 25.48^+0.02_-0.01, and 25.36^+0.03_-0.03 in the bright (⟨M_UV⟩ = -20.0), faint (⟨M_UV⟩ = -18.7), and very faint (⟨M_UV⟩ = -17.5) z∼6 subsets, respectively (Fig. <ref>c).
This is consistent with expectations given that the very faint population is inferred to generally have more very-recently declining SFHs with these models (Fig. <ref>b), resulting in a lower fraction of O stars producing the bulk of ionizing photons.
The standard deviation of the assumed log-normal distribution in ξ_ion is inferred to moderately increase from ≈0.20 dex in the bright and faint bins to ≈0.3 dex in the very faint bin, also consistent with the inferred larger variation in recent SFHs among the very faint population.
The UV luminosity dependence on ξ_ion is inferred to be much weaker with the CSFH models that allow for significant LyC leakage (Fig. <ref>f).
From these fits, the median inferred log(ξ_ion / [Hz erg^-1]) is 25.75^+0.02_-0.01, 25.64^+0.01_-0.01, and 25.60^+0.02_-0.01 in the respective UV luminosity bins, with the standard deviation in the range ≈0.15–0.18 dex in all bins.
Such a small UV luminosity dependence on ξ_ion yet large dependence on f_esc is accommodated in the models by pushing f_esc to very large values (≳0.5) in the very faint population.
We also note that the CSFH models allow for less dynamic range in the inferred ξ_ion of an individual galaxy given that a significant population of O stars is always assumed to be present (given the allowed age of a galaxy at z≳6), in contrast to scenarios with the TcSFH models where the SFR can sharply decline in the most recent ∼3 Myr (see Fig. <ref>).
These differences in model assumptions contribute to the systematically higher and narrower inferred ξ_ion distributions with the CSFH models.
The outputs of the TcSFH and picket-fence CSFH models therefore have very different implications for the contribution of galaxies to reionization along the UV luminosity function.
The picket-fence CSFH models yield a substantial, systematic increase in f_esc towards very faint UV luminosities yet a weak UV-luminosity dependence on ξ_ion, implying that the faintest galaxies vastly dominate the ionizing photon budget for reionization.
However, the TcSFH models assume that all galaxies, regardless of M_UV, have small f_esc, but imply that the most UV-luminous galaxies have systematically larger ξ_ion.
While the assumption that all reionization-era galaxies have small f_esc may very well be invalid, so too may be the assumption that all such galaxies have a CSFH given the abundance of extreme line emitters.
Our primary intent here is to demonstrate that existing uncertainties on what is physically driving the EW trends with M_UV directly result in large uncertainties on the relative contribution of galaxies along the UV luminosity function to reionization.
In summary, the UV-luminosity dependence of the [OIII]+Hβ and Hα EWs implied by the NIRCam data provides empirical evidence, on a statistical basis, for distinct differences in the nature of UV-bright and UV-faint z∼6-9 galaxies.
We have demonstrated that these trends can be explained by either lower metallicity and more recently declining star formation histories in the very UV-faint population, or lower metallicities and higher LyC escape fractions in the very UV-faint population.
The degeneracy of bursty SFHs and LyC leakage on the nebular lines cannot be broken with photometry alone, and it may very well be the case that both factors play a significant, perhaps connected, role (e.g. <cit.>).
As noted previously in this sub-section, additional mechanisms such as different IMFs, low-luminosity AGN, or α-enhanced metallicities could also play a significant role in driving the EW distribution trends found in <ref>–<ref>.
These various scenarios will have different implications for the contributions of galaxies along the UV luminosity function to cosmic reionization, and thus deep spectroscopic efforts are critical to better determine the physical mechanisms responsible for the [OIII]+Hβ and Hα EW trends.
§ OVERDENSITIES AROUND LYMAN-ALPHA EMITTERS AT Z > 7
It has long been proposed that galaxies exhibiting strong Lyα emission (EW>25 Å) at z>7 often trace strong galaxy overdensities that are capable of generating large ionized bubbles prior to the completion of reionization <cit.>.
In this scenario, strong z>7 Lyα emitters (LAEs) would commonly act as signposts of the earliest sites of structure formation.
Here, we use the improved photometric redshifts enabled by the JADES/NIRCam GOODS-N and GOODS-S imaging to better test the connection between overdensities and strong z>7 Lyα emission.
We search the literature for z>7 galaxies with known high-confidence Lyα detections in the GOODS fields (i.e. S/N>10 in Lyα alone or S/N>10 of a systemic line plus a >7σ Lyα detection), resulting in three objects: JADES-GS-z7-LA at z = 7.281 <cit.>, z8_GND_5296 at z = 7.508 <cit.>, and z7_GND_16863 at z = 7.599 <cit.>.
Two of these systems fall within the JADES/NIRCam footprint (z8_GND_5296 and JADES-GS-z7-LA) and we investigate whether either shows a significant photometric overdensity.
The galaxy z8_GND_5296 is a UV-bright (M_UV = -21.5) system in the GOODS-N field with a most recent Lyα EW measurement of 33±4 Å from <cit.>.
JADES-GS-z7-LA is, on the other hand, a very UV-faint (M_UV = -16.7) galaxy in the GOODS-S field with a near-unity Lyα escape fraction yielding an extremely large Lyα EW of 400±90 Å, implying that it lies within a very large ionized bubble <cit.>.
Both of these strong LAEs exhibit very high [OIII]+Hβ EWs of ≈1500 Å, placing them in the top ≈17% and ≈7% of the [OIII]+Hβ EW distribution, respectively, given their UV luminosities (see <ref>).
Here we focus on quantifying the photometric overdensities (following e.g. <cit.>) of the two z>7 LAEs in JADES noted above, given that deep spectroscopic follow-up of z∼7-9 candidates surrounding these rare sources remains incomplete.
We thus aim to supplement our base z∼7-9 selection criteria with additional cuts to better identify neighboring galaxies lying relatively close in redshift space to the two strong z>7 LAEs in JADES.
Fortunately, both of these LAEs lie at z=7.0-7.6 where relatively high-precision photometric redshifts can be obtained by exploiting the medium-band photometry, thereby improving our photometric overdensity estimates.
At z≈7.0-7.6, the [OIII]λλ4959,5007 doublet falls in the F410M filter yet outside of the F356W bandpass and thus galaxies in this redshift range with large EWs will show remarkably red F356W-F410M colors.
From the inferred [OIII]+Hβ EW distributions in the JADES dataset (see Table <ref> and Fig. <ref>), we expect the majority of M_UV < -18 galaxies at z≈7.3 will exhibit [OIII]+Hβ EWs of >400 Å which, assuming a flat rest-optical continuum (in F_ν), will result in F356W-F410M colors of ≳0.6 mag at z≈7.0-7.6.
Such prominent colors should be relatively easy to identify for all but the faintest objects in our deep JADES imaging.
Moreover, at z≈7.0-7.6 the F335M band is entirely redward of the Balmer break at 3648 Å and thus we would generally expect flat F335M-F356W colors given the typically little dust attenuation inferred for the sample (A_V ≲ 0.01 mag).
Guided by the above expectations for the colors of z≈7.0-7.6 galaxies, we identify potential neighbors of the very UV-faint (M_UV = -16.7) z=7.28 Lyα emitter JADES-GS-z7-LA and the UV-bright (M_UV = -21.5) z=7.51 Lyα emitter z8_GND_5296 by supplementing our F090W dropout selection criteria (<ref>) with the below additional cuts, which are also applied in the short code sketch that follows:
* F356W - F410M > 0.6
* F335M - F356W < 0.3
* F150W < 29.0
Here, we have added the condition of F150W < 29 (corresponding to M_UV ≲ -18.0) to ensure that our selection of candidate neighbors is fairly complete at z≈7.0-7.6 given the different depth tiers of the JADES imaging <cit.>.
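A minimal sketch of these supplementary cuts as a boolean mask over a photometric catalogue (the magnitude arrays below are hypothetical, not JADES measurements):

```python
import numpy as np

# Hypothetical magnitudes for two sources that already pass the F090W-dropout selection.
F150W = np.array([28.2, 29.4])
F335M = np.array([27.9, 27.0])
F356W = np.array([27.8, 27.1])
F410M = np.array([27.0, 26.9])

sel_z70_76 = ((F356W - F410M > 0.6) &   # strong [OIII]+Hbeta excess in F410M
              (F335M - F356W < 0.3) &   # flat continuum, i.e. no strong Balmer break
              (F150W < 29.0))           # completeness cut (roughly MUV <~ -18)
print(np.where(sel_z70_76)[0])          # indices of candidate z~7.0-7.6 neighbors
```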
The on-sky positions of the candidate z≈7.0-7.6 galaxies in the JADES GOODS-S and GOODS-N footprints are shown in Fig. <ref>, along with the positions of the two strong z>7 LAEs covered by JADES.
In the GOODS-S field containing the very UV-faint LAE, we identify 43 potential z≈7.0-7.6 galaxies with F150W < 29 across the 42.2 arcmin^2 JADES area with F335M coverage.
Across the 47.2 arcmin^2 JADES/GOODS-N area with F335M coverage, we find 42 z≈7.0-7.6 galaxies which includes the UV-bright (= -21.5) LAE that is the source of interest in this field (z8_GND_5296).
The very similar numbers of z≈7.0-7.6 candidates in each field despite significantly shallower NIRCam exposure times over much of GOODS-N <cit.> is consistent with past findings that the (completeness-corrected) number density of z∼7 Lyman-break galaxies is considerably higher over GOODS-N than GOODS-S <cit.>.
Below, we first discuss the surface density of potential z≈7.0-7.6 neighbors surrounding the LAEs selected from JADES, then proceed to calculate the surrounding photometric overdensities folding in the expected completeness of our selection.
It is immediately clear from Fig. <ref> that the surface density of z≈7.0-7.6 galaxies is unusually high near the GOODS-S LAE JADES-GS-z7-LA.
In particular, about 1 arcmin East of this very UV-faint LAE lies a complex of six distinct z≈7.0-7.6 galaxies with a maximum separation of 23 arcsec from one another.
The surface density of z≈7.0-7.6 galaxies in this very small (0.073 arcmin^2) region of GOODS-S is 87.3 arcmin^-2, a factor of ≈70× higher than that on average over the deep imaging region covering JADES-GS-z7-LA.
The six objects comprising this dense concentration of z≈7.0-7.6 galaxies have absolute UV magnitudes in the range -19.3 ≤≤ -18.2 and EWs ranging between ≈400–2000 Å.
Moreover, there are two UV-bright (-20.4 ≤≤ -20.2) galaxies located approximately 1 arcmin to the South of the UV-faint LAE JADES-GS-z7-LA.
We also identify five candidate z≈7.0-7.6 galaxies lying near (< 1.5 arcmin separation) the UV-bright LAE in GOODS-N (z8_GND_5296), all located roughly to the southeast (see Fig. <ref>).
Considering the smallest rectangular area that encloses all six of these objects, we obtain a surface density of 3.6 arcmin^-2, which is approximately 4× higher than the average over the full JADES/GOODS-N footprint with F335M coverage.
These five relatively nearby objects have inferred EWs of ≈500–1500 Å and UV magnitudes of -20.0 ≤≤ -18.4, indicating that they are all substantially (≥4×) fainter than z8_GND_5296.
We now quantify the photometric overdensities around the two strong z=7.3-7.5 LAEs in JADES by comparing the nearby z≈7.0-7.6 galaxy counts to that expected from literature luminosity functions at this epoch.
Our procedure largely follows that of <cit.> which we outline below.
As a first step towards comparing with the expected cosmic mean density, we must first correct our measured z≈7.0-7.6 galaxy counts for selection completeness.
To this end, we analytically estimate our selection completeness as a function of true redshift and absolute UV magnitude (as in ) by generating mock SEDs each with a flat continuum (in F_ν) and IGM attenuation incorporated using the analytic model from <cit.>.
Given that our selection for this overdensity analysis depends on [OIII]+Hβ emission via the F356W-F410M and F335M-F356W color cuts above, we also add [OIII]+Hβ emission into the mock SEDs following our results on the z∼7-9 [OIII]+Hβ EW distribution in <ref>, considering additional constraints on the z∼7 [OIII]+Hβ EW distribution among the very bright (-22.5 ≲ M_UV ≲ -21) population from <cit.>.
Specifically, for a given M_UV, we assume a log-normal [OIII]+Hβ EW distribution defined by median EW log(EW/Å) = max([2.87 , -0.16 (M_UV+20) + 2.87]) and standard deviation = min([0.25 , 0.08 (M_UV+20) + 0.3]) dex, and adopt a fixed [OIII]λ5007/Hβ ratio of 6 <cit.>.
In doing so, we account for our strong incompleteness to objects with relatively weak [OIII]+Hβ emission (EW≲400 Å) for this overdensity analysis, though we note that such objects are expected to be in the minority at F150W < 29 (M_UV ≲ -18) given the derived EW distributions.
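A short sketch of the parametrised EW draws used for the mock SEDs is given below; it is a simplified illustration of the stated formulae rather than the actual simulation code.

```python
import numpy as np

def draw_o3hb_ews(muv, n=1000, rng=np.random.default_rng(1)):
    """Draw [OIII]+Hbeta EWs [A] from the parametrised log-normal distribution
    adopted for the mock SEDs in the completeness simulations."""
    log_med = max(2.87, -0.16 * (muv + 20.0) + 2.87)   # log10(EW/A)
    sigma   = min(0.25,  0.08 * (muv + 20.0) + 0.30)   # width [dex]
    return 10.0 ** rng.normal(log_med, sigma, size=n)

for muv in (-22.0, -21.0, -19.0):
    print(f"MUV = {muv}: median mock EW ~ {np.median(draw_o3hb_ews(muv)):.0f} A")
```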
In the selection completeness simulations, for each grid point in redshift (z=6.5-8.5, 0.05 unit spacing) and absolute UV magnitude (-22 ≤≤ -17, 0.25 mag spacing), we generate 1000 mock SEDs each with a different EW value pulled randomly from the parametrized distribution above.
To account for photometric scatter in our selection, we add Gaussian noise to the `true' photometry derived from each mock SED.
The photometric noise is computed as a function of separately for each of the four different imaging regions (medium, deep, and very deep in GOODS-S as well as medium in GOODS-N) by fitting a linear relationship between and the logarithm of the photometric error in each band using data from all z∼7-9 galaxies selected in each imaging region.
We find that such a linear relationship adequately captures the general trend between photometric uncertainty and M_UV imposed by the typically larger galaxy sizes (and hence Kron apertures) at brighter luminosities.
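For illustration, such a fit could be performed as follows; the M_UV values and flux errors below are hypothetical stand-ins for one band and imaging region.

```python
import numpy as np

# Hypothetical MUV values and 1-sigma flux errors for one band and imaging region.
muv = np.array([-20.5, -19.8, -19.0, -18.4, -17.9])
flux_err = np.array([1.2, 1.5, 2.1, 2.6, 3.3])   # arbitrary flux units

# Linear fit of log10(photometric error) versus MUV.
slope, intercept = np.polyfit(muv, np.log10(flux_err), 1)
predicted_err = 10.0 ** (slope * (-18.0) + intercept)
print(f"predicted 1-sigma error at MUV = -18: {predicted_err:.2f}")
```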
The resulting simulated completeness of our z≈7.0-7.6 selection is shown in Fig. <ref> for each of the four different JADES/NIRCam imaging regions.
Consistent with our targeted redshift interval, the estimated completeness is ≳50% at z≈7.05-7.60 at the brightest luminosities and very low at redshifts z<6.9 and z>7.7 for all four regions.
We also find that the completeness is ≳50% at absolute UV magnitudes of ≲ -18.5 in the very deep and deep imaging regions in GOODS-S, while such relatively high completeness is achieved at a slightly brighter magnitude threshold of ≲ -19 in the medium-depth regions of GOODS-S and GOODS-N.
These selection completeness grids are convolved with the UV luminosity function of <cit.> to estimate the expected surface density counts of z≈7.0-7.6 galaxies in each imaging region under the assumption of cosmic mean density.
Following the methods of <cit.>, we compute the amplitude of the photometric overdensities around JADES-GS-z7-LA and z8_GND_5296 as a function of angular separation by dividing the GOODS-S and GOODS-N footprints into rings of varying concentric circular radii centered on each z>7 Lyα emitter.
In each ring, the amplitude of the photometric overdensity at that angular separation is computed as the surface density of objects identified by our z≈7.0-7.6 selection divided by the cosmic mean.
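A simplified sketch of this ring-based overdensity calculation is shown below; the candidate separations and cosmic-mean surface density are placeholder values rather than measurements from this work.

```python
import numpy as np

def overdensity_profile(sep_arcmin, outer_radii_arcmin, mean_density):
    """Photometric overdensity in concentric annuli centred on a Lya emitter.

    sep_arcmin         : separations of selected z~7.0-7.6 candidates from the LAE
    outer_radii_arcmin : outer radii of the annuli (ascending)
    mean_density       : expected surface density at cosmic mean [arcmin^-2],
                         already convolved with the selection completeness
    """
    edges = np.concatenate([[0.0], outer_radii_arcmin])
    counts, _ = np.histogram(sep_arcmin, bins=edges)
    areas = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)   # annulus areas [arcmin^2]
    return counts / areas / mean_density

# Toy example with hypothetical separations and a placeholder mean density.
print(overdensity_profile(np.array([0.3, 0.8, 0.9, 2.5]),
                          np.array([1.0, 2.0, 3.0]), mean_density=1.0))
```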
Given its near-unity Lyα escape fraction and relatively small velocity offset (≈120 km/s), JADES-GS-z7-LA is expected to reside in a very large (R ≳ 3 physical Mpc) ionized bubble <cit.>.
As discussed in <cit.>, this object cannot have created such a large bubble on its own given its very low luminosity (M_UV = -16.7), implying that the local ionizing photon budget must be dominated by neighboring galaxies.
With the JADES/GOODS-S imaging, we have identified a very dense group of z≈7.0-7.6 galaxies located ≈50 arcsec (≈250 kpc in projection) East of JADES-GS-z7-LA (Fig. <ref>) where the surrounding photometric overdensity is very large (≈25) on small scales (see Fig. <ref>).
It is conceivable that these objects all lie at very similar redshifts as JADES-GS-z7-LA and are assisting in powering a large ionized bubble in the vicinity <cit.>.
The two UV-bright (≈ -20.3) systems located at a very similar projected distance to the South of JADES-GS-z7-LA (Fig. <ref>) may be significantly contributing as well.
While the surface density of z≈7.0-7.6 galaxies measured in the ≲2 arcmin radius (≈600 kpc projected) around JADES-GS-z7-LA itself is consistent with the cosmic mean (see Fig. <ref>), this can easily rise by a factor of ∼2–3 if the nearby galaxies are largely located near JADES-GS-z7-LA along the line of sight (i.e. Δ z ≲ 0.1).
In contrast to the UV-faint Lyα emitter described above, we do not identify any significant photometric overdensity near the UV-bright (M_UV = -21.5) z=7.51 Lyα emitter z8_GND_5296 (Fig. <ref>).
The surface density of z≈7.0-7.6 objects on both small (<1.5 arcmin; <450 kpc projected) and large (6–10 arcmin; ∼2-3 Mpc projected) separations from z8_GND_5296 are consistent with the cosmic mean.
This may indicate that the strong Lyα emission from z8_GND_5296 is driven more by its internal physical properties than by a surrounding overdensity.
With an estimated [OIII]+Hβ EW of 1540^+220_-160 Å, z8_GND_5296 lies in the high-end tail of the [OIII]+Hβ EW distribution for UV-bright z∼7-9 galaxies (see Fig. <ref>).
Accordingly, this source has a large inferred ionizing photon production efficiency of log(ξ_ion / [Hz erg^-1]) = 25.73^+0.06_-0.05 (from the TcSFH beagle SED fits), indicating that it has a relatively large intrinsic Lyα EW <cit.>, thus requiring stronger attenuation (from either the host galaxy or IGM) to make it appear as a weak Lyα emitter.
The detection of the systemic [CIII]λ1908 line from z8_GND_5296 may also indicate that its Lyα photons are redshifted far into the damping wing (velocity offset ≈400 km/s; <cit.>), as commonly found among similarly bright z∼7 galaxies (see <cit.> and references therein), resulting in relatively low opacity through the IGM <cit.>.
However, this is speculative as it is unclear which component of the [CIII] doublet is currently detected given skyline contamination in the ground-based spectrum <cit.>.
Detailed spectroscopic measurements (e.g. <cit.>; Witstok et al. 2023 submitted; Helton et al. in prep) are required to better assess the relative role of internal properties in the strong Lyα emission seen from this UV-bright emitter, as well as to measure the redshifts of the neighboring z≈7.0-7.6 candidates to better constrain its surrounding overdensity (see also <cit.>).
§ SUMMARY
We have used deep nine-band NIRCam imaging taken as part of the JADES program to characterize the star-forming and ionizing properties of a very large (N=756) sample of Lyman-break z∼6-9 galaxies.
This sample includes hundreds of reionization-era galaxies in the very UV-faint regime (-18 ≲ M_UV ≲ -16.5; see Fig. <ref>) where it was effectively impossible to measure the rest-optical SEDs (at least among a statistical sample) with Spitzer/IRAC.
Our main conclusions are summarized below.
* The faintest galaxies in our sample (M_UV ∼ -17) tend to have inferred stellar masses of (1–3)×10^7 M_⊙ while those at the brightest end of our sample (-22 ≲ M_UV ≲ -21) have ∼100× higher stellar masses (Fig. <ref>). Masses inferred from models with priors weighted towards extended SFHs (i.e. `continuity' priors) are systematically ≈0.5 dex higher than those assuming CSFH, though can rise to factors of 10–100× higher in the most extreme cases for galaxies with the youngest light-weighted ages (∼3 Myr; see Fig. <ref>g).
* There are no galaxies in our sample where the photometric data clearly imply very large stellar masses (>3×10^10 M_⊙). There are only 13 galaxies in our sample with inferred stellar masses >3×10^9 M_⊙ when adopting the continuity prior that favors extended SFHs (Fig. <ref>). Some of these objects are relatively faint (M_UV ∼ -19) and show strong Balmer breaks consistent with old stellar populations. Others are simply so luminous (M_UV ∼ -22) that a massive evolved stellar population can be hidden under the light of more recently-formed stars. The most massive candidate in our sample has an inferred mass of 1×10^10 M_⊙ when including rest-frame near-infrared SED constraints from parallel MIRI imaging (Fig. <ref>).
* The typical galaxy in our sample shows a NIRCam SED consistent with a young light-weighted age of ∼50 Myr (Fig. <ref>). However, we confidently identify several individual galaxies showing strong Balmer breaks implying much older ages (∼250–1000 Myr; Fig. <ref>). Some of the objects with confident strong Balmer breaks show signs of recent star formation from significant emission line signatures while others show no indications of emission lines in the photometry. Follow-up spectroscopy is necessary to determine whether any galaxies in this latter sub-population may be (temporarily) quenched.
* We find a strong, highly-significant decline in the typical [OIII]+Hβ EWs of z∼6-9 galaxies towards lower UV luminosities (median EW≈800 Å at M_UV = -20 yet ≈350 Å at M_UV = -17.5; see Fig. <ref> and Table <ref>). We verify that this EW trend with M_UV is reflected in the typical NIRCam colors of our sample (Fig. <ref>). The [OIII]+Hβ EW distribution is also found to broaden considerably at lower UV luminosities (standard deviation ≈0.3 dex at M_UV = -20 yet ≈0.5 dex at M_UV = -17.5). We infer a slight (≈0.05–0.1 dex) decline in the typical [OIII]+Hβ EW between the z∼6 sample and the z∼7-9 sample at fixed M_UV.
* In remarkable contrast to the strong decline in [OIII]+Hβ EW with M_UV, we find that the Hα EW distribution changes very weakly (if at all) across a factor of ≈10 in UV luminosity among our z∼6 sample (where Hα falls in F444W; Fig. <ref>). If we assume that the z∼6 Hα EW distribution does not change with M_UV over the range probed by our z∼6 sample, we infer a log-normal distribution with median EW=630^+10_-40 Å and a standard deviation of 0.26±0.01 dex.
* We demonstrate that the [OIII]+Hβ and Hα EW trends with M_UV can be explained by a combination of lower metallicity and systematically more recently-declining SFHs at lower UV luminosities (top panels of Fig. <ref>). In this interpretation, the brightest z∼6 galaxies in our sample (⟨M_UV⟩ = -20.0) have a median inferred metallicity ≈3× higher than that of the faintest z∼6 galaxies (⟨M_UV⟩ = -17.5; Z ≈ 0.06 Z_⊙). Moreover, the median inferred ratio of SFR averaged over the past 3 Myr to that averaged over the past 50 Myr (SFR_3 Myr / SFR_50 Myr) is found to be ≈3.5× higher among the brightest z∼6 objects relative to the faintest galaxies. The brightest galaxies are frequently inferred to be experiencing a recent strong upturn in SFR (i.e. SFR_3 Myr / SFR_50 Myr > 1) while the faintest galaxies are inferred to be a more even mixture of objects experiencing a recent strong rise in SFR and objects experiencing a recent strong downturn in SFR.
* There are two particular subsets of our sample which provide evidence in favor of bursty SFHs. A substantial fraction of galaxies in our sample (≈10–20% depending on M_UV) are inferred to exhibit extremely high [OIII]+Hβ EWs (>1500 Å; Fig. <ref>) implying that they have recently experienced a dramatic (factor ≳5) increase in SFR over the past 3 Myr (upper panels of Fig. <ref>). Another subset of galaxies in our JADES sample shows relatively weak Balmer breaks yet also weak nebular line signatures (Fig. <ref>) implying that they may be experiencing a lull in star formation activity that followed a burst of star formation that occurred ∼5–50 Myr ago (lower panels of Fig. <ref>). These two sub-populations may therefore be caught during the peaks and troughs of bursty star formation histories among reionization-era galaxies. We note that other recent studies are building empirical evidence for bursty SFHs among early galaxies <cit.>.
* We also discuss how the [OIII]+Hβ and Hα EW trends with M_UV may be influenced by a strong correlation between UV luminosity and LyC escape fraction (bottom panels of Fig. <ref>). When we allow for substantial LyC escape yet enforce constant SFHs, we infer that the faintest z∼6 galaxies in our sample (⟨M_UV⟩ = -17.5) are typically very efficient at leaking LyC photons into the IGM (median f_esc ≈ 0.5) while the brightest (⟨M_UV⟩ = -20.0) objects leak only a moderate fraction of their LyC photons (median f_esc ≈ 0.08). We discuss how this scenario has very different implications for the contribution of galaxies along the luminosity function to cosmic reionization compared to the interpretation of bursty SFHs. This highlights the need for deep spectroscopic follow-up to better determine the physical origin of the [OIII]+Hβ and Hα EW trends with UV luminosity found in this analysis.
* Finally, we quantify the photometric overdensities around two strong Lyα emitters at z>7 in the JADES footprint. The very UV-faint (M_UV=-16.7) z=7.28 Lyα emitter lies close to a very dense concentration of z≈7.0-7.6 galaxies (Fig. <ref>) that may have helped generate a large ionized bubble leading to efficient Lyα transmission through the largely neutral IGM. However, the UV-bright (M_UV=-21.5) Lyα emitter shows no significant nearby overdensity (Fig. <ref>) perhaps suggesting that efficient ionizing photon production and a large Lyα velocity offset strongly contributed to its Lyα detection, and that not all strong Lyα emitters in the reionization era necessarily occupy large ionized bubbles.
The JADES results presented here provide a valuable observational baseline against which to compare predictions of reionization-era galaxy properties from models <cit.>.
Such future work will help build insight into the nature and assembly of the faintest (and brightest) z∼6-9 galaxies, and their role in cosmic reionization.
§ ACKNOWLEDGEMENTS
RE thanks John Chisholm, Charlotte Mason, and Adele Plat for enlightening discussions that improved this work.
EE, DJE, BDJ, BR, GR, MR, FS, DPS, LW & CNAW acknowledge support from JWST/NIRCam contract to the University of Arizona NAS5-02015.
DPS acknowledges additional support from the National Science Foundation through the grant AST-2109066.
LW acknowledges additional support from the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2137419.
WB acknowledges support by the Science and Technology Facilities Council (STFC), ERC Advanced Grant 695671 "QUENCH".
KB acknowledges support by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.
AJB, AJC, JC, IEBW, AS & GCJ acknowledge funding from the "FirstGalaxies" Advanced Grant from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 789056)
S.C acknowledges support by European Union’s HE ERC Starting Grant No. 101040227 - WINGS.
ECL acknowledges support of an STFC Webb Fellowship (ST/W001438/1)
A.L.D. thanks the University of Cambridge Harding Distinguished Postgraduate Scholars Programme and Technology Facilities Council (STFC) Center for Doctoral Training (CDT) in Data intensive science at the University of Cambridge (STFC grant number 2742605) for a PhD studentship.
DJE is also supported as a Simons Investigator.
TJL and RM acknowledge support by the Science and Technology Facilities Council (STFC) and by the ERC through Advanced Grant 695671 "QUENCH".
RM also acknowledges funding from a research professorship from the Royal Society.
DP acknowledges support by the Huo Family Foundation through a P.C. Ho PhD Studentship.
RS acknowledges support from a STFC Ernest Rutherford Fellowship (ST/S004831/1).
The research of CCW is supported by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
JW acknowledges support from the ERC Advanced Grant 695671, “QUENCH”, and the Fondation MERAC.
This material is based in part upon High Performance Computing (HPC) resources supported by the University of Arizona TRIF, UITS, and Research, Innovation, and Impact (RII) and maintained by the UArizona Research Technologies department.
The authors acknowledge use of the lux supercomputer at UC Santa Cruz, funded by NSF MRI grant AST-1828315.
This work made use of the following software: numpy <cit.>; matplotlib <cit.>; scipy <cit.>; astropy[<https://www.astropy.org/>], a community-developed core Python package for Astronomy <cit.>; Source Extractor <cit.> via sep <cit.>; photutils <cit.>; beagle <cit.>; multinest <cit.>; prospector <cit.>; dynesty <cit.>; sedpy <cit.>; and fsps <cit.> via python-fsps <cit.>.
§ DATA AVAILABILITY
The ACS data used in this work are available from the Hubble Legacy Field archive (<https://archive.stsci.edu/prepds/hlf/>).
A portion of the JADES data utilized in this work is now available as an early release via the Mikulski Archive for Space Telescopes (<https://archive.stsci.edu/hlsp/jades>).
The remaining JADES data will be released following the 12 month proprietary period, if not sooner as part of a subsequent early release by the team.
|
http://arxiv.org/abs/2306.06184v1
|
20230609182104
|
A Unified Model and Dimension for Interactive Estimation
|
[
"Nataly Brukhim",
"Miroslav Dudik",
"Aldo Pacchiano",
"Robert Schapire"
] |
cs.LG
|
[
"cs.LG",
"stat.ML"
] |
A Unified Model and Dimension for Interactive Estimation
=========================================================
We study an abstract framework for interactive learning called
interactive estimation in which
the goal is to estimate a target from its “similarity” to points queried by the learner.
We introduce a combinatorial measure called the dissimilarity dimension
which largely captures learnability in our model.
We present a simple, general, and broadly-applicable algorithm, for which we obtain both regret and PAC generalization bounds that are polynomial in the new dimension.
We show that our framework subsumes and thereby unifies
two classic learning models:
statistical-query learning and structured bandits.
We also delineate how the dimension is related to well-known parameters for both frameworks, in some cases yielding significantly improved analyses.
§ INTRODUCTION
We study a general interactive learning protocol called interactive estimation.
In this model, the learner repeatedly queries the environment with an element from a set of alternatives,
and observes a stochastic reward whose expectation is given by an arbitrary measure of the “similarity” between the queried alternative and the unknown ground truth.
Thus, in rough terms,
the goal is to estimate a target from its similarity to queried alternatives.
By studying such a general abstraction of interactive learning,
we are able to reason about the properties of a very broad family of learning settings, and to make connections across a variety of contexts.
Our results are based on a combinatorial complexity measure we introduce called the dissimilarity dimension, which we show largely captures learnability in our model. Intuitively, this measure corresponds to the length of the longest sequence of alternatives
in which each one has a similar suboptimal value of similarity to all its predecessors. We then use the measure to analyze the performance of a simple, broadly-applicable class of algorithms which repeatedly make new queries that best fit the preceding observations. We prove both regret bounds and PAC generalization bounds that are all polynomial in the dimension.
We show that our learning framework subsumes two classic learning models that were seemingly unrelated prior to this work:
First, our model subsumes
the statistical query (SQ) model, introduced by <cit.> for designing noise-tolerant learning algorithms.
In the SQ model, the learner can sequentially ask certain queries of an oracle, who responds with answers that are only approximately correct, with the goal of correctly estimating a target. Despite its simplicity, it has been proven to be a powerful model. Indeed, a wide range of algorithmic techniques in machine learning are implementable using SQ learning. Thus, it has been
proven useful, not only for designing noise-tolerant algorithms, but also for its connections to other noise
models, and as an explanatory tool to prove
hardness of many problems
(see the survey of <cit.>).
We show that our framework subsumes the SQ model, and furthermore that the dissimilarity dimension generalizes well-known parameters that characterize SQ learnability.
Second, our model captures
structured bandits, in which the learner repeatedly chooses actions
which yield stochastic rewards, with the goal of minimizing regret relative to the best action in hindsight. Over more than a decade, the eluder
dimension <cit.> has been a central technique for analyzing regret for contextual
bandits and reinforcement learning (RL) with function approximation <cit.>.
We will see that
the dissimilarity dimension is upper-bounded by the eluder dimension, and that there can in fact be a large gap between the two. This sometimes leads to an improved analysis when relying on the proposed measure rather than the eluder dimension.
Because SQ and bandits are both subsumed by our framework, all the results mentioned above directly apply to those settings as well,
including the applicability of our general-purpose algorithms.
To summarize, our main contributions are as follows:
* Unified framework. We derive a general framework which captures various interactive learning settings, including specifically SQ and bandits.
* Novel dimension, performance bounds. We introduce the dissimilarity dimension, which largely characterizes learnability in our framework. We study a general, simple algorithm, and give a novel analysis that results in both regret and PAC generalization bounds that are polynomial in the new
dimension. We also give lower bounds in the SQ and bandit settings.
* Improved analysis.
We show instances in which the standard analysis of a certain class of algorithms using the eluder dimension yields bounds that are arbitrarily large, but in which an analysis using our dimension yields low regret bounds.
Related work.
The interactive estimation model we consider in this work is defined with respect to an evaluation function that can be thought of as an arbitrary measure of the “similarity” between
the queried alternative and the target.
Previously, <cit.> developed a theory
of similarity-based learning that generalizes kernel methods,
providing sufficient conditions for a similarity function to be useful for learning.
<cit.> review several approaches to classification based on similarity between examples, including, for instance, kernels and nearest neighbors.
<cit.> studied a learning-by-distances model that resembles ours using a metric as a measure of similarity.
In comparison to these works,
our model admits an arbitrary similarity measure for which we derive a general dimension, algorithm, and bounds.
In the context of bandit and reinforcement learning, a parameter called the decision-estimation coefficient (DEC) has recently been proposed by <cit.>
to characterize learnability in interactive decision making. Unlike DEC, our dimension is combinatorial in nature, and applies to settings like SQ, which are not captured by DEC.
As discussed above, our model subsumes SQ and bandits,
both of which have been extensively studied
(see the references above as well as various surveys <cit.>).
§ SETTING
In this paper, we study a general interactive learning protocol called interactive estimation. In this protocol, the learner is provided with a set of alternatives 𝒵 and an evaluation function ρ : 𝒵 × 𝒵 → [-1,1].
Intuitively, ρ can be viewed as a measure of “similarity,” though it need not be symmetric.
There is also a distinguished alternative z^* ∈ 𝒵 called the target, fixed throughout the interaction, and unknown to the learner.
In each of a sequence of steps t=1,…,T, the learner selects
one alternative z_t ∈ 𝒵 and receives a stochastic reward r_t ∈ [-1,1] drawn independently, conditioned on z_t,
with expectation satisfying 𝔼[r_t | z_t] = ρ(z_t; z^*).
Informally,
by choosing alternatives and observing their similarity to z^*, the learner aims to get close to the target.
The special case when r_t = ρ(z_t; z^*),
that is, when rewards are deterministic functions of the queried alternatives, is referred to as the deterministic setting.
We generally assume ρ(z^*; z^*) ≥ ρ(z; z^*) for all z ∈ 𝒵 and denote this optimal value as α^* ≔ ρ(z^*; z^*). We will assume that the value of α^* is known to the learner or that we are provided with an alternate optimality level α ≤ α^* such that the task is to identify z with ρ(z; z^*) ≥ α. At the end of Section <ref> we discuss how this assumption can be relaxed.
We consider two alternative goals for a learner in this model: sublinear regret and PAC generalization.
A learner achieves sublinear regret relative to an optimality level α ≤ α^* if
Reg(T,α) = o(T), where
Reg(T,α) = 𝔼[ ∑_{t=1}^{T} ( α − ρ(z_t; z^*) ) ].
We say that a learner achieves PAC generalization if
for any ϵ, δ > 0 and α≤α^*,
with probability at least 1-δ (over the randomness of the query responses and the learner's own randomization),
after m(ϵ,δ,α) interactions in the protocol above, the learner outputs ẑ such that
ρ(ẑ; z^*) ≥ α − ϵ. The function m(ϵ,δ,α) is referred to as the sample complexity.
We recover standard notions of regret and PAC generalization by setting α=α^*.
[Point on a sphere]
Let ‖·‖ denote the standard Euclidean norm in ℝ^n and let
𝒵 = 𝕊_n-1 = {x∈ℝ^n : ‖x‖=1} be the unit sphere in ℝ^n.
The goal is to estimate an unknown point x^*∈𝕊_n-1 based on rewards equal to the inner product between the queries and the target, that is, r_t = ρ(x_t | x^*) = ⟨ x_t, x^*⟩.
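To make the protocol concrete, the following minimal Python sketch simulates this example: the target is a fixed unit vector, queries are unit vectors, and each response is the inner product plus a small amount of noise. The dimension, seed, and uniform noise model are our own illustrative choices (and the noise may push a response marginally outside [-1,1]); they are not part of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x_star = rng.normal(size=n)
x_star /= np.linalg.norm(x_star)           # the unknown target on the unit sphere

def query(x, noise=0.05):
    """One round of the protocol: the reward has mean <x, x_star>."""
    return float(x @ x_star + rng.uniform(-noise, noise))

x = rng.normal(size=n)
x /= np.linalg.norm(x)
print(query(x))        # noisy similarity of the query to the target
print(query(x_star))   # concentrates around alpha* = <x*, x*> = 1
```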
We now introduce the two main examples corresponding to classic learning models that are subsumed by the interactive estimation model.
[Structured bandits]
Let 𝒜 be an action set, ℱ a space of reward functions f : 𝒜→[-1,1], and f^*∈ℱ the target reward function.
In step t, the learner chooses an action a_t∈𝒜 and receives reward r_t∈[-1,1] with 𝔼[r_t | a_t]=f^*(a_t). The goal is to maximize the sum of rewards.
Let a^* = argmax_a∈𝒜 f^*(a) be an optimal action.
To represent bandits in our formalism, we let 𝒵 = ℱ×𝒜,
z^* = (f^*, a^*), and
ρ((f,a) | (f^*, a^*)) = f^*(a).
The structured bandit problem has been extensively studied, and Example <ref> captures its expressiveness within our framework. For example, it recovers perhaps the simplest case, K-armed bandits, by considering 𝒜 = {1,…,K} and ℱ = [0,1]^K. At each round, the learner chooses an arm a_t ∈𝒜 and observes a reward r_t drawn from a distribution with mean f^*(a_t).
See Appendix <ref> for a more concrete example of K-armed bandits instantiated within our framework.
In Section <ref> we also give concrete bounds for other example classes including linear bandits and GLM bandits.
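As a concrete illustration of how the structured-bandit example recovers K-armed bandits (the formal treatment is deferred to the appendix), the sketch below encodes an alternative as a pair (f, a) and evaluates it against the target (f^*, a^*) via ρ((f,a) | (f^*,a^*)) = f^*(a). The particular mean vector and the Bernoulli reward model are illustrative assumptions.

```python
import numpy as np

K = 4
f_star = np.array([0.2, 0.8, 0.5, 0.3])   # target mean-reward function f*: one value per arm
a_star = int(np.argmax(f_star))            # optimal action a*
alpha_star = f_star[a_star]                # rho((f*,a*) | (f*,a*)) = f*(a*)

def rho(z, z_target):
    """Evaluation function: rho((f, a) | (f*, a*)) = f*(a)."""
    f, a = z
    f_t, a_t = z_target
    return f_t[a]

rng = np.random.default_rng(1)

def pull(a):
    """Stochastic reward with mean f*(a), supported on {0, 1}."""
    return float(rng.random() < f_star[a])

z = (np.array([0.5, 0.5, 0.5, 0.5]), 2)    # a candidate alternative (f, a)
print(rho(z, (f_star, a_star)), pull(2))
```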
[SQ learning]
Given a domain 𝒳, the goal is to learn a binary
classifier h^* : 𝒳→{± 1} from some hypothesis
class ℋ⊆{± 1}^𝒳, based on training examples (x,y) drawn from some distribution D such that y=h^*(x). In step t, the learner produces a hypothesis
h_t and observes the accuracy of h_t on a fresh finite sample. In this case, 𝒵 = ℋ, the evaluation function is equal to the expected accuracy ρ(h | h^*) = 𝔼_x∼D[h(x)h^*(x)], and the reward is the empirical accuracy on a fresh sample.
The SQ learning model considered in this work (Example <ref>) differs from the original model of <cit.>
because it is restricted, as in previous works <cit.>, to so-called correlational queries (called CSQs) and assumes stochastic responses, as opposed to allowing arbitrary queries and adversarial responses.
We discuss relationships between various SQ variants in Appendix <ref>.
We finish this section by introducing a central concept of this paper, a new combinatorial complexity measure called the dimension which, as we will see, largely captures learnability in the interactive estimation protocol.
For a set 𝒵, scalars α∈ℝ, ϵ > 0, and evaluation function ρ:𝒵×𝒵→ [-1,1], the dissimilarity dimension 𝔡(𝒵, α, ϵ) is the largest integer d for which there exist z_1,…, z_d ∈𝒵 with ρ(z_i | z_i) ≥α, and a scalar c ≤α - ϵ, such that for all i<j,
|ρ(z_i | z_j) - c| ≤ϵ/√(d).
Furthermore, denote the monotonic dissimilarity dimension as 𝔡̄(𝒵, α, ϵ) := max_ϵ' ≥ϵ 𝔡(𝒵, α, ϵ').
Note that 𝔡(𝒵, α, ϵ)=0 if there is no z such that ρ(z | z)≥α, and otherwise 𝔡(𝒵, α, ϵ)≥ 1. In particular, if α≤α^* then 𝔡(𝒵, α, ϵ)≥ 1.
In rough terms, this dimension corresponds to the longest sequence of points with α-large self-evaluations, such that the evaluation ρ(z_i | z_j) of each point z_i relative to every successive point z_j is “small” (significantly less than α), and also tightly clustered around some value c.
Thus, each point is similar to itself, but dissimilar from all successive points to about the same degree.
The idea is illustrated in Figure <ref>. The monotonic dissimilarity
dimension is the tightest upper bound on the dissimilarity dimension that is non-increasing in ϵ.
Various concrete examples where the dimension can be bounded are provided in
Section <ref>.
For instance, using a general bound for linear bandits from Section <ref>, we can show that for the task of finding a point on a sphere based on inner products (Example <ref>), the monotonic dissimilarity dimension satisfies 𝔡̄(𝒵, α, ϵ) ≤ 4n + 3, a bound that is independent of both α and ϵ.
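Because the dissimilarity dimension is defined through an explicit finite certificate, one can check mechanically whether given points z_1,…,z_d witness 𝔡(𝒵, α, ϵ) ≥ d. The sketch below is a literal transcription of the definition, with ρ, α, ϵ, and the candidate points supplied by the caller; it is not an efficient procedure for computing the dimension.

```python
import itertools
import math

def is_dissimilarity_certificate(points, rho, alpha, eps):
    """Check whether z_1,...,z_d witness dimension >= d = len(points):
    every self-evaluation is >= alpha, and some c <= alpha - eps satisfies
    |rho(z_i | z_j) - c| <= eps / sqrt(d) for all i < j."""
    d = len(points)
    if any(rho(z, z) < alpha for z in points):
        return False
    offdiag = [rho(points[i], points[j]) for i, j in itertools.combinations(range(d), 2)]
    if not offdiag:                      # d <= 1: only the self-evaluation condition applies
        return True
    hi, lo = max(offdiag), min(offdiag)
    slack = eps / math.sqrt(d)
    # a feasible c must lie in [hi - slack, lo + slack] and satisfy c <= alpha - eps
    return (hi - lo) <= 2 * slack and (hi - slack) <= alpha - eps
```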
§ ALGORITHMS AND UPPER BOUNDS
In this section we analyze algorithms for the interactive estimation protocol, which we call interactive estimation algorithms.
We show that when an interactive estimation algorithm satisfies
two properties, large self-evaluations and decaying estimation error, then its regret can be bounded using the dimension. We introduce a simple algorithm (Algorithm <ref>), which satisfies these properties for many standard classes of alternatives.
The first property requires that the algorithm only select alternatives that would achieve an expected reward of at least α if they were the target:
An interactive estimation algorithm has α-large self-evaluations if at every time step t=1,…,T, it selects a query z_t such that z_t ∈𝒵_α, where
𝒵_α = { z ∈𝒵 : ρ(z | z) ≥α}.
Algorithm <ref> satisfies this property, with the optimality level α provided as input. At the end of the section, we discuss the case when α is not provided and α^* is unknown. We derive an optimistic version of Algorithm <ref> that achieves α^*-large self-evaluations with high probability.
The second property states that the queries produced by the algorithm provide increasingly good estimates
of the expected rewards in the previous rounds (that is they are good estimators
with the benefit of hindsight), as quantified by the square loss.
An interactive estimation algorithm has decaying estimation error if there exists C_T,δ≥ 0 growing sublinearly in T, that is, C_T,δ=o(T), such that with probability at least 1-δ the sequence of queries z_1,…,z_T produced by the algorithm satisfies
∑_i=1^t-1 ( ρ(z_i | z_t) - ρ(z_i | z^*) )^2 ≤ C_T,δ
for all t∈{1,…,T} simultaneously.
Algorithm <ref> optimizes an empirical version of
Eq. (<ref>), with the observed rewards r_i in place of the expectations ρ(z_i | z^*). Thus, in the deterministic setting, with r_i = ρ(z_i | z^*), Algorithm <ref> satisfies this property with C_T,δ=0. It can also be shown that it satisfies this property when the set of alternatives is finite:
Assume that |𝒵| < ∞.
Then Algorithm <ref> satisfies the decaying estimation error property with
C_T,δ = O(ln(2T|𝒵|/δ)).
In the general case, when 𝒵 is infinite, we show that Algorithm <ref> satisfies the decaying estimation error property with C_T,δ = O(log(TN/δ)), where N is a suitable covering number of 𝒵 (see Corollary <ref> in Appendix <ref>). For example, for linear bandits, which is an instance of Example <ref> in which the action set and function class correspond to a subset of the unit ball in ℝ^n, we obtain C_T,δ = O(n log(1/ϵ) + log(T/δ)).
In Appendix <ref>, we discuss an approach in which we have access to an online regression oracle for the least squares problem in step <ref>. We show that a suitably modified version of Algorithm <ref> has a decaying estimation error as long as the online regression oracle achieves a sublinear regret
(but without further dependence on a covering number).
To develop some intuition for how Algorithm <ref> works,
we can again consider the K-armed bandit problem (a special case of Example <ref>), and suppose that α=0.75. In each step, the algorithm picks a pair (f_t,a_t), where f_t∈[0,1]^K is the vector of mean reward estimates and a_t is the arm with the largest mean estimate. The estimates f_t(a), a=1,…,K, are formed by optimizing the least squares error of the observed rewards, under the constraint that at least one of the mean estimates must be above 0.75. As a result, the algorithm pulls the arm with the largest average reward as long as that average is above 0.75 (arms that have not been pulled are assumed to have averages above 0.75). If all the averages are below 0.75 then the algorithm selects the arm a with the smallest value n_a(0.75-μ̂_a)^2, where n_a is how many times the arm has been pulled so far and μ̂_a is its average reward; it can be verified that this solves the least squares problem subject to the constraint that at least one of the mean estimates is above 0.75.
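The following sketch spells out this special case in code: it pulls an arm whose empirical average clears α (treating unpulled arms as feasible), and otherwise pulls the arm minimizing n_a(α-μ̂_a)^2, as described above. The Bernoulli environment and the specific means are our own illustrative stand-ins, and the sketch is only meant to mirror the behavior described in this paragraph, not to replace the general algorithm.

```python
import numpy as np

def constrained_ls_bandit(pull, K, alpha, T):
    """K-armed specialization of the constrained least-squares strategy (sketch).
    pull(a) returns a stochastic reward in [0, 1] with mean f*(a)."""
    counts = np.zeros(K, dtype=int)
    sums = np.zeros(K)
    rewards = []
    for _ in range(T):
        means = np.divide(sums, counts, out=np.full(K, np.inf), where=counts > 0)
        # unpulled arms (mean = inf) and arms with average >= alpha are feasible
        feasible = np.flatnonzero(means >= alpha)
        if feasible.size > 0:
            a = int(feasible[np.argmax(means[feasible])])
        else:
            # all averages below alpha: smallest least-squares penalty n_a (alpha - mean_a)^2
            a = int(np.argmin(counts * (alpha - sums / counts) ** 2))
        r = pull(a)
        counts[a] += 1
        sums[a] += r
        rewards.append(r)
    return np.array(rewards)

# illustrative environment: Bernoulli arms with means 0.3, 0.6, 0.8
means = np.array([0.3, 0.6, 0.8])
rng = np.random.default_rng(1)
rs = constrained_ls_bandit(lambda a: float(rng.random() < means[a]), K=3, alpha=0.75, T=2000)
print(rs.mean())   # empirical average reward; roughly approaches 0.8 as the best arm dominates
```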
We next state our main results: a regret bound and a PAC generalization guarantee. They are both based on bounding how many “bad” queries
any algorithm with large self-evaluations and decaying estimation error can make. Concretely, we say that a query z ∈𝒵 is ϵ-bad if its
suboptimality gap is greater than ϵ, that is, if
ρ(z | z^*) < α - ϵ.
The next lemma shows that the number ofϵ-bad queries is upper bounded polynomially in the dimension.
Let ϵ,δ>0, and let d= _(, α, ϵ) < ∞
for some set , evaluation function and α≤α^*.
Let be an interactive estimation algorithm with α-large self-evaluations and a decaying
estimation error with some C_T,δ. Then, with probability at least 1-δ, the number of ϵ-bad queries that makes in T steps is
at most
2d^1.5ln(4/ϵ) + 12d^2.5C_T,δ/ϵ^2.
Consequently, if C_T,δ≥ln(2T), then with probability at least 1-δ, the number of ϵ-bad queries is at most 36d^2.5C_T,δ/ϵ^2, and if C_T,δ=0 then it
is at most 2d^1.5ln(4/ϵ).
The above result is the core component of our main theorems. The proof is given in Appendix <ref>; here we sketch the main ideas. The goal is to show that the “bad”
interval[-1, α- ϵ]cannot contain too many queries made by. The proof starts by partitioning this interval
into disjoint subintervals and then bounds the number of queries in each subinterval. It does so by constructing a graph with nodes corresponding to queries, which are connected by an edge if they satisfy the dimension conditions. The decaying estimation error implies a certain minimum number of edges (as a function of the number of queries). On the other hand, the dissimilarity dimension bounds the size of the largest clique, which implies an upper bound on the number of edges (using Turán’s Theorem <cit.>, a standard result from extremal graph theory). Combining the bounds yields an upper bound on the number of queries in the subinterval. Summing across subintervals proves the lemma.
The following theorems use Lemma <ref> to bound both the regret and PAC sample complexity. The proofs are deferred to Appendices <ref>
and <ref>.
Let δ, T > 0,
and let d= _(, α, 1/T)
for some set , evaluation function and α≤α^*.
Let be an interactive estimation algorithm with α-large self-evaluations and a decaying
estimation error with some C_T,δ. If C_T,δ≥ln(2T) then with probability at least 1-δ, the regret of satisfies
Regret(T,α)≤ 1+ 12d^1.25√(C_T,δ T).
In the deterministic setting,
Regret(T,α)≤ 1+12d^1.5.
For an algorithm with a decaying estimation error, the term C_T,δ is sublinear in T, implying a sublinear regret in Theorem <ref>. For example, Algorithm <ref> has a decaying estimation error with C_T,δ that scales logarithmically with T/δ for many standard function classes, and so the overall regret scales as O(√(T log T)) (see Corollary <ref> in Appendix <ref>).
To derive PAC generalization guarantees, we apply a variant of online-to-batch reduction to any algorithm with large self-evaluations and a decaying estimation error. The resulting approach, shown in Algorithm <ref>, satisfies the following guarantee (proved in Appendix <ref>):
Let ϵ,δ>0, and let d= _(, α, ϵ)
for some set , evaluation function and α≤α^*.
Let be an interactive estimation algorithm with α-large self-evaluations and a decaying
estimation error with C_T,δ≥ln(2T),
and suppose that we run
Algorithm <ref> with
as the base algorithm,
T≥64d^2.5(C_T,δ/2)/ϵ^2,
n_1=⌈log_2(4/δ)⌉,
and
n_2=⌈ 128ln(8n_1/δ)/ϵ^2⌉.
Then, with probability at least 1-δ, the output ∈ satisfies
( z^*) ≥α - ϵ,
and the overall number of issued queries is
Od^2.5(C_T,δ/2) +ln^2(1/δ)/ϵ^2.
In the deterministic setting,
it suffices to run
with T>2d^1.5ln(4/ϵ) and return ẑ = z_t̂ where t̂ = _t∈{1,…,T} r_t is the index of the largest observed reward.
Then, with probability 1, we obtain ( z^*) ≥α - ϵ and issue at most O(d^1.5ln(4/ϵ)) queries.
Unknown α^* and optimism. Algorithms <ref> and <ref> achieve performance guarantees with respect to a provided optimality level α≤α^*. When it is not easy to provide a non-trivial α (for example, when α^* is unknown and cannot be non-trivially bounded), Algorithm <ref> uses the optimistic least squares algorithmic template (see, e.g., <cit.>) to ensure α^*-large self-evaluations with high probability and to achieve a sublinear Regret(T,α^*). Algorithm <ref> takes as input a confidence radius parameter R of the same order as the decaying estimation error parameter C_T,δ for Algorithm <ref>. We can then show that z^* ∈𝒵_t with high probability for all t∈{1,…,T}. Therefore, z_t must satisfy ρ(z_t | z_t) ≥ρ(z^* | z^*)=α^*. In
Appendix <ref> we show that this modified version of the algorithm satisfies the decaying estimation error property. This technique allows us to achieve a sublinear Regret(T,α^*) without knowing α^* beforehand.
Similar to the case of fixed α, it is possible to derive a version of Algorithm <ref> that leverages an online regression oracle. (See Appendix <ref> for details.)
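For a finite set of alternatives, one common way to instantiate this optimistic template is sketched below: keep every alternative whose hindsight square loss on the observed rewards is within R of the best, and among those pick the one with the largest self-evaluation. The evaluation function rho, the candidate list, and the interaction history are assumed to be supplied by the caller, and the exact form of the confidence set is a simplification of the construction used in the algorithm, not a verbatim transcription of it.

```python
def optimistic_query(cands, rho, history, R):
    """Pick the next query from a finite candidate set.
    history is a list of (z_i, r_i) pairs observed so far."""
    if not history:
        return max(cands, key=lambda z: rho(z, z))
    losses = [sum((rho(z_i, z) - r_i) ** 2 for z_i, r_i in history) for z in cands]
    best = min(losses)
    confidence_set = [z for z, L in zip(cands, losses) if L <= best + R]
    # optimism: largest self-evaluation within the confidence set
    return max(confidence_set, key=lambda z: rho(z, z))
```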
§ STATISTICAL QUERIES
In this section we consider the statistical query (SQ) model, as defined in Example <ref>.
In particular, we study the connection between our generalized framework and SQ learning, showing specifically that the dimension can be used to recover generalization bounds based on a known combinatorial parameter that characterizes SQ learning, called the strong SQ dimension.
There are several notions of such a dimension
<cit.>.
Here we focus on the one due to <cit.>:
For a fixed distribution D over , the strong SQ dimension of a hypothesis class ⊆̋{±1}^ with respect to some ϵ >0, denoted (,̋ϵ), is the largest number d for which there exist h_1,…, h_d ∈$̋
such that:
* | ⟨ h_i, h_j ⟩|≤ 1 - ϵ for all 1 ≤ i< j ≤ d, and
* | ⟨ h_i, h_j⟩- ⟨ h_i', h_j'⟩| ≤1/d for all 1 ≤ i< j ≤ d, 1 ≤ i'< j' ≤ d,
where⟨ h, h' ⟩ := _x ∼ D [h(x)h'(x)].
The and strong SQ dimensions are closely related to one another in the sense of each providing a kind of polynomial bound on the other, as stated in the next proposition
(see Appendix <ref> for the proof).
Let D be a fixed distribution over ,
and let ⊆̋{±1}^ be a hypotheses class.
For ϵ >0,
let (ϵ)=(,̋ϵ),
and let (ϵ) =_̣(,̋ 1, ϵ).
If (ϵ)≥ 2 then
min(ϵ), 4ϵ^2 ((ϵ))^2≤(ϵ)
≤max(ϵ/4),
4ϵ^2 ((ϵ/4)+1)^2
.
Similarly, if (4ϵ)≥ 2 then
min(4ϵ), √((4ϵ))/8ϵ≤(ϵ)
≤max(ϵ), √((ϵ)+1)/2ϵ.
We next give a lower bound based on the strong SQ dimension, which together with Proposition <ref> will allow us to lower bound sample complexity of any interactive estimation algorithm in the SQ setting in terms of the dimension.
Let ϵ > 0, and let ⊆̋{± 1}^ be a hypothesis class with strong SQ dimension =(,̋ 2ϵ) ≥ 11.
Let be any interactive estimation algorithm with the property that for any
target h^* ∈$̋,outputs anϵ-approximation toh^*with probability at least2/3using at mostmqueries. Thenm > √()/12.
The proof relies on a reduction to a lower bound of <cit.>. However, the lower bound of <cit.> holds within an SQ model that differs from ours, in that it allows adversarial query responses. Therefore, we first need to show how to obtain a learning algorithm'that can be used with an adversarial oracle from an interactive estimation algorithmthat uses an unbiased stochastic query oracle (as we assume in this work). To do this, we apply the reduction technique developed by <cit.>.
(See Appendix <ref> for the full proof and additional details.)
Combining Theorem <ref> and Proposition <ref> yields a lower bound on the sample complexity of interactive estimation in the SQ setting, for a sufficiently smallϵ,
in terms of the dimension:
Let ϵ>0, and let ⊆̋{± 1}^ be a hypothesis class with strong SQ dimension (,̋ 2ϵ) ≥ 11.
Let (ϵ) = _̣(,̋1,ϵ).
Assume
ϵ≤ 1/2√((ϵ)).
Let be any interactive estimation algorithm with the property that for any
target h^* ∈$̋,outputs anϵ-approximation toh^*with probability at least2/3using at mostmqueries.
Thenm > √((ϵ))/12.
§ BANDITS
In this section we focus on the bandits setting described in Example <ref>.
We study the relationship between the dimension and the eluder dimension <cit.>, a common combinatorial dimension for bounding regret of bandit algorithms. We show that eluder dimension can be used to upper bound the dimension, and we also highlight the cases when dimension leads to a tighter analysis.
Throughout this section we follow the setup introduced in Example <ref>. We consider an action set, a classof reward functionsf:→[-1,1], and a target reward functionf^*∈. We map this to our setting by considering the set of alternatives=×, evaluation function(f,a)(f', a')=f'(a)and the target(f^*,a^*),
wherea^*=_a∈ f^*(a).
§.§ Comparison with eluder dimension
We start by describing the relationship between our dimension and the eluder dimension.
Following <cit.>, we defineϵ-dependence andϵ-eluder dimension as follows:
An action a ∈𝒜 is ϵ-dependent on actions { a_1, …, a_n}⊆𝒜 with respect to ℱ if any pair of functions f, f' ∈ℱ satisfying √(∑_i=1^n (f(a_i) - f'(a_i))^2 )≤ϵ also satisfies |f(a)-f'(a)|≤ϵ. Furthermore, an action a is ϵ-independent of {a_1,…, a_n} with respect to ℱ if it is not ϵ-dependent on {a_1, …, a_n}.
The ϵ-eluder dimension (, ϵ) is the length d of the longest sequence of elements in such that every element is ϵ-independent of its predecessors. Moreover, the monotone eluder dimension is defined as
(, ϵ)max_ϵ' ≥ϵ(, ϵ').
The next theorem shows that the dimension is upper bounded by the eluder dimension (see Appendix <ref> for a proof):
Let = ×, =, ϵ>0, α≤α^*. Then
_( , α, 3ϵ/2)
≤
9 (,ϵ).
Nevertheless, as the next example shows, the eluder dimension can be arbitrarily large, while the dissimilarity dimension remains constant. In this example, the action set is a circle in ℝ^2, that is, 𝒜={a∈ℝ^2 : ‖a‖=1}. We fix two open semicircles U_0,U_1⊆𝒜 with positive x and y coordinates, respectively, and
for any N ∈ℕ and ϵ>0, construct a function class ℱ_N,ϵ containing all the functions f: 𝒜→ [-1,1] obtained by the following process.
First, pick one of the semicircles U_j and any N points from U_j. On each of these points, f can equal either +ϵ or -ϵ. Everywhere else in U_j, f equals zero, and everywhere outside U_j, it equals the linear function a↦⟨ w, a⟩ parameterized by some w∈𝒜∖ U_j.
Thus, the functions are constructed to be “simple” (namely, linear) near the optimal action,
but complex far from it. The eluder dimension is large to capture overall complexity, whereas the dissimilarity dimension is small to capture the simplicity near the optimum.
(See Appendix <ref> for
the formal construction of_N,ϵand
the proof of Proposition <ref>.)
Let ϵ∈ (0, 1/2), N ∈ℕ and consider the action set =. Then, there is a function class _N,ϵ⊆ [-1, 1]^, such that for _N, ϵ_N, ϵ×, =, it holds that
_̣(_N, ϵ, 1, ϵ) ≤ 16, but the eluder dimension is lower bounded as
(_N,ϵ,ϵ)≥ N.
Thus, our regret bound based on the dissimilarity dimension implies that (optimistic) least squares algorithms have a regret independent of N. The same analysis with the eluder dimension <cit.> yields a regret bound scaling polynomially with N.
This shows that in the cases when the function classes are simple near the optimum, but complex far from it, the dissimilarity dimension can better capture the statistical complexity of bandit optimization
than the eluder dimension.
§.§ Dissimilarity dimension bounds
We next derive dimension bounds for several standard bandit classes. Existing bounds on eluder dimension can be used to immediately bound the dissimilarity dimension, but in several cases we are able to obtain tighter bounds.
We first consider linear bandits.
Let 𝔹_n={x∈ℝ^n : ‖x‖≤ 1} be the unit ball in ℝ^n.
Actions are chosen from a set 𝒜⊆𝔹_n;
the reward function class is ℱ^lb = {f_θ : θ∈Θ},
where Θ⊆𝔹_n and f_θ(a) = ⟨θ, a⟩.
The corresponding set of alternatives is denoted 𝒵^lb = ℱ^lb×𝒜. In this case we obtain the following bound (see Appendix <ref> for a proof):
Let ^lb be as defined above, let =, and let ϵ>0, α≤α^*. Then _̣(^lb, α, ϵ) ≤ 4n+3.
Moreover, when α=1, then _̣(^lb, α, ϵ) ≤ 2n+1.
The proof proceeds by deriving an upper bound as well as a lower bound on the rank of the matrix M with entries M_ij=ρ(z_i | z_j)-c obtained from elements z_1,…,z_d that satisfy the dimension condition for d = 𝔡(𝒵, α, ϵ) with a scalar c. The upper bound on the rank is n+1, and the lower bound is d/4 (which can be tightened to d/2 when ρ is symmetric). The upper bound is obtained by basic linear algebra and the lower bound from a standard result on ranks of perturbed identity matrices <cit.>.
Combining these bounds then yields the claim of Theorem <ref>.
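The upper-bound half of this argument is easy to sanity-check numerically: for any vectors θ_1,…,θ_d, a_1,…,a_d ∈ ℝ^n and scalar c, the matrix with entries ⟨θ_i, a_j⟩ has rank at most n, and subtracting c from every entry raises the rank by at most one. A minimal numpy check with arbitrary random unit vectors (purely illustrative, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 3, 10, 0.2
theta = rng.normal(size=(d, n)); theta /= np.linalg.norm(theta, axis=1, keepdims=True)
a = rng.normal(size=(d, n));     a /= np.linalg.norm(a, axis=1, keepdims=True)

K = theta @ a.T                   # K_ij = <theta_i, a_j>
M = K - c                         # M_ij = <theta_i, a_j> - c
print(np.linalg.matrix_rank(K))   # at most n
print(np.linalg.matrix_rank(M))   # at most n + 1
```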
Similar to existing bounds on eluder dimension <cit.>, our bound in Theorem <ref> is linear inn. However, the eluder dimension bound has an additional dependence on1/ϵ, while our bound does not.=-1
Next, we consider generalized linear model (GLM) bandits. Similar to linear bandits, the action set is 𝒜⊆𝔹_n, but the function class includes a nonlinearity. Specifically, we are provided with a function g:ℝ→ℝ that is differentiable and strictly increasing, and consider the function class ℱ^glm = {f_θ : θ∈Θ} where Θ⊆𝔹_n and f_θ(a)=g(⟨θ, a⟩).
Furthermore, we assume that there are h̲, h̄ >0 such that for all a∈𝒜, θ∈Θ, we have h̲ ≤ g'(⟨θ, a⟩) ≤ h̄. Define r = h̄/h̲. We again denote 𝒵^glm = ℱ^glm×𝒜.
Using an existing bound on the eluder dimension for GLM bandits (<cit.>, Proposition 7) and the fact that our dimension is bounded by the eluder dimension (Theorem <ref>), we obtain the following bound (see Appendix <ref> for a proof):
Let ^glm be as defined above, let =, and let ϵ>0, α≤α^*. Then _̣(^glm, α, ϵ) ≤ O(n r^2 log(h/ϵ)).
By considering a different proof technique, along the lines of Theorem <ref>, it might be possible to tighten this bound. We leave this extension for future work.
Next, we consider a bandit setting that is similar to GLMs, but in this case the non-linearity
is provided by the non-differentiable rectified linear unit (ReLU) activation function(x) = max{x, 0}. We consider the action set=_n, and the set of reward functions^consisting of all functions of the formf_,b() = (⟨, ⟩ - b)for some∈_nandb ∈ [0,1). The subset of^with a fixed value ofbis denoted^_b, and we consider the set of alternatives^_b = ^_b ×_n.
Unlike the classes considered above, this setting can be shown to be challenging to learn in the general case.
Indeed, it turns out that eluder dimension (as well as a related measure called star dimension) is growing at least exponentially withn<cit.>. The same lower bound can be shown for the dimension by a similar proof technique. The following theorem also provides an exponential upper bound, showing that in certain regimes the exponential dependence is tight (see Appendix <ref> for a proof):
Let ^ be as defined above, let =, and let ϵ,b>0 such that
b≤ 1-ϵ. Then _̣(^_b, 1-b, ϵ) = O ϵ^-n/2, and _̣(^_1-ϵ, ϵ, ϵ) = Ωϵ^-n/2.
We note that previous work (<cit.>, Theorem 5.1) has shown that for a function class of one-layer neural networks with ReLU activations, obtaining sublinear regret requiresT = Ω(ϵ^-(n-2)).
§ CONCLUSION
In this paper, we have introduced a new model for interactive estimation and proposed a new combinatorial dimension, called dissimilarity dimension, to study the hardness of learning in this model. In (stochastic, correlational) statistical query learning, our dimension is polynomially related
to the strong SQ dimension. In bandits, our dimension is upper bounded by the eluder dimension, and there are examples where the dissimilarity dimension leads to much tighter regret bounds.
While this work provides an initial investigation of the dissimilarity dimension, many open questions remain. For example, our regret bound for the general setting scales asd^1.25. Is it possible to tighten this to linear dependence, as is the case, for example, for eluder dimension? On the algorithmic side, we currently require solving a least squares problem of sizetin iterationt. Although we also introduce an algorithm that leverages an online regression oracle (see Appendix <ref>), the oracle-based approach still requires solving a least squares problem (on the data smoothed by the oracle). Is it possible to derive dissimilarity-dimension-based regret bounds directly for the predictions produced by the oracle? Ultimately, we hope investigations of relationships between dissimilarity dimension and related notions may help us understand the hardness of learning in interactive settings.
plainnat
§ MISSING PROOFS OF SECTION <REF>
§.§ Analysis of Least Squares Algorithms (Algorithms <ref> and <ref>)
Our analysis relies on the following variant of
Freedman’s inequality <cit.> (see <cit.> and <cit.>).
Let R >0 and let X_1, …, X_n
be a sequence of real-valued random variables, such that for all i ∈ [n] it holds that X_i ≤ R and [ X_i X_1,…,X_i-1]=0. For any δ∈ (0,1), and η∈ (0,1/R), with probability at least 1-δ,
∑_i=1^n X_i ≤η∑_i=1^n[ X_i^2 X_1,…,X_i-1] + ln(1/δ)/η.
Next we define anϵ-cover of a set, that will be used in the bound of Theorem <ref>.
Let ψ be the pseudometric over the set defined, for any z_1,z_2 ∈, as
ψ(z_1, z_2) =
sup_z ∈ (z z_1) - (z z_2).
We say a set N ⊆ is an ϵ-cover of with respect to ψ
if for every z ∈ there exists some
z' ∈ N such that ψ(z, z') ≤ϵ. We denote by 𝒩(, ϵ) the minimum cardinality of
any ϵ-cover of .
For example, in the case of linear bandits (see Section <ref>) when = ^lband = , it can be shown that𝒩(^lb, ϵ)is upper bounded by theℓ_2-covering number of then-dimensional unit ball. This is because for anyz, z_1, z_2 ∈Θ×𝒜,
ρ(z z_1 ) - ρ(z z_2)
=
⟨_1, ⟩ - ⟨_2, ⟩≤_1 - _2≤_1 - _2.
The bound on𝒩(^lb, ϵ)now follows because
theℓ_2-covering number of the unit ball with radiusϵisO(3/ϵ)^n(see, for example, Lemma D.1 of <cit.>).
We next show that Algorithm <ref> satisfies the decaying estimation error property withC_T,δthat scales logarithmically with the covering number with respect toψ.
Consider the setting from Section <ref>, where the learner sequentially issues the queries z_1, …, z_T and receives responses r_1, …, r_T.
Assume there is β≥ 0 such that r_t - [r_t z_t]≤β for all t, and β'≥ 2β such that for all z,z',
(z z') - (z z^*)≤β'.
Let be a set of alternatives such that z^* ∈ and let ẑ_t be defined as the least squares optimizer,
ẑ_t = _z ∈∑_i=1^t-1(z_i z) - _i^2.
Then, for any sequence of queries z_1,…,z_T∈ (possibly equal to ẑ_1, …, ẑ_T), we have with probability 1-δ, for all t ∈ [T] simultaneously,
∑_i=1^t-1(z_iẑ_t) - (z_i z^*)
^2
≤ C_T,δ, and
z^* ∈z∈: ∑_i=1^t-1(z_i z) - (z_iẑ_t )^2 ≤ C_T,δ,
where C_T,δ = 16ββ'ln
2T𝒩(, β'/T) / δ.
For i=1,…,T, let h_i=(z_1,r_1,…,z_i-1,r_i-1,z_i) denote the history of interaction up to the query z_i, but excluding the response r_i, and let
ξ_i=_i-( z_i z^*). In the interactive estimation setting, we then have [ξ_i h_i] = 0, and by the lemma assumption, [ξ_i^2 h_i] ≤β^2.
Since ẑ_t is the minimizer of the least squares loss up to time t,
we have
∑_i=1^t-1(z_iẑ_t) - _i^2
≤∑_i=1^t-1(z_i z^*) - _i^2,
which can be rewritten, substituting _i = (z_i z^*) + ξ_i, as
∑_i=1^t-1(z_iẑ_t) - (z_i z^*) - ξ_i
^2
≤∑_i=1^t-1ξ_i^2.
Therefore, by re-arranging terms, we get
∑_i=1^t-1(z_iẑ_t ) - (z_i z^*)
^2
≤
2∑_i=1^t-1ξ_i(z_iẑ_t ) - (z_i z^*)
.
Set ϵ_1=β'/T, and let N be a minimal ϵ_1-cover of with respect to the pseudometric ψ (see Eq. <ref>). Furthermore, let
ẑ_t^ϵ∈ N be an element of this cover that is ϵ_1-close to ẑ_t (with respect to ψ). Then,
∑_i=1^t-1(z_iẑ_t ) - (z_i z^*)
^2
≤
2∑_i=1^t-1ξ_i(z_iẑ_t ) - (z_i z^*)
=
2∑_i=1^t-1ξ_i(
(z_iẑ_t ) - (z_iẑ_t^ϵ)
+ (z_iẑ_t^ϵ) - (z_i z^*)
)
≤
2tβϵ_1
+
2∑_i=1^t-1ξ_i(z_iẑ_t^ϵ) - (z_i z^*)
,
where the first inequality follows from Eq. (<ref>), and the
last inequality follows because ξ_i≤β and ẑ_t^ϵ is ϵ_1-close to ẑ_t.
Now, for any z ∈ N and i ∈ [T], define
K^z_i =
ξ_i (z_i z) - (z_i z^*)
.
Since 𝔼[ξ_i | h_i] = 0, we have, for any fixed z∈ N, 𝔼[K^z_i | h_i]=0. This means that for any fixed z∈ N, K_1^z,…,K_T^z is a martingale difference sequence.
By the lemma assumptions, |K^z_i| ≤ββ'. Also,
(K^z_i)^2
h_i
≤β^2
(z_i z) - (z_i z^*)
^2
h_i
=
β^2
(z_i z) - (z_i z^*)^2.
Thus, by Freedman's inequality (Lemma <ref>) with η=1/(4ββ') and δ'=δ/TN,
we obtain that for any fixed z∈ N and t ∈ [T], with probability at least 1-δ',
∑_i=1^t-1ξ_i(z_i z) - (z_i z^*)
≤1/4ββ'∑_i=1^t-1β^2(z_i z) - (z_i z^*)
+ 4ββ'lnTN/δ
=
β/4β'∑_i=1^t-1(z_i z) - (z_i z^*)
^2
+ 4ββ'lnTN/δ.
Taking a union bound over all z∈ N and t∈ [T], we obtain that
Eq. (<ref>) holds with probability at least 1-δ simultaneously for all z∈ N and t ∈ [T]. Henceforth, we assume that we are in the event
when Eq. (<ref>) holds for all z∈ N and t ∈ [T].
Applying the bound of Eq. (<ref>)
with z=ẑ_t^ϵ
to the sum on the right-hand side of Eq. (<ref>) then yields
∑_i=1^t-1(z_iẑ_t ) - (z_i z^*)
^2
≤
2tβϵ_1
+
β/2β'∑_i=1^t-1(z_iẑ_t^ϵ) - (z_i z^*)
^2
+
[t]
8ββ'lnTN/δ.
Using the inequality (a+b)^2≤ 2a^2+2b^2, which holds for any a,b∈, and the fact that ẑ_t^ϵ and ẑ_t are ϵ_1-close, we obtain, for every i=1,…,t-1,
(z_iẑ_t^ϵ) - (z_i z^*)
^2
=
(z_iẑ_t^ϵ)
- (z_iẑ_t)
+
(z_iẑ_t)
- (z_i z^*)
^2
≤
2ϵ_1^2
+
2(z_i z^*)
- (z_iẑ_t)
^2.
Plugging this into the right-hand side of Eq. (<ref>) yields
∑_i=1^t-1(z_iẑ_t ) - (z_i z^*)
^2
≤
2tβϵ_1
+
β/β'tϵ_1^2
+
β/β'∑_i=1^t-1(z_iẑ_t) - (z_i z^*)
^2
+
[t]
8ββ'lnTN/δ
≤
2tβϵ_1
+
tβϵ_1^2/β'
+
1/2∑_i=1^t-1(z_iẑ_t) - (z_i z^*)
^2
+
[t]
8ββ'lnTN/δ,
where the last inequality follows by the assumption that β' ≥ 2β.
Then, by re-arranging terms and multiplying by 2, we get
∑_i=1^t-1(z_iẑ_t ) - (z_i z^*)
^2
≤
4tβϵ_1
+
2tβϵ_1^2/β'
+
16ββ'lnTN/δ.
Recall that we set ϵ_1 = β'/T, and N is a minimal ϵ_1-cover of , so N=𝒩(, β'/T). Plugging these values in the previous equation, we thus obtain
that with probability at least 1-δ, for all t ∈ [T],
∑_i=1^t-1(z_iẑ_t ) - (z_i z^*)
^2
≤
4ββ'
+
2ββ'/T
+
16ββ'lnT𝒩(, β'/T)/δ
≤
16ββ'ln2T𝒩(, β'/T)/δ,
where the last inequality follows because 4+2/T≤ 16ln 2 for T≥ 1.
Finally, when Eq. (<ref>) holds, we also have
z^* ∈
z∈: ∑_i=1^t-1(z_i z) - (z_iẑ_t )^2
≤
16ββ'ln2T𝒩(, β'/T)/δ.
Considering Algorithm <ref> and using Theorem <ref> with 𝒵=𝒵_α, z_t = ẑ_t, β=2 and β'=4 then immediately yields the following corollary
(α-large self-evaluations follow because ẑ_t∈𝒵_α):
Consider the setting from Section <ref> with a set of alternatives and an evaluation function .
Let α be an optimality level such that
𝒩(_α, 4/T)=e^o(T). Then Algorithm <ref> has α-large self-evaluations and satisfies
the decaying error property with
C_T,δ = 128ln
2T𝒩(_α, 4/T) / δ.
Similarly, Theorem <ref> also implies that Algorithm <ref> satisfies the decaying error property as well asα^*-large self-evaluations, althoughα^*is not known:
Consider the setting from Section <ref> with a set of alternatives and an evaluation function ,
and assume that 𝒩(, 4/T)=e^o(T). Then Algorithm <ref>
with R=128ln
2T𝒩(, 4/T) / δ
has α^*-large self-evaluations and satisfies
the decaying error property with
C_T,δ = 4R=512ln
2T𝒩(, 4/T) / δ.
We apply Theorem <ref> with =, β=2 and β'=4. Our choice of R in Algorithm <ref> coincides with the value of C_T,δ appearing in Theorem <ref>, and therefore the theorem implies that z^*∈_t with probability at least 1-δ for all t∈[T]. In that case the queries z_t issued by the algorithm satisfy
(z_t z_t)≥(z^* z^*)=α^*
and thus the algorithm has α^*-large self-evaluations.
For the second part, the triangle inequality implies that
with probability at least 1-δ for all t∈[T],
√(∑_i=1^t-1(z_i z) - (z_i z_t )^2
) ≤√(∑_i=1^t-1(z_i z) - (z_iẑ_t )^2
)
+
√(∑_i=1^t-1(z_iẑ_t) - (z_i z_t )^2
)
≤√(R)+√(R),
where the bound on the first term on the right-hand side follows by Theorem <ref> and the bound on the second term by the fact that z_t∈_t.
§.§ Proof of Theorem <ref>
The theorem follows immediately from Corollary <ref>,
because 𝒩(𝒵_α, ϵ) ≤ |𝒵_α| ≤ |𝒵| < ∞ for any α and ϵ.
§.§ Online Regression Oracles
We assume access to an online regression oracle, which solves a regression problem over a function class Φ={ϕ_z : z∈𝒵} indexed by z∈𝒵, where ϕ_z:𝒵→ℝ is defined as ϕ_z(z')=ρ(z' | z) for all z'∈𝒵; that is, the functions ϕ_z evaluate ρ in its first argument.
The oracle operates in the following protocol: In each time step, the oracle receives an observation z_t, produces a prediction ŷ_t∈ℝ, and finally receives a response r_t and incurs square loss (ŷ_t-r_t)^2.
We assume that for any T and sequence of observations and responses (even if generated adaptively), the oracle satisfies the following regret bound:
∑_t=1^T (ŷ_t- r_t)^2 - inf_z ∈𝒵 ∑_t=1^T ( ρ( z_t | z) - r_t )^2 ≤ Reg(T),
where Reg(·) is a non-decreasing sublinear function (that typically also depends on various properties of ρ, 𝒵, and the range of responses r_t).
For many function classes,
there are well-known constructions of online regression oracles that satisfy Eq. (<ref>) <cit.>.
For example, if Φ is finite, there are oracles with Reg(T) = O(ln|Φ|), and for parametric classes, such as linear functions, there are oracles with Reg(T) = O(d log(T/d)). More examples can be found in Section 2.2 of <cit.>.
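For intuition, the sketch below instantiates this protocol in the linear case with online ridge regression (recursive least squares): at each step the oracle receives a feature representation of z_t, predicts, and only then observes the response and updates. The feature map, regularization constant, and class name are our own illustrative choices; any oracle satisfying the regret bound above can be used instead.

```python
import numpy as np

class OnlineRidgeOracle:
    """A sketch of an online regression oracle for linear evaluations:
    responses are regressed on feature vectors via recursive least squares."""

    def __init__(self, dim, lam=1.0):
        self.A_inv = np.eye(dim) / lam    # inverse of the regularized Gram matrix
        self.b = np.zeros(dim)

    def predict(self, x):
        return float(x @ self.A_inv @ self.b)

    def update(self, x, r):
        # Sherman-Morrison rank-one update of the inverse Gram matrix
        Ax = self.A_inv @ x
        self.A_inv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
        self.b += r * x

# protocol: observe the feature vector, predict, then receive the response and update
oracle = OnlineRidgeOracle(dim=3)
for x, r in [(np.array([1.0, 0.0, 0.0]), 0.5), (np.array([0.0, 1.0, 0.0]), -0.2)]:
    print(oracle.predict(x))   # prediction made before seeing the response
    oracle.update(x, r)
```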
We now analyze Algorithm <ref> under the assumption of access to an online regression oracle. This algorithm takes as input an online regression oracle. Algorithm <ref> can also be modified using the optimistic least squares template of <cit.> to handle the case whenα^*is unknown. This is done by replacing step <ref> of Algorithm <ref> with the following two steps:
_t+1 =z∈: [t]∑_i=1^t ( (z_i z) - _i)^2 ≤ R,
z_t+1 = _z∈_t+1(z z),
whereR = 8 T + 64βmaxβ, β'lnT/δandβ, β'are defined as in Lemma <ref>. The results of Lemma <ref> justify the validity of these choices (using a similar reasoning as in Corollary <ref>) and
imply that Algorithm <ref> satisfies the decaying estimation error property (Definition <ref>), provided that the regression oraclesatisfies the regret bound of Eq. (<ref>).
Consider the setting defined in Section <ref> with α≤α^*, and assume there are β, β' ≥ 0 such that |r_t - [r_t z_t]|≤β for all t and there is β'≥ 2β s.t. for all z ∈ and ρ∈Γ_z, | ρ - (z z^*) | ≤β' where Γ_z ⊂ℝ is the space of plausible responses of for input z.
The sequence of queries z_1, …, z_T as defined in Algorithm <ref> satisfies with probability at least 1-δ for all t ∈ [T] simultaneously,
∑_i=1^t( (z_i z_t+1) - (z_i z^*) )^2 ≤ C_T,δ and z^* ∈ z∈_α: ∑_i=1^t ( (z_i z) - _i)^2 ≤ C_T,δ
where C_T,δ = 8T + 64βmaxβ, β'lnT/δ.
Let ξ_i = _i - ( z_i z^*). Recall that [r_i z_i] = (z_i z^*) and therefore, by assumption, |ξ_i| ≤β for all i. By definition, the online regression oracle satisfies
∑_i=1^t (_i - _i)^2 ≤inf_z ∈∑_i=1^t (( z_i z) -_i )^2 + t
(i)≤∑_i=1^t (( z_i z^*) -_i )^2 + t
= ∑_i=1^tξ_i^2 + t.
Inequality (i) holds because z^* ∈. Expanding the LHS,
∑_i=1^t (_i - _i)^2 = ∑_i=1^t(_i - (z_i z^*))^2 - 2ξ_i ( _i - (z_i z^*) ) + ξ_i^2.
Plugging this back into Eq. (<ref>) and rearranging,
we obtain
∑_i=1^t (_i - (z_i z^*))^2≤∑_i=1^t 2ξ_i ( _i - (z_i z^*) ) + t.
For any i ∈ [T] we define:
K_i = ξ_i ( _i - (z_i z^*) )
Observe that
𝔼[ K_i { z_ℓ, _ℓ}_ℓ=1^i] = 0.
Thus K_1, …, K_T is a martingale difference sequence.
Notice that | K_i | ≤ββ' and that
𝔼K_i^2{ z_ℓ, _ℓ}_ℓ=1^i≤β^2 𝔼[( _i - (z_i z^*) )^2{ z_ℓ, _ℓ}_ℓ=1^i] = β^2 ( _i - (z_i z^*) )^2.
Then, by plugging this into Freedman's inequality (Lemma <ref>) with η:=1/4min(1/β^2, 1/ββ') and δ':=δ/T, we get that for any fixed t ∈ [T], with probability at least 1-δ,
∑_i=1^tξ_i ( _i - (z_i z^*) ) ≤1/4∑_i=1^t( _i - (z_i z^*) )^2 + 4 βmaxβ, β'lnT/δ.
Plugging Eq. (<ref>) back into Eq. (<ref>) and rearranging terms yields
∑_i=1^t (_i - (z_i z^*))^2 ≤ 2 t
+ 16βmaxβ, β'lnT/δ
with probability at least 1-δ for all t ∈ [T]. Thus we conclude that with probability at least 1-δ, for all t ∈ [T],
z^* ∈ z∈_α: ∑_i=1^t ( (z_i z) - _i)^2 ≤ C_T,δ.
By the triangle inequality,
√(∑_i=1^t ( (z_i z^* ) - (z_i z_t+1))^2 )≤√(∑_i=1^t ( (z_i z^* ) - _i)^2 ) + √(∑_i=1^t ( _i - (z_i z_t+1))^2 ).
Since by definition z_t+1 = _z∈_α∑_i=1^t((z_i z) - _i)^2, we have
∑_i=1^t((z_i z_t+1) - _i)^2 ≤∑_i=1^t((z_i z^*) - _i)^2.
Substituting back into the triangle inequality above,
√(∑_i=1^t ( (z_i z^* ) - (z_i z_t+1))^2 )≤ 2√(∑_i=1^t ( (z_i z^* ) - _i)^2 ),
implying
∑_i=1^t ( (z_i z^* ) - (z_i z_t+1))^2 ≤ 4 ∑_i=1^t ( (z_i z^* ) - _i)^2.
Plugging Eq. (<ref>) on the right-hand side yields
∑_i=1^t ( (z_i z^* ) - (z_i z_t+1))^2 ≤ 8 t + 64 βmaxβ, β'lnT/δ.
The result follows by using the monotonicity of t.
§.§ Proof of Lemma <ref>
The proof uses Turán's Theorem <cit.>, a standard result from extremal graph theory that bounds the number of edges of a graph that does not contain a clique of a given size:
Let G=(V,E) be an undirected graph without self-loops and whose largest clique is of size at most d. Then
E≤1-1/dV^2/2.
We now turn to the proof of Lemma <ref>. First note that ifϵ≥ 1+αthen no query isϵ-bad, becauseα-ϵ≤ -1≤(z z^*)for everyz∈,
and therefore the lemma holds. In the remainder of the proof, we assume that0<ϵ<1+α.
Consider the queriesz_1,…,z_Tand their corresponding values relative toz^*, denoted asv_t=(z_t z^*)fort∈[T]. A queryz_tisϵ-bad if its corresponding valuev_tis in the intervalI=[-1,α-ϵ). The proof proceeds by partitioning the intervalIinto subintervals and separately bounding the number of valuesv_tin each subinterval.
To define these subintervals, letq=1+1/√(d), and consider the sequence of suboptimality gapsϵ_i=q^i-1ϵfori=1,…,n+1, where
n=log_q1+α/ϵ.
The gapsϵ_iform an increasing sequenceϵ, qϵ, q^2ϵ,…such that the last element satisfies
ϵ_n+1 =q^nϵ≥1+α/ϵϵ=1+α.
Using these gaps we define intervalsI_i=[α-ϵ_i+1,α-ϵ_i)fori=1,…,n.
Sinceϵ_n+1≥ 1+α, the unionI_1∪…∪ I_n=[α-ϵ_n+1,α-ϵ)covers the intervalI. We bound the number of valuesv_tin each intervalI_i.
LetS_ibe the set of query indices with values inI_i, that isS_i=t∈[T]: (z_t z^*)∈ I_i, letm_i=S_i, and assume thatm_i≥ 2(the casem_i≤ 1will be dealt with later).
Furthermore, letc_i=α-(ϵ_i+1+ϵ_i)/2be the midpoint of the intervalI_i. Since the width of the intervalI_iisϵ_i+1-ϵ_i=(q-1)ϵ_i=ϵ_i/√(d), we obtain
(z_t z^*)-c_i≤ϵ_i/2√(d)
for allt∈ S_i.
Letd_i=_̣(, α, ϵ_i)be the (non-monotonic) dimension with respect to the suboptimality gapϵ_i. Sinceα≤α^*andϵ_i≥ϵ, we have1≤ d_i≤ d. We construct an upper bound onm_i, exploiting the fact thatd_iis the dissimilarity dimension with respect toϵ_i.
In the rest of the proof we refer to a pair of queries with indicess,t∈ S_isuch thats<tas dissimilar if
(z_s z_t)-c_i≤[t]ϵ_i/√(d_i).
This is exactly the property appearing in the definition of the dissimilarity dimension with respect toϵ_i, and so there cannot be more thand_iqueries such that every pair is dissimilar (note that all the queriesz_tsatisfy(z_t z_t)≥αthanks toα-large self-evaluations).
The derivation of the bound onm_iproceeds in several steps. First, we identify pairs of dissimilar queries and construct a graph where each edge corresponds to a dissimilar pair.
Second, we use Turán's Theorem (Theorem <ref>) to upper bound the number of such pairs, using the fact that the graph cannot contain a clique of size greater thand_i. Finally, using the bound on the number of dissimilar pairs, we boundm_i.
To start, lett_1<t_2<…<t_m_ibe the query indices included inS_i, and letk∈2,…,m_i. Consider a uniform distribution overℓ∈[k-1]. Then by Markov's inequality and the decaying estimation error property, we obtain
1/k-1∑_ℓ=1^k-1(z_t_ℓ z_t_k) - (z_t_ℓ z^*)^2
≥ϵ_i^2/4d ≤1/k-1∑_ℓ=1^k-1(z_t_ℓ z_t_k) - (z_t_ℓ z^*)^2
·4d/ϵ_i^2
≤1/k-1∑_s=1^t_k-1(z_s z_t_k) - (z_s z^*)^2
·4d/ϵ_i^2
≤C_T,δ/k-1·4d/ϵ_i^2.
Multiplying byk-1, we therefore obtain
s∈ S_i:
s<t_k
and (z_s z_t_k) - (z_s z^*)≥ϵ_i/2√(d)≤4dC_T,δ/ϵ_i^2,
and summing across allk∈2,…,m_ithen yields
s,t∈ S_i:
s<t
and (z_s z_t) - (z_s z^*)≥ϵ_i/2√(d)≤
m_i4dC_T,δ/ϵ_i^2.
We next construct an undirected graph without self-loops,G_i=(V_i,E_i). The vertex set of the graph isV_i=S_i. The edge set is defined to be
E_i=t,s⊆ S_i:
s<t
and (z_s z_t) - (z_s z^*)
<
ϵ_i/2√(d).
By comparing with Eq. (<ref>), we obtain
E_i ≥m_i(m_i-1)/2
-
m_i4dC_T,δ/ϵ_i^2.
Note that any pair of verticess<tconnected by an edge corresponds to a dissimilar pair of queries:
(z_s z_t) - c_i ≤(z_s z_t) - (z_s z^*)
+
(z_s z^*)-c_i
<
ϵ_i/2√(d)
+
ϵ_i/2√(d)
≤ϵ_i/√(d_i),
where the first inequality is the triangular inequality, the second inequality follows by
combining the definition ofE_iand Eq. (<ref>), and the final one is from the fact thatd_i≤ d. From the definition of the dissimilarity coefficient, the largest clique inG_iis of size at mostd_i. Using Turán's Theorem,
we thus must have
E_i≤1-1/d_i·m_i^2/2≤1-1/d·m_i^2/2.
Combining with the lower bound onE_ifrom Eq. (<ref>), we obtain
m_i(m_i-1)/2
-
m_i4dC_T,δ/ϵ_i^2≤1-1/d·m_i^2/2.
Dividing bym_i, multiplying by2d, and rearranging then yields
m_i
≤
2d1/2+4dC_T,δ/ϵ_i^2
=
d+8d^2C_T,δ/ϵ_i^2.
We have originally assumed thatm_i≥ 2, but the bound that we have just derived also holds whenm_i≤ 1(becaused≥ 1).
To complete the proof it suffices to sum up the upper bounds onm_iacrossi=1,…,n:
∑_i=1^n m_i
=
∑_i=1^n d+8d^2C_T,δ/ϵ^2·1/q^2^i-1
≤
nd + 8d^2C_T,δ/ϵ^2·1/1-(1/q^2).
To boundn, we use the fact thatα≤ 1, the inequalityln(1+x)≥x/1+x(which holds forx≥ 0), and the fact thatd≥ 1:
n
≤ 1 +log_q(2/ϵ)
= 1+ln(2/ϵ)/ln1+1/√(d)≤ 1+[ln(2/ϵ)]·1+1/√(d)/1/√(d)
= 1+(√(d)+1)ln(2/ϵ)
≤ 2ln 2 + 2√(d)ln(2/ϵ)
≤ 2√(d)ln(4/ϵ).
Also,
1-1/q^2
=
1-1/1+2/√(d)+1/d≥
1-1/1+2/√(d)
=
2/√(d)/1+2/√(d)
=
2/√(d)+2≥2/3√(d).
Plugging these back in Eq. (<ref>) yields
∑_i=1^n m_i
≤
2d^1.5ln(4/ϵ)
+
12d^2.5C_T,δ/ϵ^2,
completing the proof of the main claim of the lemma.
The second claim holds vacuously whenT=0, so assume thatT≥ 1. IfC_T,δ≥ln(2T), and using the fact that(ln x)≤ xand2≤3ln2, we can write
2d^1.5ln(4/ϵ)
=
d^1.5ln(16/ϵ^2)
≤16d^1.5/ϵ^2≤24(ln 2)d^1.5/ϵ^2≤24d^2.5/ϵ^2·ln(2T)
≤24d^2.5C_T,δ/ϵ^2,
which yields the first part of the second claim. The second part is immediate by plugging inC_T,δ=0in the main claim.
§.§ Useful lemmas
In this subsection we prove two lemmas that will be needed for the proofs of the main results (Theorems <ref> and <ref>) in Appendices <ref> and <ref>. They both rely on a standard technique of bounding a sum by a definite integral:
Let f:→ be a non-increasing function and T≥ 1. Then
∑_t=1^T f(t)
≤
f(1)+∫_1^T f(t)dt.
The proof is immediate by noting that f(t)≤∫_t-1^t f(t)dt.
In the lemmas below we write_+to denote[0,+∞).
Let q_1, …, q_T be a sequence in _+,
and let κ: _+→_+ be a non-increasing function such that
for all ϵ > 0,
∑_t=1^T 1(q_t ≥ϵ ) ≤κ(ϵ)/ϵ^2.
Then, for any τ≥ 0,
∑_t=1^T q_t ≤ Tτ + 2√(κ(τ) T).
First, since we are only concerned with bounding the sum ∑_t q_t,
we assume without loss of generality that the sequence is in descending order, i.e., q_1 ≥…≥ q_T. Then, for any τ≥ 0,
∑_t=1^T q_t = ∑_t=1^T q_t1( q_t≤τ )+ ∑_t=1^T q_t1( q_t > τ) ≤ Tτ + ∑_t=1^T q_t1( q_t > τ).
Consider any k such that q_k>τ. Since the sequence q_1,…, q_T is non-increasing, we have
k
≤∑_t=1^T 1( q_t≥ q_k) ≤κ(q_k)/q^2_k≤κ(τ)/q^2_k,
where the last inequality follows by the monotonicity of κ. This in turn implies that q_k≤√(κ(τ)/k). Therefore,
∑_t=1^T q_t 1(q_t > τ) ≤∑_t=1^T √(κ(τ)/t).
By Proposition <ref>,
∑_t=1^T1/√(t)≤
1+2√(T)-2√(1)
<2√(T).
Combining Eqs. (<ref>), (<ref>) and (<ref>),
we get
∑_t=1^T q_t ≤ Tτ + 2√(κ(τ)T).
which concludes the proof.
Let a>0, let q_1, …, q_T be a sequence of reals in [0,a],
and let κ: ℝ_+ →ℝ_+ be a non-increasing function such that
for all ϵ∈(0,a],
∑_t=1^T 1(q_t ≥ϵ ) ≤κ(ϵ) ln(a/ϵ).
Then, for any τ≥ 0,
∑_t=1^T q_t ≤ Tτ + a[1+κ(τ)]exp( - 1/κ(τ)).
We follow a similar proof strategy as in Lemma <ref> and start by bounding the sum ∑_t q_t. We assume without loss of generality that the sequence is in descending order, i.e., q_1 ≥…≥ q_T. Then, for any τ≥ 0,
∑_t=1^T q_t = ∑_t=1^T q_t1( q_t≤τ )+ ∑_t=1^T q_t1( q_t > τ) ≤ Tτ + ∑_t=1^T q_t1( q_t > τ).
Consider any k such that q_k>τ. Then
k ≤∑_t=1^T 1( q_t≥ q_k) ≤κ(q_k) ln( a/q_k) ≤κ(τ) ln( a/q_k),
where the last inequality follows by the monotonicity of κ and the fact that ln(a/q_k)≥ 0. This in turn implies that
q_k≤ aexp -k/κ(τ).
Therefore,
∑_t=1^T q_t 1(q_t > τ) ≤∑_t=1^T aexp( -t/κ(τ)).
By Proposition <ref>,
∑_t=1^Texp( -t/κ(τ))
≤exp-1/κ(τ)
-κ(τ)
exp-T/κ(τ)
-
exp-1/κ(τ)
≤
[1+κ(τ)]exp-1/κ(τ).
Combining Eqs. (<ref>), (<ref>) and (<ref>),
we get
∑_t=1^T q_t ≤ Tτ + a[1+κ(τ)]exp( - 1/κ(τ)),
which concludes the proof.
§.§ Proof of Theorem <ref>
Throughout the proof we use the shorthandd_ϵ=_(, α,ϵ),
sod=d_1/T.
The proof proceeds by applying Lemmas <ref> and <ref> to the bounds on the number of bad queries from Lemma <ref>. Specifically, letq_t = [α - ρ(z_t z^*)]_+denote the suboptimality of each queryz_tmade by the algorithm. Then, for anyϵ>0, the number ofϵ-bad queries can be written as∑_t=1^T 1(q_t ≥ϵ).
First consider the caseC_T,δ≥ln(2T). By Lemma <ref>, with probability at least1-δ, the number ofϵ-bad queries is at most36d_ϵ^2.5C_T,δ/ϵ^2. Settingκ(ϵ)=36d_ϵ^2.5C_T,δ, we apply Lemma <ref>, withτ=1/T, to obtain that with probability at least1-δ,
Regret(T,α)
≤∑_t=1^T q_t
≤
1+12 d^1.25√(C_T,δT).
IfC_T,δ=0, then by Lemma <ref>, the number ofϵ-bad queries is at most2d_ϵ^1.5ln(4/ϵ). Settinga=4andκ(ϵ)=2d_ϵ^1.5, we apply Lemma <ref>, withτ=1/T, to obtain
Regret(T,α)
≤∑_t=1^T q_t
≤
1+
4(1+2d^1.5)exp-1/2d^1.5≤
1+12d^1.5,
completing the proof.
§.§ Proof of Theorem <ref>
First, we consider the deterministic setting.
By Lemma <ref>, at most2d^1.5ln(4/ϵ)of queries issued byareϵ-bad. SettingT>2d^1.5ln(4/ϵ)implies that at least one query is notϵ-bad. Thus, returningẑfor which the observed reward is the largest guarantees that( z^*) ≥α - ϵ, as needed.
Next, we prove the result for the caseC_T,δ≥ln(2T).
By Lemma <ref>, with probability at least1-δ/2, there are at most16/9ϵ^2·36d^2.5(C_T,δ/2)queries that are3ϵ/4-bad. SettingT≥64d^2.5(C_T,δ/2)/ϵ^2, implies that at least half of the queries are not3ϵ/4-bad. In the remainder of the proof, we only consider the high-probability event in which this is the case.
Forn_1=⌈log_2(4/δ)⌉the probability that alln_1samples are3ϵ/4-bad is at most(1/2)^n_1≤δ/4.
Forn_2 = ⌈ 128ln(8n_1/δ)/ϵ^2⌉, by applying Hoeffding's inequality and union bound over each of then_1rounds we get that with probability at mostδ/4there is some indexℓ≤ n_1for which|r_t_ℓ - ρ(z_t_ℓ z^*)| > ϵ/8.
Overall, with probability at least1-δwe get that there is at least one indexjof then_1sampled indices that is not3ϵ/4-bad, and that|r_t_ℓ - ρ(z_t_ℓ z^*)| ≤ϵ/8for allℓ=1,…,n_1. Therefore,
r_t_j≥ρ(z_t_j z^*) -ϵ/8 ≥α -3ϵ/4-ϵ/8=α - 7ϵ/8.
For all indiceskthat areϵ-bad we have
r_t_k≤ρ(z_t_k z^*) + ϵ/8 < α - ϵ + ϵ/8 = α - 7ϵ/8.
Thus, for all of theϵ-bad queries we haver_t_k<r_t_j, and so
Algorithm <ref> will not return any of theϵ-bad queries, because it is choosing the index with maximum value ofr_t_ℓ. In other words, the returned queryz_t_ℓ̂satisfies
ρ(z_t_ℓ̂ z^*) ≥α - ϵ.
§ MISSING PROOFS OF SECTION <REF>
First we discuss the connection between our SQ setting and the SQ model of <cit.>. We focus on two aspects in which they appear to differ and explain why these models are equivalent.
Correlational vs general statistical queries.
The restriction of the SQ model in which the oracle may only output the approximate correlation between a query and the target function, termed correlational statistical query (CSQ), was studied by <cit.>. The CSQ oracle can be viewed as providing something akin to a negative distance between the query and the target.
This is equivalent to the learning by distances framework of <cit.>, who defined their model independently of <cit.>.
<cit.> showed that an arbitrary statistical query can be answered by asking two SQs that are independent of the target and two CSQs. That is, in the distribution-dependent learning model (i.e., when the learner has access to the distribution over), correlational queries can simulate general queries.
Adversarial vs statistical noise.
The setting we consider in this work assumes stochastic query responses, similar to
several previous works <cit.>.
On the other hand, the original SQ model <cit.> assumed that
the query oracle can respond with adversarial (rather than statistical) noise, up to a pre-specified tolerance parameter τ > 0. Previous works have shown that the two noise models are equivalent <cit.>.
§.§ Proof of Proposition <ref>
We first prove the first inequality of Eq. (<ref>).
Letϵ>0and letd=(ϵ).
Then there exists a sequenceh_1,…,h_d ∈$̋ satisfying both conditions of Definition <ref>.
Let d' be equal to the leftmost expression
of Eq. (<ref>).
We aim to show (ϵ)≥ d'.
Note that d'≤ d.
Let c be the midpoint between c_min = min_i<j⟨ h_i, h_j ⟩ and c_max = max_i<j⟨ h_i, h_j ⟩. Then c ≤ 1 - ϵ. Moreover, for all i≠ j,
|⟨ h_i, h_j ⟩ - c| ≤1/2 | c_max - c_min| ≤1/2d≤ϵ/√(d')
where the last inequality follows from our choice of d'
(which ensures d'≤ 4(dϵ)^2).
Thus, h_1,…,h_d', the first d' elements of the original sequence of hypotheses, satisfy
Definition <ref>, proving
the claim.
We prove the first inequality of Eq. (<ref>) in a similar way.
Let us re-define d = (4ϵ)
and let d' be equal to the leftmost expression of
Eq. (<ref>).
As before, d'≤ d.
Then there exists a sequence h_1,…,h_d ∈$̋
satisfying the conditions of Definition <ref>
for somec ≤ 1 - 4ϵ.
Then for alli≠ j,⟨ h_i, h_j ⟩≤ c + 4ϵ/√(d)≤ 1 - ϵ, sinced ≥ 2.
Moreover, for alli≠ jandi'≠ j',
|⟨ h_i, h_j ⟩ - ⟨ h_i', h_j'⟩| = |⟨ h_i, h_j ⟩ - c + c - ⟨ h_i', h_j'⟩|
≤8ϵ/√(d)≤1/d',
with the last inequality following from our choice ofd'.
Thus,h_1,…,h_d', the firstd'hypotheses in the original sequence, satisfy
Definition <ref>.
The second inequality of
Eq. (<ref>),
now follows from the first inequality of
Eq. (<ref>),
since if
the second inequality of
Eq. (<ref>)
does not hold then the leftmost expression of
Eq. (<ref>)
must be at least(ϵ), a contradiction.
Likewise,
the second inequality of
Eq. (<ref>),
now follows from the first inequality of
Eq. (<ref>).
§.§ Lower bound setting
Let D be the input distribution over the domain X. For a tolerance parameter τ > 0, 𝒪^adv(τ):= 𝒪^adv_D,h^*(τ) oracle is the oracle that for any query function h ∈$̋, returns a valuev ∈[ μ - τ, μ+ τ], whereμ = _x∼ D[h(x)h^*(x)].
Let D be the input distribution over the domain X. The Sample oracle 𝒪:= 𝒪_D,h^* oracle is the oracle that given any function h ∈$̋,
takes an independent random samplexfromDand returns the valuev = h(x)h^*(x).
We will need the following results for our proof. The first is a reduction from an adversarial noise oracle to a statistical one. Specifically, consider the learning setting defined in Section <ref>,
for a sample oracle𝒪. Letbe a (possibly randomized) algorithm for that setting. The following theorem shows a simulation of𝒪via𝒪^advthe SQ oracle.
Assume that outputs a ϵ-approximation to h^* with probability at least δ, using m samples from
Ø. Then, for any δ' ∈ (0,1/4], there exists a SQ algorithm ' that uses at most m queries to 𝒪^adv(δ'^2/m) and outputs an ϵ-approximation to h^* with probability at least
δ - δ'.
Their result is obtained by simulatingusingØ^advas follows: for any query oftoØ, the response ofØ^advto that query is used as bias for a coin flip, which is then given to the learner as the simulated outcome ofØ. They then prove that the truemsamples ofØand the simulated coin flips are statistically
close by bounding their distributional distance. This implies that the success probability of', the simulated algorithm, is not much worse than that of, the original algorithm.
We note that the result originally stated in <cit.> differs from Theorem <ref> above in two ways. First, it reduces to a variant of𝒪^adv(τ)with a toleranceτ' ∈ [τ, √(τ)]. Thus, it holds for𝒪^adv(τ)as well. Second, it is phrased in a more general setting of search problems over distributions, which captures the SQ model, as detailed in <cit.>, Section 6.
The second result that is needed for our proof is the following lower bound due to <cit.>.
Let ϵ>0, and let ⊆̋{± 1}^ be a hypothesis space with strong SQ dimension (,̋ 2ϵ) ≥ 3 (see Definition <ref>).
Then for any SQ algorithm using m queries to Ø^adv(τ) with tolerance
τ≥ 2/√(),
there exist h^* ∈$̋ such that ifoutputs anϵ-approximation toh^*, thenm > τ^2/3.
§.§.§ Proof of Theorem <ref>
Setδ = 2/3. LetDbe a distribution over. Assume towards contradiction that
there exists a learning algorithmsuch that for anyh^* ∈$̋, given oracle access to 𝒪:= 𝒪_D,h^* and using m≤√()/12 samples from
Ø, the algorithm outputs an ϵ-approximation to h^* with probability at least δ.
We then apply Theorem <ref> for δ' = δ/2 to simulate the
algorithm using Ø^adv:= Ø^adv_D,h^*. The resulting algorithm uses m ≤√()/12 queries to Ø^adv(τ)
for τ =δ'^2/m > 4/(3√()) ≥ 2/√() and has success probability of at least δ - δ' = δ/2 > 1/3. By Theorem <ref> we obtain a contradiction, as m > τ^2/3 > √()/2.
§ MISSING PROOFS OF SECTION <REF>
§.§ Proof of Theorem <ref>
Let d=_( , α, 3ϵ/2). Note that the eluder dimension is always at least 1, so the theorem trivially holds if d≤ 9. In the remainder of the proof assume that d≥10.
From the definition of the monotonic dissimilarity dimension, there exists τ≥3ϵ/2 such that d=(̣,α,τ). Let (f_1, a_1),…,(f_d, a_d) be a sequence satisfying the dimension conditions for τ. We will show that
the first d/9 elements of this sequence also satisfy the conditions of the eluder dimension for some ϵ'≥ϵ.
Specifically, we will show that there is some ϵ' ≥ϵ such that every element a_j with j≤d/9 in the sequence above is ϵ'-independent of its predecessors. That is, we will show that for every such element a_j, there exists a pair of functions f,f' ∈ that satisfy
√(∑_i=1^j-1f(a_i) - f'(a_i)^2 )≤ϵ',
yet it also holds that f(a_j) - f'(a_j) > ϵ'.
By definition of the dissimilarity dimension, there exists c ≤α -τ
such that for all i<j,
f_j(a_i) - c = (f_i,a_i)(f_j,a_j) - c≤τ/√(d).
Then, by the triangle inequality,
f_j(a_i) - f_j+1(a_i)
=
f_j(a_i) - c +c - f_j+1(a_i)≤f_j(a_i) - c + f_j+1(a_i) - c≤2τ/√(d).
Therefore,
f_j(a_i) - f_j+1(a_i)^2 ≤4τ^2/d,
and so for all j ≤d/9 it holds that,
∑_i=1^j-1(f_j(a_i) - f_j+1(a_i))^2 <4τ^2/9.
Next, recall that for all j ≤ d we have
f_j(a_j) ≥α≥ c+τ and f_j+1(a_j) ≤ c + τ/√(d).
Thus,
f_j(a_j) - f_j+1(a_j)≥ c+τ - c - τ/√(d) >2τ/3,
where the last inequality holds for d ≥ 10.
Overall, Eqs. (<ref>) and (<ref>) then demonstrate that for ϵ'=2τ/3≥ϵ, the element a_j is ϵ'-independent of its predecessors, finishing the proof.
§.§ Proof of Theorem <ref>
Our proof uses the following result on ranks of perturbed identity matrices (see <cit.>):
Let ∈^d× d be a symmetric matrix such that A_ii=1 for all i and A_ij≤1/√(d) for all i j. Then ()>d/2.
The proof begins by constructing a matrix whose entries are derived from the evaluation values of elements that satisfy the dimension condition. Then we bound the rank of from above as well as from below. The lower bound is expressed in terms of the dimension d = _̣(, α, ϵ) while the upper bound is expressed in terms of n. Combining the bounds then yields the result of the theorem.
Construction of .
Let (f_θ_1, _1),…,(f_θ_d, _d) denote the alternatives that satisfy the dimension conditions, with respect to some value
c such that c ≤α - ϵ (see Definition <ref>). Define to be the d × d matrix with entries M_ij = ⟨θ_i, _j ⟩-c for i,j ≤ d. Note that all diagonal entries of are at least α-c≥ϵ, and all other entries are in -ϵ/√(d), ϵ/√(d).
Upper bound on ().
Let ∈^d× d be the matrix of inner products, K_ij=⟨θ_i, _j ⟩, and let be the Gram matrix for the set of vectors θ_1,…,θ_d,_1,…,_d. Then is a 2d × 2d matrix
of the rank at most n, because the vectors are of the dimension n (see, e.g., <cit.>), and is a submatrix of , so () ≤() ≤ n. Moreover,
=-c^⊤, where is the all-ones vector in ^d. Therefore, by subadditivity of rank,
()
=
(-c^⊤)
≤() + (-c^⊤)
≤
n+1.
Lower bound on ().
Let ∈^d× d be the diagonal matrix
with entries D_ii=1/√(M_ii). Since M_ii≥ϵ>0, we have 0<D_ii≤1/√(ϵ). Consider the matrix
'=.
Then
(')=()=(),
because the matrix is non-singular (see <cit.>).
Furthermore, matrix ' satisfies M'_ii=1 for all i and
[t]M'_ij=D_ii M_ij D_jj≤1/√(ϵ)·ϵ/√(d)·1/√(ϵ)
=
1/√(d)
for all i j.
Consider the symmetric matrix =('+(')^⊤)/2. Then, we also have S_ii=1 for all i, and S_ij≤1/√(d) for all i j. Thus, by Lemma <ref>, ()>d/2. Moreover, by the subadditivity of the rank
d/2<()≤('/2)+(')^⊤/2=2(').
Combining Eqs. (<ref>), (<ref>) and (<ref>), we therefore obtain
d/2<2(')=2()≤2n+2,
and so d<4n+4. Since d is an integer, we must have d≤ 4n+3.
In the special case that α=1, we have ⟨θ_i, _i ⟩≥ 1 for all i≤ d, which is only possible when θ_i=_i for all i≤ d. As a result, the matrices and ' are both symmetric, and thus
=' and
d/2<()=(')=()≤ n+1,
implying that d≤ 2n+1.
§.§ Proof of Theorem <ref>
Using an existing bound on the eluder dimension for GLM bandits (<cit.>, Proposition 7), and the fact that our dimension is bounded by the eluder dimension (Theorem <ref>) the result follows.
§.§ Proof of Theorem <ref>
Denote d = _̣(^_b, 1-b, ϵ). Notice that since b < 1, for any , ∈_n such that ≠ it holds that f_,b() < f_,b() = 1-b. Let (f__1,b, _1), …, (f__d,b, _d) be a sequence of elements satisfying the dimension definition, with respect to a corresponding scalar c ≤ 1-b-ϵ. Since the evaluation is symmetric for ReLU functions, we can view this sequence as a set, and denote U = {_1,…_d}. In addition, note that by the dimension definition, for all ∈ U, = 1.
We start by proving an upper bound on d. Assume d ≥ 9.
First, consider the case c ≤ ϵ/3. In this case, for all i ≠ j we have f_{u_j,b}(u_i) ≤ c + ϵ/√(d) ≤ 2ϵ/3, and hence ⟨u_i, u_j⟩ ≤ b + 2ϵ/3. Thus U is a subset of the unit sphere in which every pair u ≠ u' satisfies ⟨u, u'⟩ ≤ b + 2ϵ/3, and so d = |U| is at most the maximal size of such a subset U_0.
A standard sphere covering argument shows that the size of such a set is upper bounded as follows. The δ-covering number of the unit sphere is at most (3/δ)^n (<cit.>, Cor. 4.2.13). Thus, there are at most (3/δ)^n points such that each pair u ≠ u' satisfies ‖u - u'‖ ≥ δ, or equivalently (since ‖u - u'‖^2 = 2 - 2⟨u, u'⟩ for unit vectors)
⟨u, u'⟩ ≤ 1 - δ^2/2.
By setting δ = √(2(1-b-2ϵ/3)) we get that |U_0| ≤ (3/δ)^n ≤ (3/2√(2(ϵ-2ϵ/3)))^n ≤ (4/√(ϵ))^n, which yields the desired bound.
Now, consider the case c > ϵ/3. In this case, for all i ≠ j, we have that f_{u_j,b}(u_i) ≥ c - ϵ/√(d) ≥ c - ϵ/3 > 0, and so ⟨u_i, u_j⟩ > b. Let c' = c + b. Thus, it must also hold that |⟨u_i, u_j⟩ - c'| ≤ ϵ/√(d) for all i ≠ j. Note that c' < 1 - ϵ. Then, by applying Lemma <ref> we get that
d is upper bounded by 2n + 4. Overall, the bound in the claim holds.
Next, we show a lower bound on d in the case b = 1 - ϵ (so that α = 1-b = ϵ), by lower bounding the size of the set U defined above. We now apply a sphere packing argument, which shows that there exists such a set U with size |U| ≥ (1/(2ϵ))^{n/2}. We follow a similar argument as above. Specifically, the δ-packing number of the unit sphere is at least (1/δ)^n (<cit.>, Cor. 4.2.13). Plugging in δ = √(2ϵ) yields the desired bound.
§.§ Proof of Proposition <ref>
We start with an auxiliary lemma:
Let 𝒵_1 and 𝒵_2 be two sets, and let α ∈ ℝ and ϵ > 0. Denote 𝒵 = 𝒵_1 ∪ 𝒵_2 and let ρ: 𝒵 × 𝒵 → ℝ be an evaluation function. Then the dissimilarity dimension satisfies
dim_ρ(𝒵, α, ϵ) ≤ dim_ρ(𝒵_1, α, ϵ) + dim_ρ(𝒵_2, α, ϵ),
and likewise for its monotonic version,
dim^mono_ρ(𝒵, α, ϵ) ≤ dim^mono_ρ(𝒵_1, α, ϵ) + dim^mono_ρ(𝒵_2, α, ϵ).
Let z_1, …, z_d be a sequence of elements of 𝒵 such that there exists c ≤ α - ϵ with |ρ(z_i | z_j) - c| ≤ ϵ/√(d) for all i < j, and ρ(z_i | z_i) ≥ α. Let I_1, I_2 ⊆ [d] be disjoint sets of indices with {z_i}_{i ∈ I_1} ⊆ 𝒵_1 and I_2 = [d] ∖ I_1. Consider the sub-sequence z_{ℓ_1}, …, z_{ℓ_{|I_1|}}, ordered by appearance in z_1, …, z_d, of the elements with indices in I_1. By definition, for all i < j,
|ρ(z_{ℓ_i} | z_{ℓ_j}) - c| ≤ ϵ/√(d) ≤ ϵ/√(|I_1|).
Therefore the sequence z_{ℓ_1}, …, z_{ℓ_{|I_1|}} satisfies |ρ(z_{ℓ_i} | z_{ℓ_j}) - c| ≤ ϵ/√(|I_1|) for all i < j, and therefore |I_1| ≤ dim_ρ(𝒵_1, α, ϵ). The same logic implies |I_2| ≤ dim_ρ(𝒵_2, α, ϵ).
To get the monotonic version, let ϵ^* = argmax_{ϵ' ≥ ϵ} dim_ρ(𝒵, α, ϵ'). Then
dim^mono_ρ(𝒵, α, ϵ) = dim_ρ(𝒵, α, ϵ^*)
≤ dim_ρ(𝒵_1, α, ϵ^*) + dim_ρ(𝒵_2, α, ϵ^*)
≤ dim^mono_ρ(𝒵_1, α, ϵ) + dim^mono_ρ(𝒵_2, α, ϵ),
where the first inequality is the first statement of the lemma, proved above, and the second follows from the definition of the monotonic dimension together with ϵ^* ≥ ϵ.
We can now construct the classes that demonstrate the separation between the eluder dimension and the dissimilarity dimension.
We consider two overlapping semicircles, indexed by j ∈ {0,1}, and defined as
U_0 = {(cos x, sin x) : x ∈ (-π/2, π/2)}
and
U_1 = {(cos x, sin x) : x ∈ (0, π)}.
For each j ∈ {0,1}, and any N ∈ ℕ and ϵ > 0, we define the function class
F_{j,N,ϵ} = { f_{θ,S,σ} : θ a unit vector in ℝ^2 with θ ∉ U_j, S ⊆ U_j, |S| = N, σ ∈ {±ϵ}^S },
containing functions
f_{θ,S,σ}(a) =
0 if a ∈ U_j ∖ S,
σ(a) if a ∈ S,
⟨θ, a⟩ if a ∉ U_j.
In words, the functions in the class F_{j,N,ϵ} are linear outside of the semicircle U_j,
and zero in the semicircle U_j, except for a set of size N, where they can take any combination of values +ϵ and -ϵ. For any N ∈ ℕ and ϵ > 0, we define
the class F_{N,ϵ} ≡ ⋃_{j ∈ {0,1}} F_{j,N,ϵ} and show that
this class has a constant dissimilarity dimension but its eluder dimension is at least N.
Finally, consider the action set 𝒜 given by the unit circle and the function class F_{N,ϵ} as defined above. Let 𝒵_{N,ϵ} = F_{N,ϵ} × 𝒜, let ρ be the corresponding structured-bandit evaluation, and let ϵ ∈ (0,1/2). We now show that the eluder dimension of F_{N,ϵ} at scale ϵ is at least N, but
dim_ρ(𝒵_{N,ϵ}, 1, ϵ) ≤ 16.
First we prove the following lower bound on the eluder dimension of F_{j,N,ϵ}.
For all j ∈ {0,1}, the eluder dimension of F_{j,N,ϵ} at scale ϵ is at least N.
Let a_1, …, a_N be an arbitrary set of points in U_j and consider the functions {f_i}_{i=1}^{N+1} ⊆ F_{j,N,ϵ} that for all i, i' ≤ N satisfy:
f_i(a_{i'}) = ϵ if i' ≠ i,
- ϵ if i' = i,
and f_{N+1}(a_i) = ϵ for all i ≤ N. We now show that for all i ≤ N, the action a_i is ϵ-independent of a_1, …, a_{i-1} with respect to F_{j,N,ϵ}. This holds since √(∑_{j=1}^{i-1} (f_i(a_j) - f_{N+1}(a_j))^2) = 0 while |f_i(a_i) - f_{N+1}(a_i)| = 2ϵ > ϵ. This finalizes the proof.
Denote 𝒵_{N,ϵ} = F_{N,ϵ} × 𝒜 and let ϵ ∈ (0,1/2). Then, dim_ρ(𝒵_{N,ϵ}, 1, ϵ) ≤ 16.
Denote 𝒵_{j,N,ϵ} = F_{j,N,ϵ} × 𝒜 for all j ∈ {0,1}. We start by showing that for all j ∈ {0,1}, dim_ρ(𝒵_{j,N,ϵ}, 1, ϵ) ≤ 8. Let z_1, …, z_d be a maximal sequence certifying the dissimilarity dimension dim_ρ(𝒵_{j,N,ϵ}, 1, ϵ) ≥ d, i.e., it holds that
|ρ(z_i | z_{i'}) - c| ≤ ϵ/√(d) for all i < i', while ρ(z_i | z_i) ≥ 1.
Since ρ(z_i | z_i) ≥ 1, it must be the case that z_i = (f_{θ_i, S_i, σ_i}, a_i) with a_i ∉ U_j (since otherwise the self-evaluations would be strictly less than 1). This implies that ρ(z_i | z_{i'}) = ⟨a_i, θ_{i'}⟩ for all i < i'. Consequently, the score evaluations of all z_1, …, z_d are equivalent to the score evaluations of the corresponding linear problem. Thus Theorem <ref> implies the maximum length of such a sequence can be of size at most 2 × 2 + 4 = 8. Finally, the sub-additivity of the dissimilarity dimension (see Lemma <ref>) implies
dim_ρ(𝒵_{N,ϵ}, 1, ϵ) ≤ ∑_{j=0}^{1} dim_ρ(𝒵_{j,N,ϵ}, 1, ϵ) ≤ 16.
Combining the results of Lemmas <ref> and <ref> finalizes the proof of Proposition <ref>.
§ MULTI-ARMED BANDITS
In this section we explore the dissimilarity dimension of the K-armed bandit problem. In this setting the learner interacts with a set of K arms and at every step of a sequential interaction pulls an arm a_t ∈ [K] and receives a reward r_t such that 𝔼[r_t] = μ_{a_t}, where μ_{a_t} is the mean reward of arm a_t. For simplicity we will assume μ_a ∈ [0,1] for all a ∈ [K] and that |r_t| ≤ 1.
The K-armed bandit problem is an instance of structured bandits where 𝒜 = [K] and ℱ = [0,1]^K. The dissimilarity dimension of the K-armed bandit problem satisfies:
Consider the action set 𝒜 = [K] and the function class ℱ = [0,1]^K as defined above. Let α ∈ [0,1], let 𝒵 = ℱ × 𝒜, let ρ be the structured-bandit evaluation, and let ϵ ∈ (0,1/2). Then dim_ρ(𝒵, α, ϵ) ≤ K.
Let c ≤ α - ϵ and let z_1, …, z_d ∈ 𝒵 with z_i = (f_i, a_i) be a maximal sequence such that ρ(z_i | z_i) ≥ α while
|ρ(z_i | z_j) - c| ≤ ϵ/√(d)
for i < j. Substituting the definition of ρ, this implies f_i(a_i) ≥ α for all i ∈ [d], while |f_j(a_i) - c| ≤ ϵ/√(d) for all i < j. By definition of c, if d ≥ 2,
f_j(a_i ) ≤α - ϵ + ϵ/√(d)≤α - ( 1-1/√(2))ϵ < α - ϵ/4, for all i <j.
Let I_i = {a_ℓ}_{ℓ=1}^{i} be the set of actions up to index i in the tuple sequence z_1, …, z_i. Equation <ref> implies that f_j(a) ≤ α - ϵ/4 for all a ∈ I_i and all j > i. Since a_j satisfies f_j(a_j) ≥ α > α - ϵ/4, this implies a_j ∉ I_i. We conclude that a_i ≠ a_j for all i < j. Since there are at most K different arm values, this implies d ≤ K.
§.§ Structured Bandits
We will now explain in detail what Algorithm <ref> reduces to in the structured bandits setting from Example <ref>. We write z_i = (f_i, a_i) for all i ∈ [T]. The large evaluation set 𝒵_α can be reduced to the following set of functions,
F_α = { f ∈ ℱ : max_{a ∈ 𝒜} f(a) ≥ α }.
Since the evaluation ρ(z_i | z) is independent of a for z = (f,a), the least-squares step z_t = argmin_{z ∈ 𝒵_α} ∑_{i=1}^{t-1} (ρ(z_i | z) - r_i)^2 reduces to
f_t = argmin_{f ∈ F_α} ∑_{i=1}^{t-1} (f(a_i) - r_i)^2.
Finally, to ensure the queried element has a self-evaluation of at least α, we output a_t = argmax_{a ∈ 𝒜} f_t(a).
We will now explain in detail what the Optimistic Interactive Estimation Algorithm <ref> reduces to in the structured bandits setting from Example <ref>. We write z_i = (f_i, a_i) for all i ∈ [T].
The least-squares objective, ẑ_t = argmin_{z ∈ 𝒵_α} ∑_{i=1}^{t-1} (ρ(z_i | z) - r_i)^2,
can be written as
f̂_t = argmin_{f ∈ F_α} ∑_{i=1}^{t-1} (f(a_i) - r_i)^2.
The action component of the z element in this objective can be ignored since the evaluation function does not depend on it. The confidence ball ℬ_t = { z ∈ 𝒵 : ∑_{i=1}^{t-1} (ρ(z_i | z) - ρ(z_i | ẑ_t))^2 ≤ R } reduces to
ℬ_t = { f ∈ ℱ : ∑_{i=1}^{t-1} (f(a_i) - f̂_t(a_i))^2 ≤ R }.
The query can then be reduced to the action component of z_t,
a_t = argmax_{(f,a) ∈ ℬ_t × 𝒜} f(a).
Algorithm <ref> summarizes this reduction and corresponds to the standard optimistic least squares for structured bandit problems from <cit.>.
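To make the reduction concrete, here is a small, self-contained Python sketch of optimistic least squares over a finite function class and a finite action set. The class, the noise level, and the confidence radius R below are placeholder choices for illustration, not quantities prescribed by the text.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finite instance: K arms and a finite class of candidate
# mean-reward vectors, one of which is the truth.
K, T, R = 5, 200, 2.0
F = [rng.uniform(0.0, 1.0, size=K) for _ in range(20)]   # candidate f: [K] -> [0,1]
f_star = F[3]                                            # unknown true means

hist_a, hist_r = [], []
for t in range(T):
    if hist_a:
        a = np.array(hist_a)
        r = np.array(hist_r)
        losses = np.array([np.sum((f[a] - r) ** 2) for f in F])  # least squares
        f_hat = F[int(np.argmin(losses))]
        # Confidence ball: functions whose predictions stay close to f_hat on the data.
        ball = [f for f in F if np.sum((f[a] - f_hat[a]) ** 2) <= R]
    else:
        ball = list(F)
    # Optimistic query: maximize f(arm) jointly over the ball and the actions.
    f_t, a_t = max(((f, arm) for f in ball for arm in range(K)),
                   key=lambda fa: fa[0][fa[1]])
    hist_a.append(a_t)
    hist_r.append(f_star[a_t] + 0.1 * rng.standard_normal())

print("arm pull frequencies:", np.bincount(hist_a, minlength=K) / T)
print("best arm:", int(np.argmax(f_star)))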
entry_id: http://arxiv.org/abs/2306.02598v1
published: 20230605050611
title: Medial packing, frustration and competing network phases in strongly-segregated block copolymers
authors: ["Michael S. Dimitriyev", "Abhiram Reddy", "Gregory M. Grason"]
primary_category: cond-mat.soft
categories: ["cond-mat.soft", "cond-mat.mtrl-sci"]
Self-consistent field theory (SCFT) has established that for cubic network phases in diblock copolymer melts, the double-gyroid (DG) is thermodynamically stable relative to the competitor double-diamond (DD) and double-primitive (DP) phases, and exhibits a window of stability intermediate to the classical lamellar and columnar phases. This competition is widely thought to be controlled by “packing frustration” – the incompatibility of uniformly filling melts with a locally preferred chain packing motif. Here, we reassess the thermodynamics of cubic network formation in strongly-segregated diblock melts, based on a recently developed medial strong segregation theory (“mSST”) approach that directly connects the shape and thermodynamics of chain packing environments to the medial geometry of tubular network surfaces. We first show that medial packing significantly relaxes prior SST upper bounds on the free energy of network phases, which we attribute to the spreading of terminal chain ends within network nodal regions. Exploring geometric and thermodynamic metrics of chain packing in network phases, we show that mSST reproduces effects dependent on the elastic asymmetry of the blocks that are consistent with SCFT at large χ N.
We then characterize geometric frustration in terms of the spatially-variant distributions of local entropic and enthalpic costs throughout the morphologies, extracted from mSST predictions.
Analyzing these distributions, we find that the DG morphology, due to its unique medial geometry in the nodal regions, is stabilized by the incorporation of favorable, quasi-lamellar packing over much of its morphology, motifs which are inaccessible to DD and DP morphologies due to “interior corners” in their medial geometries.
Finally, we use our results to analyze “hot spots” of chain stretching and discuss implications for network susceptibility to the uptake of guest molecules.
§ INTRODUCTION
Amphiphilic molecules, from lyotropic liquid crystals to macromolecular block copolymer (BCP) analogs, are known to assemble into a wide range of morphologies upon microphase separation, from the classical lamellar, columnar, and spherical phases, to a variety of intercatenated, triply-periodic network phases <cit.>.
While the propensity of such molecular building blocks to assemble into the classical phases can be largely understood from simple packing arguments relating interfacial curvature to asymmetry of molecular architecture, the stability of network phases over the classical phases for certain molecular architectures is difficult to rationalize using such arguments.
Such networks are “in-between” phases that typically form in conditions where layers and columns are in close competition, exhibiting interfaces that are almost cylindrical in some regions, almost flat in others.
Initial heuristic pictures of bicontinuous phase structure formation focused on the likely role of area optimization at the intermaterial dividing surface (IMDS) and the geometric connection to triply-periodic, area-minimizing surfaces <cit.>.
By far, the most commonly considered network phases are those related to the associated family of cubic minimal surfaces: Primitive (P), Diamond (D), and Gyroid (G), which are infinite surfaces of every zero mean curvature and negative Gaussian curvature that divide space into two inter-catenated, equal-volume regions<cit.>.
Unlike in lyotropic or solvated systems, understanding network formation in BCP melts presents a particular challenge due to the requirement of molecules to fill all of space, with chains stretching to fill to the center of the network domains, as well as the entire matrix domain.
The basic antagonism between the usual rules that shape the intermaterial dividing surface (IMDS) – the enthalpy of mixing and entropic rigidity of different blocks – and the constraint that chains must fill all of space is loosely referred to as “packing frustration.”<cit.>
That is, the thermodynamic competition selects for a preferred local arrangement of molecules, which can be described as occupying a certain geometric motif.
These motifs, which can be approximated as wedge-like volumes of particular sizes and shapes, cannot, in general, tile space at uniform density.
Hence, physical BCP assemblies must be composed of ensembles of spatially distorted variants of the preferred motif, and consequently incur an additional free energy penalty associated with that distortion.
Elementary considerations show that all BCP morphologies, with the exception of lamellae, are subject to some measure of frustration<cit.>.
Perhaps the most obvious example derives from the fact that spheres and cylinders do not fill space without gaps.
Hence convex, quasi-spherical or cylindrical domains arranged in 3D crystalline or 2D columnar packings are distorted away from perfect rotational symmetry.
Packing frustration has long been cited to be particularly vexing for the formation of bicontinuous networks in BCP melts<cit.>, leading to narrow equilibrium windows in the diblock copolymer phase diagram (if they are predicted at all), as well as the stability of the double-gyroid (DG) network phase (shown in Fig. <ref>) over competitor double-diamond (DD) and double-primitive (DP) phases, whose structures are displayed in Fig. <ref>.
Efforts to understand the connections between the thermodynamics and complex geometries of bicontinuous networks were stimulated by experiments that reported apparently equilibrium cubic networks in diblock melts at compositions intermediate to where columnar and lamellar morphologies are observed.
Early observations of star diblocks suggested the formation of a DD structure<cit.>, but it was later concluded that in linear diblock melts, under conditions approaching equilibrium, cubic network formation is predominantly DG<cit.>.
The thermodynamic stability of bicontinuous networks in BCP has a complex history, owing in part to a long-standing and apparent discrepancy between strong segregation theory (SST) and numerical studies of self-consistent field theory (SCFT).
On one hand, early SST models of network phases in linear diblocks by Olmsted and Milner <cit.>, and separately by Likhtman and Semenov <cit.>, predicted that network phases are never thermodynamically stable in the χ N →∞ limit due to a large free-energy gap relative to competitor hexagonal cylinder (Hex) and lamellar (Lam) phases.
This was consistent with initial SCFT calculations which suggested that the DG phase might become unstable at very large segregation<cit.>.
However, as resolution of SCFT algorithms improved sufficiently to address very strong segregation regimes, calculations showed that DG phase retains a finite, albeit narrowing window of equilibrium up to at least χ N ≳ 100 <cit.>, which is also consistent with experimental studies of this regime <cit.>.
These original SST models of chain packing in network phases are based on what might be called a “skeletal ansatz," which assumes that the 1D skeletal graph that connects the nodal centers of the tubular network domains constitutes what has been dubbed a terminal boundary<cit.>, i.e. the region of maximal extension of chain trajectories away from the IMDS within a brush-like subdomain composed of one polymer block.
While the skeletal ansatz is a seemingly intuitive proxy for the tubular network regions<cit.>, it has recently been understood that this approximation severely overestimates the entropic penalty of stretching in those domains.
Recently, we<cit.> showed that a more realistic and thermodynamically favorable packing ansatz for DG networks formed by diblocks in the SST limit considers terminal boundaries that spread well away from the 1D skeletal graph, in web-like surface patches derived from the medial map of gyroid morphologies.
In short, the medial map provides the shortest-distance map of points within a volume onto points on its bounding surface as well as a corresponding medial set, which is the set of maximally-distant points at the general “center” of a domain of arbitrary shape<cit.>.
Importantly, we showed that the stability of the DG phase relies on the ability of chain ends to spread out over a web-like surface (shown in Fig. <ref>) in each tubular domain, rather than being forced to stretch to the 1D skeletal graph, thus lowering the entropic penalty for tubular-domain filling to the point where the free-energy gap between competitor morphologies is eliminated.
As a result, we predicted DG stability windows, between Hex and Lam phases for diblocks, that open up and widen as the conformational asymmetry between blocks is increased.
In this article, we apply this medial strong segregation theory (mSST) approach to a comparative study of DG, DD, and DP phases.
In neat BCP systems, DG is almost always the thermodynamically favored network structure.
This has long begged the question, what aspects of the DG morphology make it favorable over its apparently structurally-similar DD and DP cousins?
Long-standing heuristic pictures point to costs of packing frustration associated with the center of the nodes<cit.>.
The DG, DD and DP networks differ in terms of their functionality (i.e. the number of nearest-neighbor nodes within a contiguous tubular domain): 3, 4 and 6, respectively.
It has been suggested that the relatively high free energy of DD and DP relative to DG can be attributed to a frustration cost that increases with node functionality, due to an increasing distance of the domain center from the IMDS for nodal regions with additional tubular interconnections.
While seemingly intuitive, this heuristic picture makes it unclear why any network morphology composed of such “problematic” nodal regions should be favored over the classical cylindrical or layered structures, and whether and when conditions exist such that the competition between network phases can be shifted to higher-functionality structures (e.g. DD over DG).
Here we employ the mSST framework to assess the role of chain packing and its connection to complex domain shape in detail.
We analyze the specific relationships between the shapes of the IMDS and terminal boundaries and the thermodynamic costs of variable stretching and IMDS enthalpy in the competing networks.
Additionally, this medial packing perspective reveals what is uniquely advantageous about DG among its competitor morphologies, a feature that can be associated with terminal boundary geometry, shown in Figs. <ref> and <ref> for all three network phases.
The DG (Fig. <ref>A,B) terminal boundaries are composed of twisted, ribbon-like webs that thread through tubular subdomains and a Gyroid-like surface that subdivides the matrix domain.
While the DD and DP (Fig. <ref>) matrix regions are also divided by minimal surface-like terminal boundaries, their tubular domains possess multiple leaves, or “fins,” that meet in abrupt angles.
This can intuitively be understood as a consequence of the fact that the terminal boundaries are composed of web-like surfaces that span between portions of the network skeletons.
Hence, the terminal boundaries of DD and DP possess corners, which are analogous to singular geometric features that are typically found in the boundaries between matrix brushes for cylinder- and sphere-like domain assemblies (i.e. the corners of Voronoi cells).
While the struts of individual DG nodes are coplanar and can be spanned by a single, roughly triangular surface (appropriately twisting between nodes), the respective tetrahedral and octahedral geometries spanned by the DD and DP struts are not coplanar, and pairs of struts must be joined by distinct webs.
The special “corner-free” nature of DG terminal boundaries facilitates a hybridized morphology, as shown in Fig. <ref>C, which is composed of local regions of quasi-lamellar packing coexistent with curved, cylinder-like regions.
As we demonstrate below, this sharp distinction in terminal boundary geometry between DG relative to DD and DP significantly impacts the nature and thermodynamics of chain packing underlying network phases.
While “packing frustration” is a seemingly intuitive and widely invoked notion in amphiphile self-assembly<cit.>, a central perspective of this article is that it is difficult, if not impossible, to characterize it by a single quantitative measure of the morphology<cit.>, beyond possibly the free energy of a morphology.
On one hand, packing frustration is associated with the variability of BCP packing and the resulting entropic and enthalpic costs<cit.>.
Alternatively, packing frustration is often connected to certain locations in a morphology that can be identified as the “sources" of frustration.
Such a perspective focuses on where is frustration coming from in the packing, and in turn, how this relates to especially costly “hot spots" in the resulting morphology.
Related to this latter concept is a third notion of the susceptibility of a morphology to filling the hot spots by blending in guest molecules (e.g. homopolymers, nanoparticles, or solvent) that can relieve some, if not all, of the frustration<cit.>.
These distinct, yet interrelated, facets of packing frustration challenge a simplified, overarching theoretical understanding of what it is, and more importantly, the development of rational principles for manipulating it via chemical design of supramolecular systems.
In what follows, we make no particular attempt to resolve these multiple, at at times countervailing, perspectives on packing frustration.
Instead, we exploit medial SST to translate fine features of complex morphology into explicit thermodynamic terms in order to assess formation of distinct cubic double-networks in diblock melts.
The primary focus of our analysis will be on the first two facets of frustration: the statistical variability of molecular packing and its spatial correlations with geometric aspects of the morphology.
While blended systems are beyond the primary scope of this article, we include a brief analysis of the “hot-spot" distributions within the distinct tubular vs. matrix domains of networks and discuss the likely implications of hot-spot distributions for susceptibility of frustrated network assembly to incorporate “guest” molecules (e.g. homopolymers) in blends.
The remainder of this article is outlined as follows.
We first introduce the medial strong segregation theory (mSST) and our variational approach to finding optimal network morphologies.
Next we explore the geometry of each of the cubic network phases and identify ways in which the packing environments of the DG phase are optimal in comparison with the DD and DP.
We then examine how these variable packing environments give rise to significant spatial variations in the free energy per chain, with optimal morphologies forming regions of low chain entropy at the cost of high interfacial enthalpy, and vice versa.
Building on our explorations of packing geometry and variations in free energy per chain, we present a multifaceted and generalized picture of packing frustration.
Finally, we consider the distribution of chain stretching free energy in tubular and matrix brush subdomains.
We analyze the shapes and locations of so-called “hot spots” in the chain stretching and discuss potential implications for systems blended with guest molecules (e.g. homopolymers).
§ MEDIAL SST METHODOLOGY
§.§ Strong Segregation Theory
Expanding our attention beyond linear diblocks, we consider the more general situation of A_n_ AB_n_ B equal-arm starblocks, where n_ A and n_ B are the number of branches.
Each branch of a single block has the same number of segments, N_ A/n_ A and N_ B/n_ B, where N_ A and N_ B are the total numbers of A-block and B-block segments, respectively, and N = N_ A + N_ B is the total number of segments in the chain.
It has been shown by Milner <cit.> that the entropic stiffness of such starblock copolymers can be encoded in a single elastic asymmetry parameter ϵ = n_Ba_A/n_Aa_B, where a_ A and a_ B are the block segment lengths.
Note that this elastic asymmetry parameter encodes both architectural asymmetry (n_ A≠ n_ B) as well as conformational asymmetry (a_ A≠ a_ B), and that a starblock can be represented as an equivalent linear diblock with asymmetric segment lengths.
Thus, we consider a parameter space of chains whose properties are encoded in a pair of parameters – the elastic asymmetry ϵ and the A-block fraction f = N_ A/N.
In the SST limit, χ N →∞, the total free energy F is given by<cit.>
F = H + S_ A + S_ B ,
where H is the interfacial enthalpy and S_α for α∈{ A, B} are the free energy contributions arising from reductions in the entropy of stretched chain conformations.
The enthalpy is given by H = γ A, where A is the area of the IMDS and the interfacial energy density γ is expressed as<cit.>
γ = k_B T ρ a √(χ/6) · (2/3) (ϵ_0^{3/2} - ϵ_0^{-3/2})/(ϵ_0 - ϵ_0^{-1}) ≡ k_B T ρ a √(χ/6) γ̃,
where ρ is the segment density, a ≡ √(a_A a_B) is the geometric mean of the two statistical segment lengths a_A and a_B, and ϵ_0 = a_A/a_B is a parameter describing conformational asymmetry, which is folded into the dimensionless number γ̃.
The stretching free energy of each subdomain α is given by
S_α = 1/2κ_αI_α ,
where the entropic stiffness parameters κ_α are given by
κ_A = (3π^2 ρ k_B T)/(4 N^2 a^2) · n^2/(f^2 ϵ) = (3π^2 ρ k_B T)/(4 N^2 a^2) κ̃_A ,
κ_B = (3π^2 ρ k_B T)/(4 N^2 a^2) · n^2 ϵ/(1-f)^2 = (3π^2 ρ k_B T)/(4 N^2 a^2) κ̃_B ,
where n ≡ √(n_A n_B) is the geometric mean of the A- and B-block branch numbers n_A and n_B, ϵ = (n_B/n_A) ϵ_0 is the elastic asymmetry parameter, and κ̃_α are dimensionless parameters that contain all dependence on architectural and conformational asymmetry between the two blocks.
The geometric cost of chain stretching is well-approximated by the results of parabolic brush theory<cit.> and encoded in the total second moments of volume
I_α≡∫_V_α dV z^2
for each subdomain α, where z is a brush height coordinate extending from the IMDS at z = 0.
While parabolic brush theory accounts for the statistics of chains whose free ends are allowed to exist anywhere within a brush in a manner consistent with melt conditions, it fails to be self-consistent for brushes on surfaces whose curvatures force the free chain ends to splay apart.<cit.>
However, we have previously shown<cit.> that the necessary “end-exclusion zone” corrections to parabolic brush theory are typically negligible for network phases, increasing the free energy per chain by ≲ 0.01%, with appreciable increases in free energy predicted for highly-curved sphere phases, justifying our use of parabolic brush theory for the results presented in this manuscript.
Next, we develop an ansatz for how chains fill space.
We imagine that the average chain conformations follow trajectories that join points 𝐩_ A on the terminal surface 𝐓_ A in the tubular A-block subdomain to points 𝐩_ B on the terminal surface 𝐓_ B in the matrix B-block subdomain.
Since all of space is occupied by chains, any point in space can be mapped to a pair of terminal points on each of the terminal surfaces along such trajectories, the collection of which foliates space and defines the association map.
In SST, chains are in a strongly-stretched limit, which suggests that straight-line association maps 𝐩(t) = (1-t)𝐩_ A + t𝐩_ B for t ∈ [0,1] are reasonable approximations of chain trajectories.
Triplets of points on each terminal surface, {𝐩_α,1,𝐩_α,2,𝐩_α,3} (for α = A, B), form triangular faces and lines joining corresponding pairs of these points 𝐩_i(t) = (1-t)𝐩_ A,i + t𝐩_ B,i (for i = 1, 2, 3) provide the long edges of slender, pentahedral “wedge” volumes that approximate local packing environments for collections of chains, as depicted in Fig. <ref>A.
Each wedge can be constructed as a stack of triangular regions for fixed values of t, terminating at the triangular facets on the two terminal surfaces, so that an arbitrary point 𝐗 within any triangular slice can be represented as
𝐗(u,v,t) = (1-v)𝐩_1(t) + (1 - u)v𝐩_2(t) + u v 𝐩_3(t) ,
where u,v ∈ [0,1] provide local coordinates for each triangle at fixed t.
If the wedge is sufficiently thin then the centroidal vector 𝐂≡∑_i=1^3Δ𝐩_i/3, where Δ𝐩_i ≡𝐩_ B,i - 𝐩_ A,i, provides an average trajectory of chains within the wedge and a definition of wedge height, h ≡ |𝐂| and a local “wedge height” coordinate ζ≡ ht.
The area A(ζ) of each triangular cross-section of the wedge is given by a version of Steiner's formula<cit.>,
A(ζ) = 𝐂̂·𝐀_ A(1 + 2ℋζ + 𝒦ζ^2) ,
where 𝐀_ A = (𝐩_ A,3 - 𝐩_ A,2)×(𝐩_ A,1 - 𝐩_ A,2)/2 is the area vector of the triangular facet on the terminal surface 𝐓_ A and
ℋ ≡𝐂̂/4h𝐂̂·𝐀_ A·[(Δ𝐩_3 - Δ𝐩_2)×(𝐩_ A,1 - 𝐩_ A,2) + (𝐩_ A,3 - 𝐩_ A,2)×(Δ𝐩_1 - Δ𝐩_2)]
𝒦 ≡𝐂̂/2h^2𝐂̂·𝐀_ A·[Δ𝐩_3×Δ𝐩_1 + Δ𝐩_2×Δ𝐩_3 + Δ𝐩_1×Δ𝐩_2]
are analogous to the mean and Gaussian curvatures, respectively.
In the strong segregation limit, each wedge is sharply divided into two sub-wedges by a dividing triangle at height ζ = h_b, representing a small surface patch of the IMDS.
The location of the IMDS patch is determined by local volume balance, V(h_b) = fV(h), where the volume of a sub-wedge with one end on 𝐓_ A and the other at height ζ is given by
V(ζ) = 𝐂̂·𝐀_ A ζ(1 + ℋζ+𝒦/3ζ^2) ,
so that h_b is determined by solving a cubic equation for each wedge.
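As a minimal numerical sketch of this volume-balance step (assuming the per-wedge quantities Ĉ·𝐀_A, ℋ, 𝒦, and the total height h have already been extracted from the mesh; the numbers below are placeholders), one can solve the cubic directly:

import numpy as np

def wedge_volume(zeta, proj_area, H, K):
    # V(zeta) = (C_hat . A_A) * zeta * (1 + H*zeta + (K/3)*zeta**2)
    return proj_area * zeta * (1.0 + H * zeta + (K / 3.0) * zeta ** 2)

def balanced_height(f, h, proj_area, H, K):
    """Solve V(h_b) = f * V(h) for the IMDS height h_b within a single wedge."""
    target = f * wedge_volume(h, proj_area, H, K)
    # Cubic in h_b: (K/3) h_b^3 + H h_b^2 + h_b - target/proj_area = 0
    roots = np.roots([K / 3.0, H, 1.0, -target / proj_area])
    real = roots[np.abs(roots.imag) < 1e-9].real
    physical = real[(real > 0.0) & (real < h)]
    return float(physical.min())

# toy, nearly prismatic wedge (placeholder values)
print(balanced_height(f=0.3, h=1.0, proj_area=0.02, H=0.1, K=0.01))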
It is important to emphasize that while this representation of local packing environments allows for local volume balance to be satisfied everywhere by solving for the distance h_b of the IMDS from the A-block terminal surface 𝐓_ A, the parametrization that we employ fixes the local surface normal 𝐍̂ of the IMDS to lie along the centroidal direction 𝐂̂.
However, this enforcement of IMDS orientation over-constrains local packing and since two neighboring wedges generally have different balanced heights h_b and centroidal directions 𝐂̂, the resulting IMDS is generally discontinuous.
To fix this, we individually adjust the vertices of each IMDS patch so that neighboring patches are continuous across each shared edge.
This is done by averaging over the vertex positions IMDS patches at shared edges, ensuring that any error in local volume balance due to this adjustment is kept minimal; in our calculations, the error in volume balance is typically less than 1% of the total volume.
As a result of these local adjustments between neighboring wedges, there is a local tilt between chain orientations and IMDS normal (illustrated in Fig. <ref>B), the implications of which are discussed in a later section.
In terms of wedge geometry, the second moments of volume of a single wedge (indexed by μ) are given by
I_ A,μ = h_b,μ^3𝐂̂_μ·𝐀_ A,μ/3[1 + ℋ_μ/2h_b,μ + 𝒦_μ/10h_b,μ^2]
I_ B,μ = 𝐂̂_μ·𝐀_ A,μ(h_μ - h_b,μ)^3/3[1 + ℋ_μ/2(h_b,μ + 3h_μ) + 𝒦_μ/10(h_b,μ^2 + 3h_b,μh_μ+6h_μ^2)]
,
where the subscript μ indicates that the corresponding parameters are defined according to individual wedges.
The total free energy F is then given as a sum over individual wedge contributions, i.e. F = ∑_μ F_μ; similarly, the total enthalpy H = ∑_μ H_μ and the total costs of stretching each block are S_α = ∑_μ S_μ, α.
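Continuing the sketch above, the per-wedge second moments follow directly from these expressions once h_b is known; again the input numbers are placeholders:

def second_moments(h_b, h, proj_area, H, K):
    """Per-wedge second moments of volume I_A, I_B from the expressions above,
    with proj_area = C_hat . A_A and (H, K) the wedge curvature-like coefficients."""
    I_A = (h_b ** 3) * proj_area / 3.0 * (1.0 + 0.5 * H * h_b + 0.1 * K * h_b ** 2)
    dh = h - h_b
    I_B = proj_area * dh ** 3 / 3.0 * (
        1.0 + 0.5 * H * (h_b + 3.0 * h)
        + 0.1 * K * (h_b ** 2 + 3.0 * h_b * h + 6.0 * h ** 2)
    )
    return I_A, I_B

print(second_moments(h_b=0.32, h=1.0, proj_area=0.02, H=0.1, K=0.01))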
§.§ Medial ansatz and variational calculation
We propose that the terminal surfaces 𝐓_ A and 𝐓_ B are well-approximated by the medial sets of a suitable family of generating surfaces.
This is based on the observation that the cost of stretching chains in a domain is related to the thickness of the domain.
Thus, this cost is minimized when the thickness of the domain is minimized.
Given a certain surface 𝐆 (which we shall call a “generating surface”), any point 𝐱 has a corresponding point 𝐩 on 𝐆 that minimizes the Euclidean distance, which is found via the medial map 𝐦(𝐱) <cit.>.
The medial map can be determined from the geometry of 𝐆: the Euclidean distance |𝐩 - 𝐱| is minimized when 𝐩 - 𝐱∝𝐍̂(𝐩), where 𝐍̂(𝐩) is the surface normal evaluated at 𝐩; since 𝐱 may in general lie on many different lines that lie along different surface normals of 𝐆, the medial map 𝐦(𝐱) selects the global minimizer of the Euclidean distance from all of these possibilities.
If 𝐱 is near the center of the region bounded by 𝐆 then it is potentially equidistant from multiple points on 𝐆: the collection of these points whose medial map 𝐦 has multiple solutions defines the medial set and gives a rigorous definition of the “center” of a region.
Since the medial map lies along local surface normals, it falls into the family of straight-line association maps.
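As an illustration of the medial map (a brute-force nearest-point query against a densely sampled generating surface; the spherical test surface and sampling density are arbitrary stand-ins), consider the following sketch:

import numpy as np
from scipy.spatial import cKDTree

# Toy generating surface G: points sampled on the unit sphere.
rng = np.random.default_rng(0)
v = rng.standard_normal((20000, 3))
G = v / np.linalg.norm(v, axis=1, keepdims=True)
tree = cKDTree(G)

def medial_map(points):
    """Approximate m(x): the closest sampled surface point for each query point x."""
    dist, idx = tree.query(points)
    return G[idx], dist

# For a sphere, every interior point maps along its radius, and the medial set
# collapses to the center, which is (nearly) equidistant from all of G.
x = np.array([[0.5, 0.0, 0.0], [0.0, 0.0, 0.0]])
foot, dist = medial_map(x)
print(dist)   # approximately [0.5, 1.0]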
To generate medial surfaces that are consistent with the symmetries of the three network phases, we specify generating surfaces 𝐆 as level sets of the form Ψ(𝐫) = ±1, where the sign selects a generating surface for one of the two bicontinuous domains.
These level sets have symmetries given by the non-centrosymmetric subgroups I4_1 3 2 (DG), F d 3 m (DD), and P m 3 m (DP) of the network phases' crystallographic groups, I a 3 d (DG), P n 3 m (DD), and I m 3 m (DP) <cit.>.
As such, the level sets can be represented by a set of basis functions that are adapted to each space group, i.e. Ψ = ∑_n c_nψ_n(𝐫/D), where ψ_n(𝐫/D) represent the symmetry-adapted basis functions, labeled by n, and c_n are expansion coefficients, and D represents the periodicity of the unit cell.
We truncate this expansion at the first four modes; for a list of the basis functions, please refer to the supplementary text.
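For concreteness, a sketch of such a truncated level-set expansion is given below; only the familiar leading gyroid-like harmonic is included as an illustration, whereas the production calculations use the four symmetry-adapted modes listed in the supplementary text.

import numpy as np

def psi_leading_gyroid(r, D, c1=1.0):
    """Leading term of a symmetry-adapted expansion Psi = sum_n c_n psi_n(r/D).
    Only the first gyroid-like harmonic is included here (illustrative)."""
    X, Y, Z = 2.0 * np.pi * np.asarray(r, dtype=float).T / D
    return c1 * (np.sin(X) * np.cos(Y) + np.sin(Y) * np.cos(Z) + np.sin(Z) * np.cos(X))

# Generating surfaces are the level sets Psi(r) = +1 and Psi(r) = -1.
r = np.array([[0.10, 0.20, 0.30], [0.50, 0.50, 0.50]])
print(psi_leading_gyroid(r, D=1.0))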
It is important to emphasize that the generating surface 𝐆 is in fact distinct from the volume-balanced IMDS: while we propose that medial packing optimizes the stretching cost of chains, it generally fails to yield environments that satisfy the local volume balance constraint.
Therefore, we use generating surfaces that are close to a reasonable IMDS shape as an initial guess, with the expectation that the corresponding volume-balanced IMDS will be distinct while preserving features of the generating surface, such as its symmetries, topology, and coarse features.
Given a generating surface specified by coefficients {c_n} and a fixed set of symmetry-adapted basis functions, we calculate the corresponding medial sets and, treating them as suitable terminal surfaces, construct volume-balanced wedges.
The total free energy of a given network structure depends on the value of the D-spacing.
Instead of fixing the D-spacing, we allow the network to adjust in size, changing the total number of chains within a unit cell.
Under isotropic scaling, the IMDS area is A = D^2 Ã, the total volume is V = D^3 Ṽ, and the total second moments of volume are I_α = D^5 Ĩ_α, where Ã, Ṽ, and Ĩ_α are the nondimensional forms of IMDS area, total volume, and total second moments of volume.
For a fixed morphology, the D-spacing can then be found by minimizing the free energy per chain F̂ ≡ F/n_ch (where the number of chains is given by n_ch = ρ V/N) with respect to D, yielding
D = √(2/(3π^{4/3})) (χ N)^{1/6} N^{1/2} a λ,
where
λ ≡ (γ̃ Ã/(∑_α κ̃_α Ĩ_α))^{1/3}
is a dimensionless scale factor that depends only on the wedge geometries and chain architecture.
The free energy per chain is then given by
F̂ = (3π^{2/3}/(4Ṽ)) (κ̃_A Ĩ_A + κ̃_B Ĩ_B)^{1/3} (γ̃ Ã)^{2/3} (χ N)^{1/3} k_B T ,
recovering the expected ∼ k_ BT(χ N)^1/3 scaling as χ N →∞.
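A small sketch of this bookkeeping (taking the reduced, per-unit-cell quantities Ã, Ṽ, Ĩ_α, κ̃_α, and γ̃ as inputs; the numbers below are placeholders) returns λ and the shape-dependent part of the free energy per chain, to be multiplied by the morphology-independent prefactor and the (χN)^{1/3} k_B T scaling given above:

import numpy as np

def reduced_free_energy(A_t, V_t, I_A, I_B, kA_t, kB_t, gamma_t):
    """Dimensionless scale factor lambda and the shape-dependent combination
    (kA_t*I_A + kB_t*I_B)^(1/3) * (gamma_t*A_t)^(2/3) / V_t entering F_hat."""
    stretch = kA_t * I_A + kB_t * I_B
    lam = (gamma_t * A_t / stretch) ** (1.0 / 3.0)
    shape_factor = stretch ** (1.0 / 3.0) * (gamma_t * A_t) ** (2.0 / 3.0) / V_t
    return lam, shape_factor

# placeholder reduced geometry of a unit cell (illustrative numbers only)
print(reduced_free_energy(A_t=3.1, V_t=1.0, I_A=0.004, I_B=0.02,
                          kA_t=4.0, kB_t=2.0, gamma_t=0.5))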
Finally, we minimize the free energy per chain (Eq. <ref>) over the set of generating surfaces.
This is done by choosing an initial set of basis coefficients c_n, meshing the resulting generating surface 𝐆 (we used ∼ 10^4 facets per nodal IMDS), calculating the free energy per chain F̂, and then performing a search for values of c_n that minimize F̂ using a Nelder-Mead algorithm.
Since changes in A-block fraction f and elastic asymmetry ϵ result in different equilibrium morphologies, this minimization is performed for each fixed value of the two parameters (f,ϵ).
Our choice of minimizing the free energy over a set of four basis functions rather than the two of our previous study<cit.> was in order to assess (i) whether the addition of further Fourier modes to the generating surfaces had a significant impact on the results of the calculation and (ii) the generalizability of the variational calculation to further degrees of freedom.
As shown in SI Fig. S1, these additional modes lead to slight reductions (on the order of 10^-5 for DG, 10^-3 for DD and DP) in the calculated free energy (within the constraints of the convergence criteria supplied to the Nelder-Mead algorithm) that seemingly diminish with each added mode.
Nonetheless, this demonstrates the potential applicability of the variational form of mSST to calculations involving additional degrees of freedom.
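Schematically, the outer variational loop looks like the following sketch; the routine free_energy_per_chain_from_modes is a hypothetical placeholder standing in for the full mSST pipeline (mesh the level set, build medial wedges, balance volumes, and sum the per-wedge free energies), so a smooth dummy objective is used here just so the example runs end to end.

import numpy as np
from scipy.optimize import minimize

def free_energy_per_chain_from_modes(c, f, eps):
    """Placeholder for the mSST pipeline: mesh Psi = sum_n c_n psi_n, compute
    medial surfaces, tessellate wedges, and return F_hat. A smooth dummy
    objective stands in here so the sketch is runnable."""
    c = np.asarray(c, dtype=float)
    return 1.0 + np.sum((c - np.array([1.0, 0.1, -0.05, 0.02])) ** 2) + 0.01 * f / eps

def optimize_generating_surface(f, eps, c0=(1.0, 0.0, 0.0, 0.0)):
    res = minimize(free_energy_per_chain_from_modes, x0=np.array(c0), args=(f, eps),
                   method="Nelder-Mead",
                   options={"xatol": 1e-4, "fatol": 1e-6, "maxiter": 2000})
    return res.x, res.fun

c_opt, F_opt = optimize_generating_surface(f=0.3, eps=1.0)
print(c_opt, F_opt)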
§.§ Large χ N, self-consistent field theory
We use self-consistent field theory (SCFT) calculations<cit.> as a point of comparison for the equilibrium network morphologies predicted by mSST and to demonstrate how our results at χ N →∞ translate to finite (but large) χ N.
The SCFT calculations were performed using the open-source PSCF software <cit.> at χ N = 40, 50, 60, and 75.
To simplify comparisons, we chose parameters relative to unit chain length, N = 1, and B block statistical segment length, a_ B = 1.
In these units, we tuned the elastic asymmetry ϵ via the A block statistical segment length, a_ A = ϵ.
Finally, in order to extract information about chain trajectories, we computed the polar order parameter field 𝐩(𝐱) from the segment flux<cit.>.
§ THERMODYNAMICS OF COMPETING NETWORKS
We first analyze the relative free energies of DG, DD and DP network phases in comparison to competitor Lam and Hex phases, and their dependence on composition and elastic asymmetry between the two blocks.
As shown in Fig. <ref>A, the four-mode mSST calculations of the free energy for each network are consistently smaller than predictions using the skeletal terminal surface model (here referred to as sSST) of Olmsted and Milner <cit.> for the case of elastically symmetric diblocks (ϵ = 1).
Here, the SST calculation of the hexagonal cylinder (Hex) phase is performed using the kinked path ansatz <cit.>.
Once again, we find that the free energy of the mSST-constructed DG (mDG) coincides with that of the lamellar (Lam) and Hex phases at f ≈ 0.29, dramatically relaxing the conditions for stability compared with previous sSST predictions (sDG).
Moreover, the mDG is consistently ∼ 2-3% lower in free energy to sDG, compared with the ∼ 1% of mDD to sDD and ∼ 3% of mDP to sDP.
While those changes in free energy for DD and DP are significant on the scale of SST thermodynamics in general, they are nevertheless small compared to the much larger gaps of those networks relative to Lam and Hex (of order 3% and 12% for DD and DP respectively at f = 0.29).
This demonstrates that relaxation from skeletal to medial terminal boundary geometry accounts for a much greater, and more significant, reduction of the entropic penalty associated with nodal packing for DG relative to its DD and DP competitors.
We discuss geometrical interpretations of this “more effective” medial packing for DG in the following section.
In Fig. <ref>B and C, we show the free energy comparison of these same competing phases for elastically asymmetric diblocks: ϵ = 0.3, with a relatively stiffer tubular A block; and ϵ = 3.0, with a relatively stiffer matrix B block.
We observe windows of mDG stability over Lam and Hex phases for both limits of elastic asymmetry, consistent with our previously reported results <cit.>.
However, while the mDD and mDP structures are both far from stable for ϵ = 0.3 (stiffer A block), mDD is in close competition with the Lam and Hex phases for ϵ = 3.0 (stiffer B block) in the window where DG is stable.
This indicates an asymmetry in how medial packing relaxes entropic penalties in the two blocks: the cost of stretching for mDD and mDP is much larger than for mDG when the A block is stiffer, whereas the three phases have comparable stretching costs when the B block is stiffer.
To analyze how the relative thermodynamic stability of cubic network phases varies with elastic asymmetry, we compare their free energies at the compositions where Lam and Hex have equal free energy in Fig. <ref>.
These points fall on the dashed line in Fig. <ref>A in the f-ϵ plane, falling within the DG stability windows at high and low ϵ.
We compare both the mSST predictions which model the asymptotic χ N →∞ limit as well as finite-χ N SCFT calculations for increasingly large values of segregation (χ N = 40, 50, 60 and 75), shown in Fig. <ref>B.
As previously reported, both mSST and SCFT show that the free energy of DG relative to Lam and Hex competitors varies non-monotonically with ϵ, decreasing both as ϵ gets larger and smaller than ϵ≈ 1.
This non-monotonic behavior suggests that the nature of sub-domain chain packing in DG adjusts to accommodate regimes of both relatively stiffer matrix and minority block regimes, while maintaining thermodynamically favorable aspects of the morphology, which we discuss in more detail below.
In comparison, SCFT at these finite χ N predicts a lower free energy for DG than Hex or Lam for the full range of ϵ, indicating that it retains an equilibrium stability window up through χ N = 75.
By contrast, mSST predicts that the free energy of DG exceeds that of Lam and Hex at intermediate 1.0 ≲ϵ≲ 2.0, corresponding to the vanishing equilibrium stability window, as previously reported <cit.>.
While we note some basic consistency in the non-monotonic variation of the DG to Lam free energy with ϵ, mSST shows greater range in relative free energy variation.
In part, we can attribute some of this difference to finite-χ N corrections to the free energy that are absent from the mSST calculation.
Notably, the magnitudes of relative SCFT free energies increase considerably with χ N over this fairly modest range, suggesting that extending SCFT to the asymptotic χ N →∞ would likely result in relative free energies more comparable to the mSST predictions.
Prior studies <cit.> suggest such finite-χ N corrections may be significant (on the scale of the few-% differences) for χ N as large as 10^3-10^4.
Fig. <ref>B shows that the mSST predicts that the relative free energies of DD and DP decrease monotonically with ϵ.
This behavior is somewhat more intuitive than the non-monotonic variation of DG, as it is consistent with the interpretation that the tubular A blocks are the most costly regions in the morphology for chain stretching so that decreasing the entropic penalty for stretching in those blocks, by increasing ϵ, diminishes the relative free energy gap of those phases relative to competitors.
Furthermore, unlike the case of DG, which decreases for small ϵ, mSST predictions imply that the medial packing of DD and DP structures is unable to reorganize sufficiently in the regime of relatively stiffer A blocks to mitigate the presumably large entropic costs of tubular node packing.
Somewhat surprising in this context is the fact that SCFT for DD and DP show a non-monotonic variation of relative free energy, decreasing from a maximal value at intermediate elastic asymmetry for both ϵ≳ 1 and ϵ≲ 1.
As we describe and analyze in the next sections, this latter trend suggests that additional packing motifs, outside of the scope of strictly medial packing, are likely important for DD and DP, at least in this low-ϵ regime, whereas the overall consistency between mSST and SCFT predictions for DG over the full ϵ regime suggests that this morphology more closely follows medial packing in the strong-segregation limit.
§ SUBDOMAIN PACKING GEOMETRY
The thermodynamic comparisons of the previous section imply that medial packing accounts for a significant relaxation of the free energy of all networks relative to the skeletal packing ansatz, although that relaxation is less prominent for DD and DP than for DG, which gains thermodynamic stability for modest values of elastic asymmetry[Prior SST models did not predict a stability window for neat diblocks for any network phase for ϵ≲ 9.].
Moreover, large-χ N SCFT predictions for the free energy dependence on elastic asymmetry depart significantly from mSST predictions in the low-ϵ regime for DD and DP, suggesting that optimal modes of packing are likely to deviate from medial motifs.
In this section, we analyze geometric features of the competing packing as predicted by both mSST and large-χ N SCFT in order to quantify the morphological signatures of packing frustration and understand the role of medial packing in the formation of different networks.
§.§ IMDS shapes
We first consider the geometry of network morphologies by comparing the IMDS shapes in regimes where networks compete for stability (i.e. at compositions where Lam and Hex are degenerate) for three different values of elastic asymmetry, spanning from a stiffer tubular A-block to a stiffer matrix B-block.
In Fig. <ref>A, we show the IMDS shapes for DG, DD and DP predicted by mSST, superposed with the respective terminal boundaries for A- and B-blocks.
Foremost, these show a clear effect of the variable tubular domain fraction, with morphologies varying from relatively “slender” tubular domains of f≈ 0.07 for ϵ = 0.3 to “majority tubular” networks at f≈ 0.55 for ϵ = 3.0.
In these examples, and most obviously for small f, it can be observed that the A domain sheaths the A-block terminal webs, leading to an IMDS that maintains roughly constant distance from the closest span of the web.
For the cases of DD and DP, which have “interior corners” in their terminal webs, this sheathing leads then to inward puckering of the IMDS at positions where three leaves of the terminal web meet (i.e. above nodal centers along ⟨ 111⟩ directions).
The smooth, corner-free terminal web of DG requires no inward curvature of the IMDS above the node centers.
For comparison, we show also the IMDS shapes from χ N =75 SCFT predictions in Fig. <ref>B (for other values of χ N, see SI Figs. S2-S4).
These show that the gross features of the IMDS shape, and its variation with elastic asymmetry and composition, are well-captured by mSST for DG, as well as DD at least for cases of stiffer matrix blocks (i.e. ϵ > 1).
SCFT calculations of IMDS shapes of DP show the greatest departure from mSST predictions.
On one hand, the degree of inward pucker of the IMDS above the node centers is much less, if not outwardly curved, in SCFT solutions than is predicted by mSST.
Additionally, SCFT solutions show a greater variation of the local thickness of A-block domains along the struts, with distance from strut to IMDS dropping substantially more in SCFT solutions than mSST predictions.
As discussed below, we attribute this difference to the extreme thermodynamic costs of packing the interior corners of the DP node, which are likely sub-optimally resolved by a strictly medial arrangement and exhibit additional degrees of freedom outside of the variational class of mSST morphologies studied here.
Indeed we note that the bicontinuous network topology of DP actually becomes unstable for sufficiently low elastic asymmetry.
The interconnected 6-valent network A-domains pinch off into disjoint and highly non-convex closed shapes (with a BCC symmetry) for ϵ≲ 0.8 (see SI Fig. S4).
§.§ Terminal boundary geometry
We next analyze the shapes of terminal boundaries predicted by mSST.
As shown in Fig. <ref>A above, the terminal boundaries of the B-block matrix domain maintain shapes that largely conform the corresponding triply-periodic minimal surface shapes, changing little over the large range of composition and elastic asymmetry explored.
Hence, in Fig. <ref>, we focus our attention on the previously unexplored shape aspects of the web-like, tubular A-block domain terminal surfaces, and their more obvious variations with BCP structure.
The DG terminal sets are single sheets that twist as they connect between neighboring nodes, whereas the DD and DP medal sets have fins that intersect along the skeletal graph.
Since the skeletal graph lies within each terminal set, the distribution of terminal set widths can provide a measure of departure from skeletal packing.
We focus on the terminal set width measured at two locations of interest: about the node center and at the midpoint of a strut.
Here, the width is defined as the minimum radius of a sphere, centered at either the node or the mid-strut (w_cen and w_strut, respectively), that intersects with the boundary of the terminal set, as shown in Fig. <ref>A-C.
In Fig. <ref>D, we plot these two widths in comparison with a characteristic length of the IMDS (given by √(A_ IMDS)) for each mSST equilibrium structure along the Lam-Hex boundary.
First, we note that DG has the largest terminal web in terms of its relative area compared to the IMDS.
This is consistent with the observation from Fig. <ref>A above that the medial packing benefits DG relative to skeletal packing more than for DD and DP.
That is, spreading of terminal ends away from the 1D skeletal graphs lowers the entropic costs of filling space, and crudely stated, the larger the terminal web, the greater the free energy relaxation from the skeletal ansatz.
The larger terminal web for DG is somewhat counter intuitive, since the additional fins of the DD and DP webs would presumably raise their total area.
The reduced area of the terminal webs of DD and DP relative to DG can therefore be attributed to their much narrower widths.
The width w_ cen characterizes the width of the terminal webs that meet at the center of the A-block terminal surface.
We see that the DG terminal surface, which consists of a single contiguous flat piece, is the widest of the three, whereas the DP terminal surface, which has eight planar fins per node, has the smallest width (the inner terminal web for DD has four fins per node).
As we discuss below, the corner-free surface of DG allows for quasi-planar packing of chains on either side of the 3-fold junction, whereas the interior corners of the DD and DP terminal boundaries require chain trajectories to tilt relative to the boundaries in order to fill those inner central points.
Tilted chains in these “multi-finned” webs are relatively extended and the degree of tilt at the center can be expected to increase as the interior angle between fins decreases.
Hence, we expect an entropic penalty for widening the central portion of the terminal web that grows progressively with the number of fins meeting at the node, and substantially suppresses the relative size of w_cen for DD and DP relative to DG.
The strut width w_ strut decreases as the minority domain is made stiffer for both DG and DP, which is likely due to imprinting of rotational symmetry of the web on the sheathed IMDS.
At low ϵ, when the domain thickness is small, this results in faceting of IMDS shapes, apparent in the mid-strut cross-sections shown in Fig. <ref>A-C, reflecting the n-fold symmetry of the terminal webs.
Rapid undulations in the IMDS curvature caused by these facets introduce unfavorable interfacial cost, which in turn favors narrowing of the terminal web mid-strut.
Notably, the DD phase always has the narrowest medial surface along the struts.
This is due to the inversion symmetry about its strut center, which results in the disappearance of the dominant 3-fold rotational symmetry of the generating surface, leaving a generating surface that has a lower-order 6-fold rotational symmetry that is seen in the 6-fold symmetry of the medial surface.
As previously noted, for DG, w_ strut drops substantially (a nearly 2-3 fold reduction), for low ϵ, while w_ cen retains a large width relative to the IMDS size.
This transition suggests that the smooth, single-leaf geometries of the inner web of DG permits a simple adaptation to the prerogatives of both stiffer matrix and stiffer tubular block chains, which is not available for the more complex, multi-leaf webs of DD and DP.
Furthermore, we attribute the re-entrant stability of DG relative to Lam and Hex for low ϵ to the mitigation of the otherwise destabilizing effects of variable stretching of tubular blocks, facilitated by the narrowing of w_ strut when the A block becomes shorter and stiffer.
§.§ Variable IMDS curvature
We next consider packing frustration as measured through the mean curvature H of the IMDS.
A long-standing heuristic for understanding self-assembled amphiphilic phases focuses on area-minimizing interfaces, which are favored in situations dominated by interfacial free energy.
Ignoring, to a first approximation, the additional costs of chain packing and simply considering shapes that optimize IMDS area subject to a global volume constraint on adjoining subdomains would suggest that the optimal IMDS shapes are in the family of constant-mean curvature (CMC) surfaces <cit.>.
However, it has been recognized that for BCP melts in general and for complex structures like bicontinuous networks in particular, variations in the local packing environments involve variations in the balance between chain entropy and interfacial enthalpy, which result in local departures of the IMDS from strictly area-minimizing (i.e. CMC) shapes.
Thus, the variance in mean curvature, ⟨Δ H^2 ⟩≡⟨ H^2 ⟩ - ⟨ H ⟩^2, can be regarded as a measure of packing frustration.
This variance of mean-curvature was first quantified by Matsen and Bates based on intermediate-χ N SCFT predictions <cit.>.
More recently, both experimental tomographic reconstructions of DG assemblies, as well as SCFT calculations over a larger range of composition, segregation and conformational asymmetry, suggest the deviation from or agreement with CMC-like curvature itself varies considerably with interaction and structural parameters of the chains <cit.>.
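Given a triangulated IMDS with per-facet mean curvatures and areas (the arrays below are placeholders standing in for mesh output), the variance ⟨Δ H^2⟩ is simply an area-weighted second moment; a minimal sketch:

import numpy as np

def mean_curvature_variance(H_facet, area_facet):
    """Area-weighted variance <H^2> - <H>^2 over a triangulated IMDS."""
    w = np.asarray(area_facet, dtype=float)
    w = w / w.sum()
    H = np.asarray(H_facet, dtype=float)
    H_mean = np.sum(w * H)
    return np.sum(w * H ** 2) - H_mean ** 2

# placeholder per-facet data standing in for a meshed IMDS
rng = np.random.default_rng(2)
H_facet = 0.8 + 0.1 * rng.standard_normal(1000)
area_facet = rng.uniform(0.5, 1.5, size=1000)
print(mean_curvature_variance(H_facet, area_facet))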
As shown in Fig. <ref>, the mean curvature variance ⟨Δ H^2 ⟩ of the DG phase is consistently smaller than that of the DD and DP phases along the Lam-Hex phase boundary.
Here, we compare the results of mSST calculations to SCFT calculations at finite segregation χ N and find good agreement, particularly for large values of ϵ.
The most significant differences appear in the DP phase, for which the IMDS calculated using mSST is qualitatively different from that calculated using SCFT, which exhibits a topological transition for ϵ≲ 0.8.
Nevertheless, since mSST involves a direct construction of the variable packing environments, it provides a clear picture of the sources of ⟨Δ H^2 ⟩.
These variations arise from the major geometric differences between the smooth matrix medial surface and tubular medial surface, which has edges and corners.
The elastic asymmetry parameter ϵ adjusts which medial surface plays the dominant role in determining the geometry of the IMDS.
When the tubular domain is stiffer (ϵ < 1), the chains conform to the tubular medial surface, forming a layer with minimal thickness variations, as shown in Fig. <ref>A, and thickness variations are relegated to the matrix domain.
Conversely, when the matrix domain is stiffer (ϵ > 1), chains conform to the matrix medial surface.
Since the matrix medial surface is a CMC surface (the H = 0 surface), the IMDS is closer to CMC when the matrix phase is stiffer, but develops larger curvature variations due to the singular features of the tubular medial surface as the tubular domain grows stiffer (i.e. decreasing ϵ); this is clearly seen in the mean and Gaussian curvature distributions in SI Figs. S5 and S6.
This suggests a simple heuristic for the stability of DG over DD and DP based on how singular the tubular medial surfaces are.
The DG tubular medial surface is a twisting ribbon that promotes the development of lamellar-like packing environments and reduced curvature of the IMDS, with edges that lead to saddle-like curvature of the IMDS (as illustrated in Fig. <ref>B).
Meanwhile, the DD and DP tubular medial surfaces have multiple flat, planar patches that intersect in creases and corners, which are regions of singular curvature that lead to strong curvature variations on the IMDS.
Due to the tetrahedral coordination of the DD nodal region, the planar patches intersect at a larger angle than those of the DP, and thus the resulting curvature variation of the IMDS is reduced in comparison with the DP.
Note that the SCFT calculations show a non-monotonic decrease in curvature variations for decreasing ϵ that is not captured by mSST.
This highlights a regime in which the medial map is not an accurate representation of chain trajectories, which we discuss later.
§.§ Chain tilt
Here, we analyze distributions of chain tilt with respect to the IMDS.
On one hand, non-zero local tilt implies that chain stretching and IMDS area per chain are locally enhanced relative to strictly normal packing; hence, variable tilt in a complex packing is an elementary signature of packing frustration.
Beyond this, local tilt is a basic consequence of local volume balance in the medial construction underlying mSST for diblock melts.
Notably, variable chain tilt has been well-appreciated as a mechanism to relax frustration in bilayer models of lyotropic surfactant assemblies, as a means of optimizing the compromise between favorable curvature and thickness subject to constraints of filling space in regions occupied by solvophobic chains.<cit.>
The mSST approximates chain trajectories as lying along a collection of straight-line paths joining the two terminal surfaces, with those terminal surfaces deriving from the medial map of the generating surface, which ultimately differs, at least slightly, from the IMDS.
The geometry of chain trajectories is modeled by the tessellation of wedge-like volumes spanning between the A and B terminal boundaries.
As depicted in Fig. <ref>, each wedge is a thin bundle of these straight-line paths with orientations that vary gently about a local mean orientation, supplied by centroidal vector 𝐂 joining the centroid of the wedge's triangular facet on the tubular terminal surface to that of the matrix terminal surface, with the mean chain trajectories in the wedge extending along 𝐂̂.
As described above, due to the local volume balance constraint requiring the relative volume of A to B portions to satisfy f:(1-f) at all points, the ultimate IMDS patch within each wedge is typically tilted relative to the average chain trajectory.
This tilt, quantified in Fig. <ref>A is characterized by an angle θ between the local IMDS normal unit vector 𝐍̂ and the corresponding wedge centroidal unit vector 𝐂̂, such that cosθ = 𝐍̂·𝐂̂.
Since generating surfaces that yield a specific pair of medial surfaces have normal vectors that lie along the collection of centroidal vectors 𝐂, the terminal surfaces of a tilted IMDS are generally not medial surfaces of that IMDS.
Therefore, while the medial map leads to polymer trajectories that minimize the cost of stretching from any given generating surface, these are not trajectories that minimize the cost of stretching to the IMDS.
In this sense, enforcing the local volume balance constraint frustrates the ability of chains to lie along trajectories that optimize the cost of stretching, and tilt is a measure of this form of packing frustration (i.e. deviation from strictly medial packing).
A similar notion of tilt can be computed at finite-χ N using results of SCFT.
Here, tilt is given by the average chain orientation near the IMDS, which is supplied by the polar order parameter 𝐩; this is computed using the method developed by Prasad et al.<cit.>
The tilt angle θ is then the angle between 𝐩̂ evaluated at the IMDS and the local surface normal 𝐍̂, via cosθ = 𝐍̂·𝐩̂.
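To make these tilt diagnostics concrete, the following minimal Python/NumPy sketch computes the local tilt angle and the tangential projection 𝐂_⊥ used further below for the surface texture maps; the function and array names are illustrative only and are not taken from either the mSST or SCFT code.

    import numpy as np

    def tilt_and_projection(N_hat, C):
        # N_hat: (M, 3) unit normals of the IMDS at M sample points.
        # C:     (M, 3) local chain-orientation vectors (the wedge centroidal
        #        vectors in mSST, or the polar order parameter p evaluated
        #        at the IMDS in SCFT).
        C_hat = C / np.linalg.norm(C, axis=1, keepdims=True)
        cos_theta = np.clip(np.sum(N_hat * C_hat, axis=1), -1.0, 1.0)
        theta = np.degrees(np.arccos(cos_theta))            # local tilt angle
        # tangential part of the orientation, C_perp = C - (N.C) N, whose
        # sources and sinks sit at the high-symmetry <111> points on the IMDS
        C_perp = C - np.sum(N_hat * C, axis=1, keepdims=True) * N_hat
        return theta, C_perp

A simple (unweighted) mean tilt over a triangulated IMDS is then theta.mean(); an area-weighted average would use the facet areas as weights.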
We show the mean tilt at the IMDS (relative to the local normal) for all three competitor networks in Fig. <ref>A for both mSST and χ N =75 SCFT.
The mean tilt angle θ decreases monotonically with increasing ϵ along the Lam-Hex degeneracy line, which indicates a larger effect of packing frustration for lower A-block fraction f.
Chain packing, on average, tends more towards being strictly medial in the regime when matrix domains have relatively stiffer segments and larger domain fractions.
Out of the three network morphologies, DG is least tilted (≲ 10^∘) and DP is most tilted (≈ 15-25^∘), particularly for low ϵ, suggesting that this packing frustration effect is particularly sensitive to the geometry of the tubular medial surface for low f.
Notably, the degree of tilt measured from SCFT is always at least slightly less than mSST predictions, suggesting a measure of relaxation relative to the mSST packing, particularly for the low-ϵ regime.
As illustrated in Fig. <ref>A above, in this narrow-tube regime, mSST predicts that the narrow A-block domains tightly wrap the terminal webs.
We note that the magnitudes of variable tilt, as well as the much larger value for DP, for these diblock melt predictions are generally consistent with elastic bilayer model calculations of so-called “normal” lyotropic cubic phases, where the hydrocarbons occupy the tubular regions (predicted tilts are considerably lower for the inverse phases<cit.>).
Local distributions of tilt, as shown in Fig. <ref>B, show the tilt texture on the IMDS, highlighting where this frustration of medial packing is the largest.
Notably, we find basic agreement in the spatial patterns between mSST and SCFT (shown in Fig. <ref>C), at least for DG and DD structures, particularly for larger values of ϵ.
For each structure, the tilt is maximal for chains that terminate on the flat portions of the tubular medial surface surrounding the nodal center, but reaches local minima near the rotational symmetry axes along ⟨ 111⟩, including the regions on the IMDS where chains stretch to the nodal center of the tubular medial surface.
These high symmetry points are sources and sinks of the vector field 𝐂_⊥, obtained by projecting the molecular orientation onto the surface via 𝐂_⊥≡𝐂 - (𝐍̂·𝐂)𝐍̂.
The qualitative features of the tilt patterns and the differences between tilt magnitudes between competing cubic networks derive from features of the underlying medial map that serves as template for chain packing.
In particular, chain trajectories are, in general, inclined with respect to the inner (A block) terminal webs that derive from the medial surfaces of the tubular network generating surfaces.
Since the IMDS tends to envelop these medial webs, the tilt pattern relative to the webs tends to imprint onto tilt at the IMDS, particularly as the A-block composition (and thus the sub-domain thickness) decreases (see e.g. SI Fig. S7).
The respective magnitudes of tilt can then be understood in terms of the inclination of the trajectories with respect to the medial webs, which is minimal along the “monkey saddle’’ directions, i.e. the ⟨ 111 ⟩ axes that reach the node centers.
While the medial web of DG is normal to this axis, the fact that webs of DD and DP are composed of multiple leaves implies non-zero inclination between the ⟨ 111 ⟩ and the web normals where those leaves join at the node center.
Those inclination angles are arccos (√(2/3)) ≃ 35.3^∘ and arccos (√(1/3)) ≃ 54.7^∘ for DD and DP, respectively.
Correspondingly, the extent of chain tilt imprinted on the IMDS in mSST models, particularly near the boundary between Lam and Hex at low ϵ, is increasingly larger for DD, and then DP, relative to DG.
This observation, combined with the fact that SCFT predictions show significant deviations for DD and DP morphologies from medial packing in this regime, confirms the connection between the optimal thermodynamics of DG and the smooth geometry of its inner (tubular) medial webs.
§.§ Chain bending
While the medial packing ansatz represents a significantly lower upper bound on the SST free energy for cubic network phases relative to the prior skeletal ansatz, several aspects of the comparisons between mSST and large-χ N SCFT predictions suggest that the straight-line trajectories are, at times, far from optimal.
This is most notable for the cases of stiffer tubular domains for DD and DP which have more complex inner terminal webs and show departures from even qualitative dependence of free energies and IMDS curvature on ϵ in these regimes.
One alternative motif that has been proposed and previously explored is the possibility of chain “kinking,” in which chain trajectories bend sharply at the IMDS, which provides additional degrees of freedom for the morphology to relax its free energy at uniform filling.
One way of directly quantifying the expected departure of chain trajectories from straight (i.e. medial) paths is by computing the average bend b(𝐫) ≡ |(𝐩̂·∇) 𝐩̂| of the trajectories predicted from SCFT, where 𝐩 is the polar order parameter.
Notably, strictly straight-line trajectories correspond to b(𝐫)=0.
As a measure of potential kinking, we analyze the distributions of b(𝐫) at the IMDS, as the integrated bend through the IMDS (along chain trajectories) gives the bend angle of polar order parameter from A- to B-side of the subdomain.
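As a concrete illustration of these diagnostics, a minimal Python/NumPy sketch is given below; it assumes the SCFT polar order parameter and A-block density are available on a uniform grid, and the array names, the finite-difference stencil, and the pointwise kink-angle estimate b/|∇ϕ| anticipated at the end of this subsection are our own illustrative choices, not the implementation used for the figures.

    import numpy as np

    def bend_and_kink(p, phi, dx):
        # p:   (3, Nx, Ny, Nz) polar order parameter field on a uniform grid
        # phi: (Nx, Ny, Nz) A-block density field
        # dx:  (assumed) uniform, isotropic grid spacing
        p_hat = p / np.maximum(np.linalg.norm(p, axis=0), 1e-12)
        # derivatives d p_hat_i / d x_j by central differences
        # (one-sided at the box edges; a periodic stencil could be substituted)
        grads = np.array([np.gradient(p_hat[i], dx) for i in range(3)])
        # directional derivative (p_hat . grad) p_hat and its magnitude b
        bend_vec = np.einsum('j...,ij...->i...', p_hat, grads)
        b = np.linalg.norm(bend_vec, axis=0)
        # rough effective kink angle b * w, with w ~ |grad phi|^-1, evaluated pointwise
        grad_phi = np.linalg.norm(np.array(np.gradient(phi, dx)), axis=0)
        kink_deg = np.degrees(b / np.maximum(grad_phi, 1e-12))
        return b, bend_vec, kink_deg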
The average molecular bend at the IMDS is shown in Fig. <ref>A.
We observe that the average bend of the DG is always less than that of the DD and DP phases, suggesting that among competing networks it most closely follows the straight path ansatz of mSST.
Furthermore, the bend quickly increases as the tubular domain becomes stiffer than the matrix domain (ϵ < 1) particularly for DD and DP, confirming our expectation that the IMDS warping due to the medial packing ansatz is partially relieved by highly curved chain trajectories.
To further explore how chains bend at the IMDS, we map the magnitude b and direction 𝐛̂ of the bend on the IMDS for ϵ = 1 in Fig. <ref>B.
These bend maps reveal distinct patterns of minimal and maximal bend, with bend reaching a minimum along the ⟨ 111 ⟩ directions and chains generally curving away as they pass from the A-block subdomain to the B-block subdomain.
The enormous bend near the mid-strut region of the IMDS for DP, along with the bend direction, suggests that the A-block chain ends are drawn towards the node even when the junctions are closer to the strut regions of the IMDS.
This is further supported by the substantial difference in A-block domain thickness near the node as compared with the thickness of the mid-strut and may provide a rationale for the change in topology of the DP phase when the A-block chains are increased in stiffness.
Indeed, we see a dramatically different distribution of bend for ϵ = 0.3, as shown in SI Fig. S8.
Incorporating chain bending into SST remains challenging and has been studied in most detail for 2D phases, using the so-called “kinked path” ansatz for computing the free energy of cylinder phases <cit.>.
These results show that packing frustration introduced by the corners of Voronoi cells in the cylinder lattice can be accommodated by tilting the chain trajectories in the matrix domain while maintaining purely radial paths in the cylinder domain.
An effectively kinked path construction was used by Likhtman and Semenov for network phases, but with A trajectories extending to the skeletal terminal boundary <cit.>.
Comparing these results for ϵ=1, it would appear that this ansatz led to free energies that are almost identical to the straight-path, skeletal ansatz of Olmsted and Milner, and still 2-3% larger than mSST, which suggests that relaxation of the A block terminal boundary may be much more important for at least the DD morphology under the conditions reported in ref. <cit.>.
The large values of bend for cases of elastic asymmetry, particularly for DP, suggest that incorporation of kinked paths into mSST would allow for significant structural relaxation.
From the bend, we can estimate an effective “kinking angle” at the IMDS, approximated by the bend b times the interfacial thickness w, which we estimate from the gradient of the A-block density field evaluated at the IMDS, i.e. w∼ |∇ϕ|^-1_ IMDS.
Averaging the product b/|∇ϕ|_ IMDS, we find a lower range estimate of ∼ 0.3^∘ for DG at ϵ = 1.0 and an upper range estimate of ∼ 20^∘ for DP at ϵ = 0.3 (both at χ N = 75).
In general, a kinked path can relax the cost of tilting chains at the IMDS, allowing a single block to be less tilted at the cost of making the other block more tilted.
In the case of elastic asymmetry, it is more favorable for the IMDS to be oriented such that the tilt of the stiffer block is minimized, leading to increased tilt of the softer block, which effectively “absorbs” more of the consequences of packing frustration, resulting in greater kinking of the chain path.
This is supported by an observed reversal in the bend direction as the B-block becomes stiffer than the A-block (see SI Fig. S8).
The scale of this effect depends on the ratio of the entropic stiffnesses of the two blocks and is somewhat analogous to the refraction of light, arising from differences in the phase velocity of electromagnetic waves passing through different media.
Indeed, we find that the bend is minimal in the case of elastic symmetry (ϵ = 1), but the persistence of non-zero bend points towards further dependence on chain length constraints as well as non-local space filling requirements.
§ SUBDOMAIN CHAIN THERMODYNAMICS
We now turn to the distribution of thermodynamic costs of BCP chain filling based on mSST models of cubic networks.
Here, the goal is to analyze the optimal free energy compromise between (interfacial) enthalpic and (stretching) entropic driving forces in each morphology, how those distinct thermodynamic costs are distributed spatially in the structure, correlating with geometrical features of the packing, and how to assess the role of variance in these quantities in their thermodynamic stability relative to other morphologies.
§.§ Co-variation of local chain enthalpy and entropy
Here, we analyze the distribution of free energies per chain, to understand the relative variations of enthalpy and entropy in each structure.
In essence, we consider the BCPs that are associated with a particular point on the IMDS, i.e. the chains whose junctions lie at a given point.
Since each area element of the IMDS corresponds to a local density of chains, the enthalpic costs are proportional to the IMDS area per chain.
Additionally, the A and B blocks departing from a given area element contribute to the strongly-stretched brush regions that extend up to the terminal boundaries of the associated subdomain, and are characterized by entropic costs that are proportional to the stretching per chain.
While notions of packing frustration are often discussed and diagnosed in terms of particularly large costs for either areal or entropic (stretching) free energy, these are inextricably linked for any complex morphology.
Regions that are locally extended, and therefore have pronounced stretching cost, also tend to have a correspondingly low area per chain.
This is most obvious when considering locally flat (i.e. lamellar) geometries with a given subdomain thickness, h.
Stretching costs per chain grow as h^2, while area per chain falls off with extension as 1/h.
While this special relationship is altered somewhat in non-flat geometries (e.g. cylinder, spherical or saddle-like volume elements), it generally holds that, locally, the area per chain and stretching cost will be anti-correlated in a given structure.
However, at the same time, as we illustrate below, thermodynamic equilibrium in the strong-segregation limit <cit.> requires that on average (over an entire morphology) the mean costs of stretching and interfacial enthalpy maintain exact proportion.
Taken together, as we shall show, these two features require that the local entropic and enthalpic costs exhibit a complex co-variation within a given morphology, leading to a nuanced picture of inhomogeneous packing thermodynamics and its interpretation in the context of packing frustration.
Our mSST construction decomposes space into narrow wedge-like packing environments, the μth wedge corresponding to the chains associated with a specific IMDS point.
The (per chain) free energy F̂ in a given morphology is a sum over free energy contributions from each wedge, i.e. F̂ = n^-1_ ch∑_μ n_ ch,μF̂_μ, where F̂_μ is the free energy per chain (addressed to IMDS location μ) and n_ ch,μ is the mean number of chains in the wedge (i.e. its volume times ρ_0/N) and n_ ch=∑_μ n_ ch,μ is the total chain number.
The IMDS-addressed free energy per chain is F̂_μ = Ŝ_μ + Ĥ_μ and has two parts: a stretching free energy per chain,
Ŝ_μ = π^2/3/4λ^2 κ_ AĨ_ A,μ + κ_ BĨ_ B,μ/Ṽ_μ[(χ N)^1/3k_ B T] ,
and an enthalpy per chain,
Ĥ_μ = π^2/3/2λγÃ_μ/Ṽ_μ[(χ N)^1/3k_ B T] ,
where Ã_μ is the area of the IMDS surface patch, Ṽ_μ is the corresponding wedge's total volume, and Ĩ_ A,μ and Ĩ_ B,μ are the second moments of volume of each subdomain of the wedge.
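The per-wedge bookkeeping implied by these expressions is simple to write down; the Python sketch below (with illustrative array names, and with the overall prefactors of the equations above omitted, so the outputs are in arbitrary units) accumulates the chain-weighted averages that enter the total free energy. With the full prefactors and the equilibrated scale factor, the averages obey the relation Ĥ = 2Ŝ derived below.

    import numpy as np

    def wedge_free_energies(A, V, I_A, I_B, kappa_A, kappa_B, gamma=1.0):
        # A, V:     (M,) IMDS patch areas and total volumes of the M wedges
        # I_A, I_B: (M,) second moments of volume of the A and B parts of each wedge
        S_mu = (kappa_A * I_A + kappa_B * I_B) / V    # stretching cost per chain (up to prefactor)
        H_mu = gamma * A / V                          # interfacial enthalpy per chain (up to prefactor)
        w = V / V.sum()                               # chain weights, n_ch,mu proportional to wedge volume
        S_bar, H_bar = np.sum(w * S_mu), np.sum(w * H_mu)
        return S_mu, H_mu, S_bar, H_bar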
While the IMDS-addressed free energy involves the local geometry of each wedge, information about the full structure is encoded in the equilibrated value of the scale factor λ, given by Eq. <ref>.
Note that we may consider the free energy per chain as a function of size by rescaling its dimensions by a factor λ/λ^*, where λ^* denotes this equilibrated value, which leads to
F̂(λ/λ^*) = (λ^*/λ)Ĥ + (λ/λ^*)^2Ŝ ,
where Ŝ = n^-1_ ch∑_μ n_ ch,μŜ_μ is average entropy per chain and Ĥ = n^-1_ ch∑_μ n_ ch,μĤ_μ average enthalpy per chain.
Since equilibrium requires optimality with respect to size, in this case occurring for λ=λ^* by construction, it is straightforward to see from the minimization of eq. <ref> that the average enthalpy, entropic cost and total free energy per chain maintain the relationship
Ĥ = 2Ŝ = 2/3F̂ .
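Explicitly, writing x ≡λ/λ^* and minimizing F̂(x) = x^-1Ĥ + x^2 Ŝ, the condition dF̂/dx = -x^-2Ĥ + 2 x Ŝ = 0 evaluated at the equilibrium point x=1 gives Ĥ = 2Ŝ, and hence F̂ = Ĥ + Ŝ = 3Ŝ.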
Notice that this strict proportionality on the averages holds notwithstanding the spread of local entropic and enthalpic costs throughout the assembly, facilitated by the adjustment of the equilibrium dimensions.
According to λ^*∝ (Ĥ /Ŝ)^1/3, structures with particularly pronounced stretching or interfacial costs adjust their equilibrium dimensions accordingly to maintain the thermodynamically optimal balance between enthalpy and entropy in each morphology.
The full distribution of interfacial and stretching costs for each network phase, along with the lamellar and hexagonal cylinder phases are shown in Fig. <ref> for three values of elastic asymmetry along the points of Lam-Hex degeneracy.
These are 2D histograms in the stretching-enthalpy plane, with diagonal grey contours highlighting the value of the total free energy at each point (i.e. F̂_μ = Ŝ_μ + Ĥ_μ).
Since the Lam phase consists of a single repeated packing environment, its distribution consists of a single point, whereas the Hex phase occupies a thin band with nearly constant interfacial free energy (for ϵ = 1 and ϵ = 0.3), due to a nearly circular shape of the IMDS, and a variable stretching free energy due primarily to the variable stretching of the matrix domain.
Note that for the case of stiffer matrices (ϵ > 1) the IMDS for Hex becomes fairly faceted, leading to a more obvious tilt of the distribution in the stretching-enthalpy plane, as well as a larger horizontal spread that is due to the amplified effect of B-block chains stretching from the curved IMDS to the polygonal terminal boundary, which lies on the Voronoi cells generated by the cylindrical domains.
For the three network phases, we see a more prominent anti-correlation between local enthalpic and entropic free energy costs, with regions of relatively low stretching cost corresponding to relatively large enthalpy (i.e. area per chain) and vice versa.
The DG phase has the narrowest distribution of free energy per chain among the network phases, and thus the least variability in thermodynamic costs of different packing environments, whereas the DP phase has the broadest.
In SI Fig. S9, we also compare the individual histograms of A- and B-block stretching free energy.
Moreover, despite the variation in proportion of stretching to interfacial free energy costs, the free energy per chain of DG closely conforms to the F̂_μ = const. contours for ϵ≳ 1, indicating a general uniformity of the total free energy per chain of each packing environment.
Both DD and DP show notable tails of particularly low stretching and high enthalpy that lead to a much larger spread in the total free energy per chain (F̂_μ) in these networks than in DG.
In fact, the DG phase is even more uniform than the hexagonal cylinder phase in this regime of elastic asymmetry, which has a broad distribution of free energy per chain.
By comparison, the DD and DP phases have regions of large enthalpic cost where chain stretching is minimal.
These extreme enthalpy regions cross multiple F̂_μ = const. lines and keep the DD and DP phases from competing with the DG phase.
For ϵ = 0.3, the DG phase has larger variation in free energy per chain, but importantly has an excess of regions where the entropic and enthalpic costs fall below lamellar, promoting its stability for low ϵ.
Interestingly, in this same regime, the DD and DP phases develop regions of simultaneous large entropic and enthalpic cost.
These free energy distributions show that enthalpy plays a significant role in the variability of chain free energy in networks.
While there are broad variations in chain stretching costs, the largest contributions to the average free energy tend to correspond to the smallest stretching (i.e. upper left regions of the Ŝ_μ - Ĥ_μ plane), with exceptions when the tubular domain is stiffer than the matrix domain, which shows large enthalpy contributions for DD and DP at both large and small Ŝ_μ.
Additionally, we note that DG is unique among the networks in that its lowest enthalpy chains are not the maximally stretched chains.
Next, we consider the spatial distributions of these thermodynamic packing environments for DG, DD and DP structures.
§.§ Spatial maps of chain free energy
Fig. <ref> shows a complementary view of the free energy per chain distribution and its components, mapped onto corresponding regions of the IMDS for each network structure, focusing on the case of ϵ = 1 (see SI Figs. S10 and S11 for elastically asymmetric cases).
We find that the regions of maximal enthalpic cost are typically separated from regions of maximal entropic cost, which is consistent with their reciprocal relationship described above.
For DG, the regions of maximal enthalpic cost are located in the saddle-like “elbow” regions, whereas the regions of maximal stretch are along the flatter region, distributed around the 3-fold rotational symmetry axis, but notably away from the minimal enthalpy region centered above the 3-fold node.
Hence, in contrast to the heuristic view that chains reaching to the node center suffer the largest stretching, we observe that maximal stretch for DG is associated with regions of high tilt in the quasi-lamellar packing in the 3-fold plane of the node.
The distribution of free energy per chain shows that the enthalpic component dominates and the largest free energy per chain is in the elbow regions of the node.
By contrast, the DD and DP nodes have maximal enthalpy per chain halfway along the strut joining two nodal regions and maximal stretching entropy cost per chain on the monkey saddles of the IMDS, corresponding to chain trajectories that extend into the inner corners of the terminal webs at the node center (DP also has relatively large stretching costs in its elbow regions).
Due to the narrowing of the tubular portions of the IMDS, chains halfway along the strut are least stretched and thus, due to the incompressibility constraint, they occupy the largest area on the IMDS, leading to larger enthalpic costs.
Quite unlike DG, the net free energy for DD and DP is smallest in the elbow regions.
Also, the local maxima of total free energy is more complex, with large packing costs originating from both high entropic cost (on monkey saddles) as well as high enthalpic cost (on tubular struts).
While the normalized distributions shown in these maps depict high free energy costs of packing in the elbow regions of DG relative to other regions of the same morphology, it should be noted that, in absolute terms, packing in this region is comparable to the median value from chains in DD and the minimal value for chains in DP.
§ DISCUSSION
§.§ Medial strong-segregation picture of frustration in networks
We have explored packing frustration in network phases through the medial SST model of diblock melt assembly, in comparison to SCFT predictions at finite, but large χ N.
The mSST construction assumes chain trajectories to be straight-line paths based on a prescribed association map between two terminal boundaries.
Distinct from prior SST approaches that assumed the inner terminal boundaries to be 1D skeletal graphs of the networks, we explored the ansatz that the terminal surfaces are well-approximated by the medial surfaces of a class of generating surfaces whose symmetries are consistent with the network phase.
Essentially, this construction is based on the premise that the most efficient mode of chain packing in tubular network domains is to spread out termini in a geometrically optimal way, while maintaining co-linear trajectories in both blocks.
This construction of the terminal surfaces is motivated by the fact that the medial map provides a distance-minimizing way of mapping all points of the subdomain volumes onto the generating surface.
Assuming then that the generating surfaces roughly approximate the ultimate IMDS shapes, these maps would then provide minimal chain stretching within each subdomain.
However, as we mentioned in our discussion of tilt, the condition of local volume balance to either side of the IMDS generally requires some adjustment of this interface relative to the generating surface, so that the ultimate packing evaluated by SST is not strictly medial (i.e. chain paths, in general, are not normal to the IMDS).
Despite the “frustration" of medial packing by local volume balance, we find in general that the upper bound on the free energy provided by mSST is substantially smaller (by several %) than the SST calculations for all three cubic network phases based on the skeletal packing ansatz.
Additionally, we find free energy predictions that are overall consistent with SCFT predictions (albeit at fairly modest range of finite χ N ≤ 75).
Namely, the relative ratios of DG, DD and DP free energies predicted by mSST are reasonably comparable to SCFT.
Moreover, we find that DG enjoys thermodynamic stability between Hex and Lam for values of elastic asymmetry close to 1 and mSST captures the non-trivial, non-monotonic dependence of the free energy gap between DG and its competitor phases with elastic asymmetry.
More careful comparison of both the free energy trends and the detailed geometric features of the morphologies predicted by mSST and SCFT suggest a level of disagreement that is both dependent on the network phase as well as the chain parameters.
Notably, mSST appears to significantly depart from SCFT predictions for both DD and DP phases in the regime of ϵ≲ 1 when tubular (minority) blocks are relatively stiff.
Moreover, DD (for ϵ < 1) and particularly DP show obvious differences in IMDS shapes between mSST and SCFT.
These departures suggest that more complex packing motifs, such as kinked or bent chain trajectories, are needed to capture the full details of packing frustration and thermodynamics, at least for certain networks in certain parameter regimes.
Indeed, SCFT shows at least some measure of path bending at large χ N for all networks, but this is most significant for DD and DP and most prominent in the stiff minority regimes.
All together, we understand this comparison to suggest that chain packing in DG is “most medial” among the cubic competitors.
As such, the mSST model provides currently the most accurate and detailed picture of the underlying coupling between domain shapes, chain packing and thermodynamics that governs the formation of DG and its stability in diblocks, and to a large extent, its competition with sub-optimal DD and DP phases.
In this context we largely attribute the stability of DG among cubic networks to the basic geometry of its terminal boundaries.
In particular, the DG tubular medial surface, which approximates the terminal surface in the tubular subdomain, is a single twisting sheet without the singular features that plague the DD and DP tubular medial surfaces, namely corners.
These singular “interior corners” result in pronounced variations in the mean curvature of the IMDS of DD and DP, which increases the interfacial contribution to the free energy.
Moreover, the local volume balance constraint results in an IMDS that is generally tilted relative to the chains, the consequence of which is that the medial map does not, in fact, minimize the distance from the IMDS to the terminal surfaces.
The degree of tilt is larger for medial surfaces with singular features and is consequently largest for the DP phase.
This frustration of medial packing consequently leads to substantially larger stretching free energy.
In fact, we find that at finite segregation, chains will curve from straight paths, better optimizing the compromise between thermodynamics and the space filling constraint in the highly frustrated DP.
In this light, mSST suggests that chain packing in the interior nodal volumes of DD and DP are indeed problematic, tending to destabilize these structures in neat diblock systems.
This is consistent with predictions that DD and DP only become stable (if ever) in blend systems, presumably tailored to mitigate the costs of filling “hot spots,” which we discuss below.
On the other hand, the smooth terminal geometry of DG, as well as its stability in the ϵ <1 regime, actually suggests that chain packing in the nodal centers is not particularly frustrated, or at least not in qualitatively different ways than its close, non-network competitors.
That is, in contrast to some widely held notions, the geometry of chain packing in DG on both sides of the IMDS would appear to be favorable, facilitating a generically optimal compromise intermediate to columnar and lamellar packing.
The thermodynamic cost of chain packing is far from homogeneous, even in DG, as revealed by spatial distributions of the free energy per chain predicted by mSST.
We found that while each of the canonical network phases has regions where the entropic cost of stretching dominates over interfacial enthalpy, and vice versa, there are regions of pronounced free energy cost.
The lamellar-like regions at the three-fold symmetry axes of the DG phase are regions of particularly low free energy (see Fig. <ref>), possessing simultaneously low enthalpic and entropic contributions to the free energy.
The DD and DP phases, by comparison, have regions of pronounced enthalpic costs in regions of relatively low stretching free energy costs, elevating the average free energy and keeping either structure from competing with the DG.
In contrast, these regions of elevated enthalpic cost occur in the cylinder-like packing regions along the struts of the mSST models of DD and DP phases.
§.§ Localized hot spots of stretching
We conclude our discussion by exploiting the mSST predictions for networks to explore “hot spots” in the chain packing, which are often conceptually linked with notions of packing frustration in BCP and other surfactant systems<cit.>.
These are regions of particularly high free-energy density, representing the regions in a given morphology where monomers experience the highest entropic cost.
Heuristically, “hot spots” are regions that are expected to relax upon blending with some other “guest” molecules, for example nanoparticles<cit.> or homopolymers of the same chain chemistry as the block where the hot spot is located<cit.>, which have been known to promote the stability of typically unstable ordered phases in neat diblock melts in both experiments<cit.> and simulation studies<cit.>.
While we do not consider an extension of the mSST framework to account for the thermodynamics of such blended systems here, we nevertheless might consider the distributions of these hot spots as useful proxies for understanding the susceptibility of neat morphologies to the uptake of guests in blends.
Specifically, we analyze the volume fractions of A and B subdomains that account for the largest portions of the stretching energy in those brush-like regions of each network.
This information is encoded in the stretching contributions to the free energy: the contribution to the stretching free energy δ S_α(z) (for α∈{ A, B}) from chains in a small volume δ V a distance z from the IMDS is given by δ S_α(z) = (κ_α/2)z^2δ V.
Therefore, the stretching free energy per volume is
𝒮_α(z) ≡ (κ_α/2)z^2 = π^2/3/4λ^2κ_αz̃^2 [ρ/N(χ N)^1/3k_ BT] ,
where z̃ = z/D is the distance from the IMDS relative to the unit cell size.
Subdividing each wedge into small sub-wedge voxels, we determine a distribution of 𝒮_α over the entire nodal region of each network morphology, what has been dubbed the “mesoatomic” unit in <cit.>.
We then construct a cumulative distribution Ω_α(𝒮), consisting of the fraction Φ of the volume of subdomain α that has a stretching free energy cost greater than 𝒮; since 𝒮_α(z) → 0 as z approaches the IMDS and every other point in the subdomain has a positive stretching free energy density, Ω is normalized such that Ω(0) = 1.
Using the cumulative distribution, we then compute residual stretching energy ⟨𝒮_α⟩ of the α subdomain associated with the lowest 1-Φ fraction of that subdomain.
In other words, ⟨𝒮_α⟩ is the remaining stretching energy associated with “filling in” the upper Φ proportion of the α region and removing its stretching cost.
This residual stretching free energy ⟨𝒮_α⟩, as a function of Φ, is given by
⟨𝒮_α⟩(Φ) ≡∫_Φ^1 dΩ_α 𝒮_α ,
which is a function that reaches its maximum at Φ = 0, when the average is taken over the entire subdomain (i.e. the pure melt), and decays to 0 as Φ→ 1, corresponding to the case where the entire region is filled in.
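Operationally, Ω_α and ⟨𝒮_α⟩(Φ) amount to a sort-and-accumulate over the voxelized wedges; the short Python sketch below (with illustrative array names, not the actual analysis code) computes the residual stretching curve from samples of the stretching free energy density and the corresponding voxel volumes.

    import numpy as np

    def residual_stretching(S_density, dV):
        # S_density: (M,) stretching free energy density in M sub-wedge voxels
        # dV:        (M,) corresponding voxel volumes
        order = np.argsort(S_density)[::-1]      # most expensive voxels first
        S, v = S_density[order], dV[order]
        E_tot = np.sum(S * v)
        Phi = np.concatenate(([0.0], np.cumsum(v) / np.sum(v)))
        residual = np.concatenate(([E_tot], E_tot - np.cumsum(S * v))) / np.sum(v)
        return Phi, residual      # <S_alpha>(Phi): maximal at Phi = 0, decays to 0 at Phi = 1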
In Fig. <ref>A and B, we show this residual stretching free energy of each of the network structures at ϵ = 1, f ≈ 0.29 as a function of the “fill fraction” Φ for A and B subdomains, respectively.
The convergence of the residual stretching free energy for larger Φ reflects a uniformity of the packing environments of the three structures near the IMDS, whereas the differences as Φ→ 0 are due to the differences in the stretching thermodynamics near the terminal boundaries, where the chains have the largest stretch within the subdomain.
Inset graphics (right) show the regions occupied at 1% infill of the subdomain (Φ = 0.01), highlighting the maximal stretching free energy density regions of both A and B blocks, with the figures below illustrating isocontours of increasingly large Φ.
Notably, whereas the largest A-block stretching free energy density lies at the node of the DD and DP terminal surfaces and the corners around the node, the most expensive parts of the DG subdomain volume lie along the edges of the terminal surface, away from the central node.
This re-enforces the conclusion that the medial geometry of the DG morphology is uniquely optimal amongst the cubic network phases, as it supports an extended lamellar-like packing environment without singular folds or corners.
Additionally, we note that while the A-block stretching free energy is lowest for DG at Φ=0, the slopes of the stretching free energy with fill fraction are larger for DD and DP, suggesting that these structures are plausibly more susceptible to in-filling by guest molecules that might relax large fractions of their entropic cost.
Indeed, the curves for ⟨𝒮_A⟩ tend to converge around Φ≈ 0.2, consistent with the range of homopolymer blending at which DD (or DP) have been reported to gain equilibrium stability in diblock/homopolymer blend systems<cit.>.
Comparing this to Fig. <ref>B we note that the overall scale of stretching free energy density in B blocks is roughly a factor of 3 smaller, suggesting that “hot spots" of the matrix domains are somewhat “cooler” than in the A domains.
As might be anticipated, these are distributed around CMC-like surfaces that constitute the matrix terminal boundaries; however, for sufficiently small Φ, these maximal stretching regions become localized to patchy regions.
For DD and DP, maximal stretching hot spots concentrate over the monkey saddle positions, whereas for DG, maximal matrix stretching occurs away from these directions in bands that flank the elbow portions of the trihedral mesoatom.
Taken together, these hot spot distributions present a possible way to interpret the stability of distinct morphologies in blended systems.
Based on distinct features of the terminal boundaries, particularly the inner tubular subdomains, it is perhaps intuitive to imagine that the total volumes of hot spots which might be filled by guest molecules could increase with the total cost of packing frustration.
What is less clear is what controls the ability of distinct morphologies to “host” these guest molecules.
That is, in order to understand why DD can overtake DG when blended with A-block homopolymer, it is not only necessary to understand why the free energy of DD is lowered by hot spot filling, but it must also be understood why it is not comparably lowered in DG.
In other words, what makes DG (not to mention competitor Lam and Hex) a relatively bad host in those blends where DD or DP become the equilibrium phase?
Answering this question requires a model of how chain packing adapts progressively as hot spots are filled, collectively relaxing both the enthalpic and entropic contributions to the free energy over the entire morphology<cit.>.
Finally, we note from Fig. <ref> that hot spots cover and conform to these quasi-2D regions for Φ of a few percent.
Provided that guest molecules are highly localized to hot spots, say for the case of weakly-interacting high-molecular-weight homopolymers, imaging their 3D distributions (say by selective labeling of high-contrast guests<cit.>) at low blend fractions may provide an indirect means to observe the shapes of the terminal boundaries of the host morphologies.
§ CONCLUSIONS AND OUTLOOK
Using the medial strong segregation theory (mSST) formalism, we have explored the multiple facets of packing frustration, linking geometric features of packing to their associated thermodynamic costs.
In doing so, we have demonstrated that packing frustration presents not only as heightened chain stretching costs, but also as high interfacial costs, which are typically correlated with regions of low stretching.
In particular, we have shown that the terminal surface geometry of the DG morphology, as modeled by a set of medial surfaces, promotes stability over the competitor DD and DP network phases, uniquely taking advantage of packing environments that combine aspects of lamellar and cylindrical morphologies.
Nevertheless, a number of open questions remain regarding optimal packing in network phases.
We have provided evidence that chain bending or kinking can play a significant role in relaxing the thermodynamic costs of network structures.
The differences between IMDS morphologies predicted with SCFT calculations and our mSST calculations can be attributed to our straight-path ansatz for chain trajectories.
Indeed, SST calculations that appropriately capture IMDS faceting for cylindrical phases rely on a kinked-path ansatz; developing a similar kinked-path formalism that works in tandem with medial surfaces remains an open challenge.
Moreover, while we have provided a plausible prediction of the susceptibility of each network phase to the uptake of guest molecules, the incorporation of guests into mSST (and the resulting structural relaxations) remains a challenge.
The authors thank E. Thomas and B. Greenvall for stimulating discussions and valuable comments on this work. We also thank P. Olmsted for useful discussions about prior SST calculations. This research was
supported by the US Department of Energy (DOE), Office of Basic Energy Sciences, Division
of Materials Sciences and Engineering under award DE-SC0022229.
Medial SST and SCF calculations were performed on the UMass Cluster at the Massachusetts
Green High Performance Computing Center.
Additional details for the mSST calculation methods and analysis results for the variable morphology predictions from both mSST and SCFT models.
|
http://arxiv.org/abs/2306.03212v2
|
20230605194506
|
StabJGL: a stability approach to sparsity and similarity selection in multiple network reconstruction
|
[
"Camilla Lingjærde",
"Sylvia Richardson"
] |
stat.ME
|
[
"stat.ME"
] |
[
[
July 31, 2023
=================
In recent years, network models have gained prominence for their ability to capture complex associations. In statistical omics, networks can be used to model and study the functional relationships between genes, proteins, and other types of omics data. If a Gaussian graphical model is assumed, a gene association network can be determined from the non-zero entries of the inverse covariance matrix of the data. Due to the high-dimensional nature of such problems, integrative methods that leverage similarities between multiple graphical structures have become increasingly popular. The joint graphical lasso is a powerful tool for this purpose; however, the current AIC-based selection criterion used to tune the network sparsities and similarities leads to poor performance in high-dimensional settings.
We propose stabJGL, which equips the joint graphical lasso with a stable and accurate penalty parameter selection approach that combines the notion of model stability with likelihood-based similarity selection. The resulting method makes the powerful joint graphical lasso available for use in omics settings, and outperforms the standard joint graphical lasso, as well as state-of-the-art joint methods, in terms of all performance measures we consider. Applying stabJGL to proteomic data from a pan-cancer study, we demonstrate the potential for novel discoveries the method brings. A user-friendly R package for stabJGL with tutorials is available on Github: <https://github.com/Camiling/stabJGL>.
§ INTRODUCTION
Network models have in recent years gained great popularity in many areas. In statistical omics, networks can be used to decode aspects of unknown structures, and hence study the relationships between genes, proteins, and other types of omics data. In health data sciences, rich data sets are more and more frequently encountered, enabling the development of models integrating a variety of biological resources. In the high-dimensional setting commonly found in omics, sharing information between data sources with shared structures – which could be different tissues, conditions, patient subgroups, or different omics types – can give a valuable increase in statistical power while elucidating shared biological function. A key question is how to combine the different data sources into a single model.
If a Gaussian graphical model is assumed, a conditional (in)dependence network can be estimated by determining the non-zero entries of the inverse covariance (precision) matrix of the data. With its good performance in numerical studies, the graphical lasso of <cit.> is a state-of-the-art method for precision matrix estimation in the setting of Gaussian graphical models. The method combines L_1 regularization with maximum likelihood estimation. Other notable methods include the neighborhood selection approach of <cit.> and the graphical SCAD <cit.>. Notable Bayesian methods include the Bayesian graphical lasso <cit.>, Bayesian spike-and-slab approaches <cit.> and the graphical horseshoe <cit.>. If multiple related data sets are available, there are several ways to leverage common network structures. If focusing on one data type's network structure, data from other types can enhance inference via weighted graphical lasso methods <cit.>. However, to compare network structures across data sets, such as patient subgroups, a joint approach that leverages common information while preserving the differences can increase statistical power and provide interpretable insight.
In the area of multiple Gaussian graphical models, existing methods include the group extension of the graphical lasso to multiple networks of <cit.>, the Bayesian spike-and-slab joint graphical lasso <cit.> and the Markov random field approach of <cit.>. The widely used joint graphical lasso (JGL) of <cit.> extends the graphical lasso to a multiple network setting and provides a powerful tool for inferring graphs with common traits. It employs two different penalty functions – group (GGL) and fused (FGL) – with the latter recommended for most applications. From this point forward, any mention of the joint graphical lasso will imply the fused version, unless otherwise specified. The method needs tuning of two regularization parameters for controlling (i) the number of non-zero effects, and (ii) the similarity between networks, respectively. However, the default parameter selection routine based on the AIC <cit.> often results in severe over-selection in high-dimensional data, potentially impacting performance negatively <cit.>. In such settings, selection approaches based on model stability have demonstrated competitive performance <cit.>.
We propose a stable and accurate penalty parameter selection method for the joint graphical lasso, combining the model stability principle of <cit.> with likelihood-based selection for high-dimensional data <cit.>. The resulting method inherits the powerful traits of the joint graphical lasso while mitigating the risk of severe under- or over-selection of edges in high-dimensional settings. We provide an R package, (stable sparsity and similarity selection for the joint graphical lasso), which implements the method.
The paper is organized as follows. In Section <ref>, we first describe the Gaussian graphical model framework and the penalized log-likelihood problem we aim to solve. We then describe our proposed algorithm. In Section <ref>, we demonstrate the performance of our proposed method on simulated data and apply it to proteomic data from a pan-cancer study of hormonally responsive cancers. Finally, we highlight possible extensions in Section <ref>.
§ MATERIALS AND METHODS
§.§ Gaussian graphical models
In a gene network model, genes are represented by nodes and associations between them are represented by edges. Given measurable molecular units each corresponding to one gene (e.g., the encoded protein or mRNA), a network, or graph, can be constructed from their observed values. Consider n observed values of the multivariate random vector x = (X_1, …, X_p)^T of node attributes, with each entry corresponding to one of p nodes. If we assume multivariate Gaussian node attributes, with an n× p observation matrix X with i.i.d. rows x_1, …, x_n∼𝒩(0,Σ), a partial correlation network can be determined by estimating the inverse covariance matrix, or precision matrix, Θ=Σ^-1. Specifically, the partial correlation between nodes i and j, conditional upon the rest of the graph, is given by
ρ_ij| V\{i,j} = - θ_ij/√(θ_iiθ_jj),
where the θ_ij's are the entries of Θ and V is the set of all node pairs <cit.>. The partial correlations coincide with the conditional correlations in the Gaussian setting. Because correlation (resp. partial correlation) equal to zero is equivalent to independence (resp. conditional independence) for Gaussian variables, a conditional independence graph can thus be constructed by determining the non-zero entries of the precision matrix. To ensure invertibility, the precision matrix is also required to be positive definite, Θ≻ 0.
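For concreteness, the conditional independence graph can be read off an estimated precision matrix in a few lines; the following Python sketch is illustrative only (penalized estimators return exact zeros, so the tolerance below is merely a numerical safeguard).

    import numpy as np

    def partial_correlations(Theta):
        # partial correlation matrix from a precision matrix Theta
        d = np.sqrt(np.diag(Theta))
        rho = -Theta / np.outer(d, d)
        np.fill_diagonal(rho, 1.0)
        return rho

    def edge_set(Theta, tol=1e-8):
        # edges (i, j), i < j, of the conditional independence graph
        p = Theta.shape[0]
        return [(i, j) for i in range(p) for j in range(i + 1, p)
                if abs(Theta[i, j]) > tol]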
In high-dimensional settings, the sample covariance matrix S = 1/(n-1) X^T X is rarely of full rank and thus its inverse cannot be estimated directly. It is common to assume a sparse network, meaning that the number of edges in the edge set E is small relative to the number of potential edges in the graph (i.e., the sparsity measure 2| E|/(p^2-p) is small). Penalized methods such as the graphical lasso <cit.> are well established for sparse Gaussian graphical model estimation. When multiple (related) data sets are available, such as from different tissue types, much statistical power can be gained by sharing information across networks through a joint approach, rather than estimating each network separately.
§.§ Penalized log-likelihood problem
Assume a network inference problem with K groups. We let {Θ} = (Θ^(1),…, Θ^(K)) be the set of their (unknown) precision matrices, and assume that the set of ∑_k=1^K n_k observations are independent. We aim to solve the penalized log-likelihood problem <cit.>
{Θ̂} = arg max_{Θ^(k)≻ 0}{∑_k=1^K n_k [log(det(Θ^(k))) - tr(S^(k)Θ^(k))] - P({Θ})}
where S^(k) is the sample covariance matrix of group k and P(·) is a penalty function. In (<ref>), det(·) denotes the determinant and tr(·) denotes the trace. The joint graphical lasso employs the fused penalty function
P({Θ}) = λ_1∑_k=1^K∑_i≠ jabs(θ_ij^(k)) + λ_2 ∑_k<k'‖Θ^(k) - Θ^(k')‖_1
where λ_1 and λ_2 are positive penalty parameters, abs(·) denotes the absolute value function and ‖·‖_1 denotes the element-wise L_1 norm. This penalty applies L_1 penalties to each off-diagonal element of the K precision matrices as well as to the differences between corresponding elements of each pair of precision matrices. As for the graphical lasso, the parameter λ_1 controls the sparsity. The similarity parameter λ_2 controls the degree to which the K precision matrices are forced towards each other, encouraging not only similar network structures but also similar precision matrix entries. The current penalty parameter selection approach for λ_1 and λ_2 is based on the AIC <cit.>. While suitable for determining network similarities, likelihood-based selection criteria can lead to severe under- or over-selection and thus poor performance in high-dimensional settings <cit.>.
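To fix ideas, the fused penalty can be evaluated directly from a list of K precision matrices, as in the following illustrative Python sketch (a minimal sketch, not taken from any package):

    import numpy as np

    def fused_penalty(Thetas, lam1, lam2):
        # fused JGL penalty: L1 on off-diagonal entries plus L1 on pairwise differences
        K = len(Thetas)
        off_diag = sum(np.abs(T).sum() - np.abs(np.diag(T)).sum() for T in Thetas)
        fused = sum(np.abs(Thetas[k] - Thetas[l]).sum()
                    for k in range(K) for l in range(k + 1, K))
        return lam1 * off_diag + lam2 * fused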
§.§ The stabJGL algorithm
To improve the performance of the joint graphical lasso with the fused penalty for omics applications and other high-dimensional problems, we propose the stabJGL algorithm for stable sparsity and similarity selection in multiple network reconstruction. Below we outline the algorithm, where we first select the sparsity parameter λ_1 in the fused penalty (<ref>) based on the notion of model stability, and then the similarity parameter λ_2 based on model likelihood. StabJGL jointly estimates multiple networks by leveraging their common information, and gives a basis for deeper exploration of their differences, as shown in Figure <ref>.
§.§.§ Selecting λ_1
We select λ_1 by extending the framework introduced by <cit.> in their Stability Approach to regularization Criterion (StARS) to a multiple network setting. The aim is to select the least amount of penalization that makes graphs sparse as well as reproducible under random subsampling. This is done by drawing many random subsamples from each of the K data types and using them to construct joint graphical lasso graphs over a range of λ_1 values. The smallest parameter value for which a given graph estimation variability measure does not surpass a specified threshold is then selected. We use a measure of edge assignment instability across subsamples to quantify the variability.
Specifically, we consider a grid of λ_1 values in a suitable interval, i.e., (0,1] and keep the similarity parameter λ_2 fixed to some small value such as 0.01 in the first instance. For η=1,…,N_sample, we draw a random subsample from each group k's set of n_k observations without replacement, each of size b_k < n_k. <cit.> show that in a single network setting, b_k = ⌊ 10 √(n_k)⌋ maintains theoretical properties for containing the true graph with high probability as well as high empirical performance, and this is the value we use. For each value of λ_1 to consider, we next construct the corresponding set of joint graphical lasso graphs {G_(k)^η(λ_1)}_k=1^K from these K sets of subsamples, using the fused penalty (<ref>).
The following is then done for each value of λ_1 we consider. For each group k=1,…,K and all possible node pairs (i, j) we estimate the probability of an edge between the nodes over the N_sample inferred sets of graphs
ψ_ij^(k)(λ_1) = 1/N_sample∑_η=1^N_sample1[(i,j) ∈ G^η_(k)(λ_1)],
where 1[·] is the indicator function. Using this estimated probability, we find
ξ_ij^(k)(λ_1) =2 ψ_ij^(k)(λ_1)(1-ψ_ij^(k)(λ_1)),
which is an estimate of two times the variance of the Bernoulli indicator of the edge (i,j) in group k. It lies in [0,0.5] and can be regarded as an estimate of the fraction of times two inferred graphs for group k found with the joint graphical lasso with the given λ_1 value will disagree on the presence of the edge (i,j). Due to the L_1 penalty in (<ref>), the number of inferred edges will decrease as λ_1 is increased. For a given λ_1, ξ_ij^(k)(λ_1) can be regarded as a measure of the variability of the edge (i,j) in group k across subsamples, and the total variability of graph k can be measured by averaging over all edges, yielding the estimate
D_(k)(λ_1) = 2/p(p-1)∑_i<jξ_ij^(k)(λ_1).
For each value of λ_1, the total variability of the whole set of graphs found by the joint graphical lasso is then found by averaging the variability over all K networks
D(λ_1) = 1/K∑_k=1^KD_(k)(λ_1).
For sufficiently large λ_1, all edges are excluded from the model and so the variability D(λ_1) will be 0. The variability will in general increase as the penalty λ_1 decreases; however, for small enough λ_1 the graphs will become so dense that the variability starts to decrease again. As sparse network inference is the aim, we therefore monotonize the variability function from the sparse side, letting D̅(λ_1) = sup_t ≥λ_1D(t). Finally, for a given variability threshold β_1, the optimal penalty is chosen as the smallest value whose monotonized variability does not exceed the threshold, λ_1 = inf{λ_1 : D̅(λ_1) ≤β_1 }. As opposed to λ_1, β_1 is an interpretable quantity, and we propose a default threshold of β_1=0.1 as suggested by <cit.> for the original StARS algorithm, which reflects an acceptance of 10% variability in the edge assignments.
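A compact Python sketch of this selection step is given below; it is purely illustrative (the released stabJGL implementation is in R), and fit_jgl is a hypothetical placeholder for a fused joint graphical lasso solver returning K binary adjacency matrices for the given penalties.

    import numpy as np

    def select_lambda1(data, lam1_grid, lam2=0.01, n_sub=20, beta1=0.1, seed=None):
        # Stability selection of the sparsity penalty lambda_1.
        # data: list of K (n_k, p) observation matrices.
        # fit_jgl(subsamples, lam1, lam2): hypothetical fused JGL fit -> K adjacency matrices.
        rng = np.random.default_rng(seed)
        lam1_grid = np.sort(np.asarray(lam1_grid, dtype=float))
        p = data[0].shape[1]
        iu = np.triu_indices(p, 1)
        D = []
        for lam1 in lam1_grid:
            psi = [np.zeros((p, p)) for _ in data]        # edge selection frequencies
            for _ in range(n_sub):
                subs = [X[rng.choice(len(X), min(len(X) - 1, int(10 * np.sqrt(len(X)))),
                                     replace=False)] for X in data]
                for k, A in enumerate(fit_jgl(subs, lam1, lam2)):
                    psi[k] += A / n_sub
            xi = [2.0 * f * (1.0 - f) for f in psi]       # per-edge instability
            D.append(np.mean([x[iu].mean() for x in xi])) # D(lambda_1), averaged over groups
        D_bar = np.maximum.accumulate(np.asarray(D)[::-1])[::-1]   # sup over t >= lambda_1
        ok = lam1_grid[D_bar <= beta1]
        return ok.min() if ok.size else lam1_grid.max()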
§.§.§ Selecting λ_2
After λ_1 has been selected, we select λ_2 with a multiple-network version of the extended BIC (eBIC or BIC_γ) of <cit.>. The eBIC is an extension of the Bayesian Information Criterion of <cit.>, where the prior is reformulated to account for high-dimensional graphical settings. We propose an adaptation of the eBIC to a multiple-network setting,
BIC_γ(λ_1,λ_2) = ∑_k=1^K [ n_k tr(S^(k)Θ^(k)_λ_1,λ_2) - n_k log(det(Θ^(k)_λ_1,λ_2)) + | E_k |log n_k + 4 | E_k |γlog p],
where Θ^(k)_λ_1,λ_2 is the estimated precision matrix of network k obtained with the penalty parameters λ_1 and λ_2, and | E_k| is the size of the corresponding edge set. A grid of λ_2 values is considered, with λ_1 fixed to the value selected in the previous step. The value of λ_2 that minimizes (<ref>) is selected.
Like for the standard eBIC, the additional edge penalty parameter γ∈ [0,1] must be chosen. However, since we are using the eBIC for similarity selection rather than sparsity selection, the choice of γ is not as important because we are comparing graphs with the same value of λ_1 and hence similar levels of sparsity. We typically use γ=0, which corresponds to the ordinary BIC, for most applications. Our implementation includes the eBIC generalization to give the user the option of additional penalization in extremely high-dimensional cases.
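The similarity-selection step can be sketched in the same illustrative style, with fit_jgl_precision again a hypothetical placeholder returning the K estimated precision matrices (the actual implementation is the R package):

    import numpy as np

    def select_lambda2(data, lam1, lam2_grid, gamma=0.0):
        # eBIC selection of the similarity penalty lambda_2, with lambda_1 fixed.
        S = [np.cov(X, rowvar=False) for X in data]
        n = [len(X) for X in data]
        p = data[0].shape[1]
        best, best_score = None, np.inf
        for lam2 in lam2_grid:
            score = 0.0
            for k, Theta in enumerate(fit_jgl_precision(data, lam1, lam2)):
                E_k = np.count_nonzero(np.triu(Theta, 1))       # number of selected edges
                score += (n[k] * np.trace(S[k] @ Theta)
                          - n[k] * np.linalg.slogdet(Theta)[1]
                          + E_k * np.log(n[k]) + 4.0 * E_k * gamma * np.log(p))
            if score < best_score:
                best, best_score = lam2, score
        return best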
§.§.§ Algorithm
The full stabJGL algorithm is given in Algorithm <ref>. JGL(·) indicates that the joint graphical lasso function with the fused penalty is applied. The output of the JGL function can either be a set of graphs, a set of precision matrices or an edge set, depending on what is required in Algorithm <ref>.
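Putting the two sketches above together, the overall two-step selection reads as follows (still purely illustrative, reusing the hypothetical fit_jgl_precision placeholder from the previous sketches):

    def stabjgl_select(data, lam1_grid, lam2_grid, beta1=0.1, gamma=0.0):
        # two-step selection: stability for lambda_1, then eBIC for lambda_2
        lam1 = select_lambda1(data, lam1_grid, beta1=beta1)
        lam2 = select_lambda2(data, lam1, lam2_grid, gamma=gamma)
        return lam1, lam2, fit_jgl_precision(data, lam1, lam2)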
§.§.§ Implementation details
StabJGL is implemented in R, and available as an R package at <https://github.com/Camiling/stabJGL>. The subsampling routine is implemented so it can be done in parallel.
The joint graphical lasso fittings are done as in <cit.>, using an ADMM (Alternating Direction Method of Multipliers) algorithm <cit.> for general penalty functions to solve the penalized log-likelihood problem (<ref>). By default, 20 subsamples are used and we evaluate 20 values each of λ_1∈ [0.01,1] and λ_2 ∈ [0,0.1]. As in StARS, we use a subsample size of ⌊ 10√(n_k)⌋ for group k=1,…,K <cit.>. The additional penalty parameter γ in the eBIC for similarity selection is set to 0 by default, corresponding to the standard BIC. We found this value to be suitable in most applications but leave the option to increase the penalization.
We employ a default variability threshold of β_1=0.1.
§ RESULTS
§.§ Simulated data
We first assess the performance of stabJGL on simulated data. We compare the network reconstruction ability of stabJGL to that of state-of-the-art methods, including the joint graphical lasso with the fused penalty (FGL) and group penalty (GGL) with penalty parameters selected with the default AIC-based criterion <cit.>. To assess the performance of another selection criterion specifically designed for high-dimensional graph selection, we also consider FGL with penalty parameters tuned by the extended BIC for multiple graphs (<ref>) with a moderate value of γ=0.2 <cit.>.
We further include the Bayesian spike-and-slab joint graphical lasso (SSJGL) of <cit.>, as well as the graphical lasso (Glasso) of <cit.> tuned by StARS <cit.>. The latter estimates each network separately. We generate data that closely resembles our omics application of interest, featuring partial correlations between 0.1 and 0.2 in absolute value, while also exhibiting the scale-free property - a typical assumption for omics data where the degree distribution (i.e., the distribution of the number of edges that are connected to the nodes) adheres to a power-law distribution <cit.>. In the main simulation scenario, we simulate K=3 networks with p=100 nodes, manipulating the degree of similarity in their “true” graphical structures to assess the performance of the method over a wide range of scenarios. We maintain a sparsity of 0.02 across all networks and generate data sets from the corresponding multivariate Gaussian distributions with n_1=150, n_2=200 and n_3=300 observations. We then apply different network reconstruction techniques to determine the networks from the data. For FGL and GGL, the two penalty parameters are chosen in a sequential fashion with the default AIC-based criterion proposed by <cit.>, with 20 values of λ_1∈[0.01,1] and λ_2∈[0,0.1] respectively being evaluated. We consider the eBIC criterion on the same grid of values for FGL. We consider the same set of λ_1 and λ_2 values in the stabJGL algorithm and let γ=0 in the eBIC criterion for similarity selection. For stabJGL and the graphical lasso tuned by StARS, we use a variability threshold of 0.1 and use 20 subsamples. For the Bayesian spike-and-slab joint graphical lasso all parameter specifications are as suggested by <cit.>. In addition to the above setup, we consider additional settings with K∈{2, 4} graphs and p=100 nodes. We only show a summarizing plot of these additional results, but the full tables for these simulations, as well as from additional scenarios with other values of K and p, are given in the Supplement. We also investigate the effect of the variability threshold β_1 in stabJGL on the results in a setting with p=100 nodes and K=2 networks. Finally, to compare the scalability of the respective methods we consider the time needed to infer networks for various p and K. Further details and code for the simulation study can be found at <https://github.com/Camiling/stabJGL_simulations>.
Estimation accuracy is assessed with the precision (positive predictive value) and the recall (sensitivity). The precision gives the fraction of predicted edges that were correct, while the recall is the fraction of edges in the true graph that were identified by the inference. Because the sparsity of estimated networks will vary between methods, the precision-recall trade-off should be taken into consideration. In general, the recall will increase with the number of selected edges while the precision will decrease. Since sparsity selection is a main feature of our proposed method, we do not consider threshold-free comparison metrics such as the AUC. We therefore put emphasis on the following characteristics in our comparative simulation study: (i) suitable sparsity level selection, (ii) utilization of common information at any level of network similarity, i.e., inference improves with increased network similarity, and (iii) a suitable precision-recall trade-off that does not overly favour either measure.
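Both metrics are computed edge-wise from the estimated and true adjacency matrices; a small Python helper of the following form (ours, for illustration) suffices.

import numpy as np

def precision_recall(adj_est, adj_true):
    """Edge-wise precision and recall from two symmetric 0/1 adjacency matrices."""
    p = adj_true.shape[0]
    iu = np.triu_indices(p, k=1)                 # count each potential edge once
    est, true = adj_est[iu].astype(bool), adj_true[iu].astype(bool)
    tp = np.sum(est & true)
    precision = tp / max(est.sum(), 1)           # fraction of selected edges that are correct
    recall = tp / max(true.sum(), 1)             # fraction of true edges that are recovered
    sparsity = est.mean()
    return precision, recall, sparsity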
§.§ Simulation results
The results are summarized in Table <ref>. First, we observe that the fused and group joint graphical lasso with the default AIC-based penalty parameter selection strongly over-select edges in all cases. This leads to high recall, but very low precision. Second, they do not appear to sufficiently utilize network similarities; the performance of the two methods, particularly GGL, differs little between completely unrelated and identical networks. Notably, in all cases the selected value of λ_2 is smaller for FGL and GGL tuned by AIC than it is for stabJGL. Consequently, similarity is not sufficiently encouraged even in settings where the networks are identical. The AIC criterion does not seem to provide sufficient penalization to encourage suitable sparsity and similarity. On the other hand, we observe that the alternative eBIC criterion gives extremely sparse FGL estimates, resulting in high precision but very low recall. In half of the cases, it selects an empty graph, i.e., no edges. Although the extended BIC is developed specifically for graphical model selection, likelihood-based criteria for sparsity selection tend to perform poorly in high-dimensional settings and risk both severe under- and over-selection <cit.>. This issue is avoided in the stabJGL algorithm as the eBIC only is used to select similarity and not sparsity.
The Bayesian spike-and-slab joint graphical lasso tends to select very few edges, leading to high precision but low recall. Its performance deteriorates drastically as the network differences increase, leading to extremely low recall. This implies a lack of flexibility to adapt to varying network similarity levels, as has previously been observed <cit.>. Out of all the joint methods, stabJGL gives the most accurate sparsity estimate. This ensures that we neither get very low precision like FGL and GGL tuned by AIC, nor very low recall like SSJGL and FGL tuned by eBIC. StabJGL also appears to adapt well to the similarity between networks, with the prediction accuracy increasing with the number of shared edges. As a result, the method either outperforms the graphical lasso tuned by StARS for highly similar networks or performs comparably to it for unrelated networks. The similar performance for unrelated networks can be explained by the fact that the sparsity controlling penalty parameter of both methods are tuned with a stability-based approach. The results suggest that stabJGL can be used agnostically in settings where there is no prior knowledge about the level of network similarity and does not run any risk of decreased accuracy should the networks have nothing in common.
The results for K=2 and K=4 networks are summarized in Figure <ref>. The results for FGL tuned with eBIC are not shown as it did not select any edges in any of the settings. The findings from the K=3 case are echoed here, with FGL and GGL having high recall but very low precision and particularly GGL exhibiting a lack of adaption to increased network similarity. On the contrary, SSJGL selects very few edges and thus has high precision but very low recall, with its performance quickly deteriorating for less similar networks. StabJGL achieves a balanced precision-recall trade-off and adapts well to the level of network similarity. Consequently, stabJGL performs comparably or better than the graphical lasso depending on the degree of similarity between the networks.
A key question is whether stabJGL can achieve as high precision as the methods that give sparser networks (i.e., SSJGL) by using a lower variability threshold. Similarly, we want to see if stabJGL can achieve as high recall as the methods that infer more edges (i.e., FGL and GGL). To investigate this, we consider the same setting as in Figure <ref> with K=2 networks, focusing specifically on the case where the two networks have 20% edge agreement. Table <ref> compares the performance of stabJGL for different values of the variability threshold β_1 to the other methods. For β_1=0.01, stabJGL gives very sparse estimates and obtains comparable precision and recall to SSJGL. For the higher threshold β_1=0.2, stabJGL selects a large number of edges and obtains comparable recall to FGL and GGL while retaining a higher precision level. A complete comparison for all levels of edge agreement is given in the Supplement (Figure S3), where we similarly find that by varying the variability threshold β_1 we can obtain at least as high precision and/or recall as the other methods at any level of similarity. The fact that stabJGL allows the user to obtain higher or lower sparsity by changing the variability threshold means that the method can be adapted to reflect the priorities of the user (i.e., concern for false positives versus false negatives). For most applications, a middle-ground value such as 0.1 yields a good balance between false positives and false negatives as demonstrated in the simulations.
§.§ Runtime profiling
Figure <ref> shows the CPU time used to jointly infer networks for K ∈{2, 3, 4} networks and various numbers of nodes p, with n∈{ 100,150} observations, for the joint graphical lasso with the fused penalty (FGL) with penalty parameters tuned with the AIC and stabJGL with the same parameter specifications as in the previously described simulations. Due to an efficient parallelized implementation, stabJGL has an almost identical run time to FGL when the same number of λ_1 and λ_2 values are considered. Thus, the increased estimation accuracy of stabJGL does not come at a computational cost. It is important to note that due to the generalized fused lasso problem having a closed-form solution in the special case of K=2 <cit.>, stabJGL is substantially faster for only two networks than for K>2. As stabJGL uses the fused penalty this comparison is the most relevant, but a run time comparison of all methods considered in our simulation study can be found in the Supplement (Figure S1). In the Supplement, we also demonstrate that stabJGL can be applied to problems with p>1,000 nodes and K>2 networks within reasonable time (Figure S2).
§.§ Pan-cancer data
We perform a proteomic network analysis of Reverse Phase Protein Array (RPPA) data from The Cancer Genome Atlas (TCGA) across different pan-Cancer tumor types <cit.>. In a large proteomic pan-Cancer study of 11 TCGA tumor types, <cit.> identified a major tumor super cluster consisting of hormonally responsive “women’s cancers” (Luminal breast cancer, ovarian cystadenocarcinoma, and uterine corpus endometrial carcinoma). Our objective is to map the proteomic network structure of the respective tumor types, so that we can get a better grasp of the common mechanisms at play in the hormonally responsive tumors, as well as to highlight the differences between them.
We consider mature RPPA data from Luminal breast cancer (BRCA, n=273), high-grade serous ovarian cystadenocarcinoma (OVCA, n=412), and uterine corpus endometrial carcinoma (UCEC, n=404). All data is downloaded from the UCSC Xena Browser <cit.>. The data is measured with p=131 high-quality antibodies that target (phospho)-proteins. To alleviate batch effects, the RPPA data is normalized with replicate-based normalization <cit.>. We use stabJGL to jointly estimate the proteomic networks of the respective tumor types and interpret the results and their implications. We compare the output with that obtained with the fused joint graphical lasso (FGL) of <cit.> with the default penalty parameter tuning with AIC as described in Subsection <ref>. Further details and code for the analysis are given at <https://github.com/Camiling/stabJGL_analysis>.
§.§ Pan-cancer analysis results
§.§.§ Estimated proteomic networks
The resulting stabJGL proteomic networks of the three tumor types are shown in Figure <ref>, where we observe plenty of common edges as well as network-specific ones. The sparsity as well as the selected penalty parameter values in the resulting stabJGL and FGL networks are shown in Table <ref>. The tendency of FGL tuned by the AIC to over-select edges, as observed in the simulations, is consistent with the findings in this context. With more than two thirds of all potential edges being determined as present by FGL, the results are challenging to interpret and derive meaningful conclusions from. From a biological standpoint, we would not expect a proteomic network to be this saturated in terms of associations due to the expected scale-free property of the degree distribution <cit.>. While the degree distributions of the sparse stabJGL networks all follow a power-law with many low-degree nodes and fewer high-degree ones (hubs), an expected trait for omics data <cit.>, the degree distributions of the FGL networks do not.
The full degree distributions are shown in the Supplement (Figure S4).
In terms of penalty parameters, we see that just like for the simulated data the AIC selects very small penalty parameters for FGL, resulting in little sparsity and similarity encouragement. Given the findings of <cit.> about the presence of a super cluster consisting of the three hormonally responsive cancer types, it is not unreasonable to expect at least some proteomic network similarity to be encouraged by a joint method. This is achieved by stabJGL, which selects a large enough value of λ_2 to encourage similarity. A comparison of the pairwise similarities of the proteomic networks is given in Figure <ref>, where similarity is measured by Matthew's Correlation Coefficient (MCC), a discretized Pearson correlation coefficient that can be used to quantify pairwise network similarities (<cit.>). StabJGL finds the networks of the three tumor types to be more similar than FGL, in accordance with the findings of <cit.>.
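For completeness, the MCC between two estimated networks can be computed directly from their adjacency matrices; the following short Python function (our own helper, not part of the stabJGL package) treats each potential edge as one binary prediction.

import numpy as np

def network_mcc(adj1, adj2):
    """Matthews correlation between the edge indicator vectors of two graphs on the same nodes."""
    p = adj1.shape[0]
    iu = np.triu_indices(p, k=1)
    a, b = adj1[iu].astype(bool), adj2[iu].astype(bool)
    tp = np.sum(a & b); tn = np.sum(~a & ~b)
    fp = np.sum(~a & b); fn = np.sum(a & ~b)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0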
§.§.§ Edge validation in STRING
To compare the level of evidence supporting the edges detected by stabJGL and FGL tuned by the AIC in the literature, we conduct edge validation using the STRING database of known and predicted protein-protein interactions <cit.>. To ensure the reliability of the validation process, we only consider the experimentally validated interactions in STRING as evidence, with default confidence score threshold ≥ 0.4. The fraction of edges with supporting evidence in the STRING database is computed for the respective stabJGL and FGL networks and shown in Table <ref>. The analysis reveals that for all three tumor types investigated, a higher proportion of the edges detected by stabJGL had supporting evidence in the STRING database compared to those identified by FGL.
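A sketch of this validation step in Python is given below. The space-separated file layout and the column names ('protein1', 'protein2', 'experimental'), as well as the 0-1000 score scale (so that a 0.4 confidence threshold becomes 400), refer to the STRING bulk-download link files and should be adapted if a different export is used; id_map denotes a user-supplied mapping from antibody names to STRING protein identifiers.

import pandas as pd

def string_support_fraction(edges, links_file, id_map, min_experimental=400):
    """Fraction of estimated edges with experimentally supported interactions in a
    STRING links file (assumed space-separated with an 'experimental' score column)."""
    links = pd.read_csv(links_file, sep=" ")
    links = links[links["experimental"] >= min_experimental]
    supported = {frozenset((a, b)) for a, b in zip(links["protein1"], links["protein2"])}
    hits = 0
    for u, v in edges:                                  # edges as antibody/protein name pairs
        pu, pv = id_map.get(u), id_map.get(v)           # map antibody names to STRING identifiers
        if pu and pv and frozenset((pu, pv)) in supported:
            hits += 1
    return hits / max(len(edges), 1)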
§.§.§ Findings consistent with literature
StabJGL successfully identifies protein-protein interactions known from literature. To highlight the findings of the proposed methodology, we only discuss edges and central proteins identified by stabJGL but not FGL. One example is the edge between activated (S345-phosphorylated) Checkpoint kinase 1 (Chk1) and DNA repair protein RAD51 homolog 1 (Rad51) in ovarian and breast cancer. The complex between the tumor suppressor BRCA2, which manifests predominantly in ovarian and breast cancer, and Rad51, is mediated by the DNA damage checkpoint Chk1 through Rad51 phosphorylation <cit.>. It is also reassuring that stabJGL identifies many relevant tumor type-specific proteins as hubs in the relevant tumor type only, such as mammalian target of rapamycin (mTOR), Tuberous Sclerosis Complex 2 (Tuberin) and Ribosomal protein S6 in BRCA, all of which are involved or up/downstream of the PI3K/AKT/mTOR pathway known to frequently be deregulated in Luminal breast cancer <cit.>. Lists of the top hubs in the respective stabJGL and FGL networks of the different tumor types, and their node degree, are given in the Supplement (Tables S5 and S6).
StabJGL also captures edges that we expect to be present in all three tumor types, such as the known interaction between the transcription factor Forkhead box O3 (FOXO3a) and 14-3-3-epsilon which facilitates cancer cell proliferation <cit.>. This common interaction is documented in the STRING database. Figure <ref> shows the network structure identified by stabJGL that is common to all three tumor types. Central proteins in this common network structure include Oncoprotein 18 (Stathmin), which is known to be relevant in all three hormonally responsive cancers due to its role in the regulation of cell growth and motility <cit.>.
§.§.§ Potential candidate hubs
The recovery of documented links in the protein networks estimated by stabJGL highlights its capability to detect numerous relevant proteins and interactions. The potential for new discoveries is, however, an important aspect of stabJGL, as suggested by its good performance on simulated data. For example, stabJGL identifies phosphorylated epidermal growth factor receptor (EGFR) as a central hub protein in all three tumor types. While activated EGFR is known to be relevant in ovarian cancer <cit.>, its role in uterine corpus endometrial carcinoma and Luminal breast cancer is not yet clarified. Our findings suggest it could be relevant in all three hormonally responsive tumor types. Further, Platelet endothelial cell adhesion molecule (CD31) is found to be a central protein in UCEC only. The protein is important for angiogenesis and has been implicated in other tumor types such as haemangioma <cit.>. Its prominence in the proteomic UCEC network suggests it may play a crucial role in this tumor type as well. Overall, these results showcase how stabJGL can aid in generating hypotheses by identifying central proteins and associations.
§ DISCUSSION
Suitable sparsity and similarity selection is key for capturing and studying multiple related biological networks. We have proposed the stabJGL algorithm, which determines the penalty parameters in the fused graphical lasso for multiple networks based on the principle of model stability. StabJGL demonstrably avoids the under- or over-selection of edges observed in state-of-the-art selection methods based on information criteria,
and succeeds at leveraging network similarities to a suitable degree.
Consequently, the method can be employed in situations where the actual degree of similarity is uncertain, resulting in marked benefits with minimal risks associated with its use. StabJGL offers a fast parallelized implementation, particularly for K=2 networks as a closed-form solution exists. We successfully apply the method to problems with p>1,000 nodes and K>2 networks.
With our novel approach, we can identify both common and distinct mechanisms in the proteomic networks of different types of hormonally responsive women's cancers. The results obtained with stabJGL are in line with known biology and complement those of <cit.> by offering additional understanding of the underlying mechanisms in action. By recognizing various proteins as highly critical in the proteomic networks, stabJGL suggests their possible involvement in driving the diseases. The method both identifies proteins that are central in all three hormonally responsive cancers (e.g., phosphorylated EGFR) and proteins of tumor-specific relevance (e.g., CD31 in UCEC).
Future extensions of the method can include alternative measures of variability, such as the entropy (see, e.g., <cit.>). Further, while the method is formulated specifically for the joint graphical lasso with the fused penalty, it can in principle be used for any joint network approach requiring the tuning of sparsity- and similarity-controlling parameters. One potential method of application is JCGL <cit.>, which is based on a group lasso penalty and currently fixes the penalty parameters according to theoretical results.
To conclude, stabJGL provides a reliable approach to joint network inference of omics data. The output can provide a better understanding of both common and data type-specific mechanisms, which can be used for hypothesis generation regarding potential therapeutic targets.
§ SOFTWARE
A user-friendly R package for stabJGL with tutorials is available on Github: <https://github.com/Camiling/stabJGL>. R code for the simulations and data analyses in this paper is available at <https://github.com/Camiling/stabJGL_simulations> and <https://github.com/Camiling/stabJGL_analysis>.
§ FUNDING
This research is funded by the UK Medical Research Council programme MRC MC UU 00002/10 (C.L. and S.R.) and Aker Scholarship (C.L.).
§ APPENDIX
§ RUNTIME PROFILING
Figure <ref> compares the time used to infer K ∈{2, 4} networks with various numbers of nodes p, for the different network reconstruction methods. We only consider the AIC selection for FGL as the eBIC considers the same grid of values and hence has identical running time. All methods are run with the same parameter specifications as in the main simulation study. The simulated networks are set to have 50% of their edges in common, generated with the same approach as in the main simulation study. As discussed by <cit.>, the group joint graphical lasso is faster than its fused counterpart. The Bayesian spike-and-slab joint graphical lasso is substantially slower than the other methods, taking around ten times longer than the fused joint graphical lasso and stabJGL.
Figure <ref> shows the time used by stabJGL to infer K ∈{2, 3} networks with various numbers of nodes p and 50% of their edges in common. We see that for K=2 networks, inference for p=1,400 nodes is feasible within half an hour, while for K=3 inference for p=1,000 nodes is feasible within about eight hours. As discussed by <cit.>, there is an explicit solution to the fused joint graphical lasso problem for K=2 and hence inference is much faster for stabJGL as well in that case.
§ ADDITIONAL SIMULATION SCENARIOS
Additional simulation studies are conducted to assess a wider range of scenarios and compare the performance of the graphical lasso (Glasso), the fused joint graphical lasso (FGL) and the group joint graphical lasso (GGL) tuned by AIC, the fused joint graphical lasso tuned by eBIC, the Bayesian spike-and-slab joint graphical lasso (SSJGL) and stabJGL. Table <ref> shows the network reconstruction performance of the methods in a K=3 network setting with p=200 nodes and various similarity of the true graph structures, averaged over N=100 simulations. Similarly, Table <ref> shows the results for a K=4 setting with p=100 nodes. In Table <ref>, the results are shown for a K=2 network setting with p=100 nodes. Finally, the results from a K=2 network setting with p=300 nodes are shown in Table <ref>. In the latter case, due to the longer run time of SSJGL as demonstrated in Section <ref>, this method is omitted to make the simulation study feasible within reasonable time (<48 hours).
The results from the additional simulations are in line with those from the main simulation study; stabJGL succeeds at capturing both the sparsity level and similarity between the networks to a better degree than FGL and GGL, while either outperforming the standard graphical lasso for highly similar networks or getting comparable results for unrelated networks. FGL with the alternative eBIC selection mostly selects empty graphs. Finally, SSJGL selects very few edges, leading to high precision but very low recall in all cases.
Performance of the different graph reconstruction methods in simulations, reconstructing graphs with p=200 nodes from K=3 classes with various similarity of the true graph structures. The methods included are the graphical lasso (Glasso), the fused joint graphical lasso tuned by the AIC (FGL) and by the extended BIC (eBIC), the group joint graphical lasso (GGL), the Bayesian spike-and-slab joint graphical lasso (SSJGL) and stabJGL. The similarity (percentage of edges that are in common) of the graphs is shown. The results are averaged over N=100 simulations and show the sparsity, precision, and recall of each of the K=3 estimated graphs. The corresponding standard deviations are shown as well. The graphs are reconstructed from n_1=150, n_2=200 and n_3=300 observations. All graphs have sparsity 0.01.
The average selected values of the penalty parameters λ_1 and λ_2 for the relevant methods are shown as well.
The sparsity, precision and recall columns are grouped by sample size: n_1=150, n_2=200, n_3=300.
Similarity Method λ_1 λ_2 Sparsity Precision Recall Sparsity Precision Recall Sparsity Precision Recall
100 % Glasso 0.201 - 0.018 (0.007) 0.22 (0.05) 0.37 (0.05) 0.010 (0.004) 0.40 (0.09) 0.37 (0.05) 0.006 (0.002) 0.64 (0.09) 0.39 (0.05)
FGL 0.166 0.013 0.027 (0.012) 0.21 (0.11) 0.49 (0.04) 0.015 (0.006) 0.38 (0.15) 0.48 (0.03) 0.007 (0.002) 0.68 (0.14) 0.46 (0.04)
FGL (eBIC) 0.488 0.000 0.000 (0.000) - - 0.000 (0.000) - - 0.000 (0.000) - -
GGL 0.166 0.004 0.044 (0.006) 0.12 (0.02) 0.50 (0.04) 0.024 (0.004) 0.21 (0.04) 0.49 (0.03) 0.010 (0.002) 0.48 (0.06) 0.48 (0.04)
SSJGL - - 0.002 (0.000) 1.00 (0.00) 0.25 (0.03) 0.002 (0.000) 1.00 (0.00) 0.25 (0.03) 0.002 (0.000) 1.00 (0.00) 0.25 (0.03)
stabJGL 0.166 0.078 0.005 (0.000) 0.89 (0.05) 0.44 (0.02) 0.005 (0.000) 0.90 (0.03) 0.44 (0.02) 0.005 (0.000) 0.91 (0.03) 0.44 (0.02)
80 % Glasso 0.200 - 0.017 (0.006) 0.23 (0.06) 0.36 (0.05) 0.010 (0.003) 0.40 (0.09) 0.38 (0.05) 0.006 (0.002) 0.63 (0.10) 0.38 (0.05)
FGL 0.166 0.005 0.039 (0.011) 0.14 (0.05) 0.48 (0.04) 0.021 (0.006) 0.25 (0.08) 0.49 (0.04) 0.008 (0.002) 0.56 (0.13) 0.45 (0.04)
FGL (eBIC) 0.489 0.000 0.000 (0.000) - - 0.000 (0.000) - - 0.000 (0.000) - -
GGL 0.166 0.003 0.045 (0.005) 0.11 (0.01) 0.49 (0.03) 0.025 (0.003) 0.21 (0.03) 0.50 (0.04) 0.010 (0.001) 0.49 (0.06) 0.46 (0.03)
SSJGL - - 0.002 (0.000) 1.00 (0.00) 0.17 (0.02) 0.002 (0.000) 0.98 (0.02) 0.17 (0.02) 0.002 (0.000) 0.99 (0.01) 0.17 (0.02)
stabJGL 0.166 0.064 0.005 (0.001) 0.84 (0.07) 0.37 (0.03) 0.004 (0.000) 0.88 (0.04) 0.35 (0.03) 0.004 (0.000) 0.91 (0.04) 0.34 (0.03)
60 % Glasso 0.196 - 0.018 (0.006) 0.22 (0.06) 0.37 (0.05) 0.011 (0.004) 0.37 (0.10) 0.36 (0.05) 0.007 (0.002) 0.60 (0.10) 0.41 (0.06)
FGL 0.166 0.003 0.043 (0.008) 0.12 (0.03) 0.50 (0.04) 0.022 (0.005) 0.22 (0.05) 0.47 (0.04) 0.009 (0.002) 0.52 (0.08) 0.46 (0.04)
FGL (eBIC) 0.484 0.000 0.000 (0.000) - - 0.000 (0.000) - - 0.000 (0.000) - -
GGL 0.166 0.003 0.045 (0.005) 0.11 (0.01) 0.50 (0.04) 0.024 (0.003) 0.20 (0.02) 0.47 (0.03) 0.009 (0.001) 0.50 (0.05) 0.46 (0.04)
SSJGL - - 0.001 (0.000) 1.00 (0.01) 0.10 (0.02) 0.001 (0.000) 0.95 (0.05) 0.10 (0.02) 0.001 (0.000) 0.99 (0.02) 0.10 (0.02)
stabJGL 0.166 0.056 0.004 (0.001) 0.77 (0.07) 0.34 (0.03) 0.003 (0.000) 0.85 (0.04) 0.29 (0.03) 0.003 (0.000) 0.92 (0.03) 0.29 (0.03)
40 % Glasso 0.199 - 0.017 (0.006) 0.23 (0.05) 0.36 (0.05) 0.011 (0.004) 0.38 (0.11) 0.36 (0.06) 0.006 (0.002) 0.63 (0.13) 0.37 (0.07)
FGL 0.166 0.002 0.046 (0.008) 0.11 (0.02) 0.50 (0.04) 0.024 (0.005) 0.21 (0.04) 0.48 (0.04) 0.009 (0.002) 0.50 (0.07) 0.44 (0.04)
FGL (eBIC) 0.482 0.000 0.000 (0.000) - - 0.000 (0.000) - - 0.000 (0.000) - -
GGL 0.166 0.002 0.046 (0.005) 0.11 (0.01) 0.50 (0.03) 0.024 (0.003) 0.20 (0.02) 0.48 (0.04) 0.009 (0.001) 0.49 (0.05) 0.44 (0.03)
SSJGL - - 0.001 (0.000) 0.98 (0.04) 0.06 (0.01) 0.001 (0.000) 0.92 (0.07) 0.05 (0.01) 0.001 (0.000) 0.98 (0.04) 0.06 (0.01)
stabJGL 0.166 0.057 0.004 (0.001) 0.75 (0.09) 0.28 (0.03) 0.003 (0.000) 0.83 (0.05) 0.23 (0.03) 0.002 (0.000) 0.91 (0.04) 0.23 (0.03)
20 % Glasso 0.198 - 0.019 (0.007) 0.22 (0.06) 0.38 (0.05) 0.010 (0.004) 0.38 (0.11) 0.34 (0.07) 0.006 (0.002) 0.63 (0.11) 0.39 (0.06)
FGL 0.166 0.001 0.047 (0.005) 0.11 (0.01) 0.50 (0.03) 0.024 (0.003) 0.20 (0.03) 0.47 (0.04) 0.010 (0.001) 0.48 (0.06) 0.47 (0.03)
FGL (eBIC) 0.486 0.000 0.000 (0.000) - - 0.000 (0.000) - - 0.000 (0.000) - -
GGL 0.166 0.002 0.046 (0.005) 0.11 (0.01) 0.50 (0.03) 0.024 (0.003) 0.20 (0.03) 0.47 (0.04) 0.010 (0.001) 0.48 (0.05) 0.47 (0.03)
SSJGL - - 0.000 (0.000) 0.90 (0.12) 0.03 (0.01) 0.000 (0.000) 0.87 (0.12) 0.03 (0.01) 0.000 (0.000) 0.96 (0.07) 0.03 (0.01)
stabJGL 0.166 0.052 0.004 (0.001) 0.65 (0.09) 0.27 (0.03) 0.003 (0.000) 0.82 (0.05) 0.22 (0.03) 0.002 (0.000) 0.91 (0.04) 0.22 (0.03)
0 % Glasso 0.201 - 0.017 (0.005) 0.23 (0.05) 0.36 (0.04) 0.011 (0.004) 0.54 (0.12) 0.57 (0.08) 0.008 (0.003) 0.73 (0.12) 0.52 (0.09)
FGL 0.140 0.005 0.077 (0.031) 0.08 (0.02) 0.56 (0.07) 0.048 (0.023) 0.20 (0.08) 0.79 (0.06) 0.023 (0.013) 0.40 (0.16) 0.73 (0.09)
FGL (eBIC) 0.497 0.001 0.000 (0.000) - - 0.000 (0.000) - - 0.000 (0.000) - -
GGL 0.166 0.000 0.048 (0.002) 0.10 (0.01) 0.50 (0.03) 0.027 (0.001) 0.28 (0.01) 0.74 (0.03) 0.012 (0.001) 0.56 (0.03) 0.65 (0.03)
SSJGL - - 0.000 (0.000) 0.47 (0.21) 0.01 (0.01) 0.000 (0.000) 0.79 (0.18) 0.02 (0.01) 0.000 (0.000) 0.84 (0.17) 0.02 (0.01)
stabJGL 0.166 0.047 0.005 (0.001) 0.53 (0.09) 0.23 (0.04) 0.004 (0.001) 0.84 (0.05) 0.31 (0.06) 0.003 (0.000) 0.92 (0.04) 0.24 (0.04)
Performance of the different graph reconstruction methods in simulations, reconstructing graphs with p=100 nodes from K=4 classes with various similarity of the true graph structures. The methods included are the graphical lasso (Glasso), the fused joint graphical lasso tuned by the AIC (FGL) and by the extended BIC (eBIC), the group joint graphical lasso (GGL), the Bayesian spike-and-slab joint graphical lasso (SSJGL) and stabJGL. The similarity (percentage of edges that are in common) of the graphs is shown. The results are averaged over N=100 simulations and show the sparsity, precision, and recall of each of the K=4 estimated graphs. The corresponding standard deviations are shown as well. The graphs are reconstructed from n_1=150, n_2=200, n_3=250 and n_4=300 observations. All graphs have sparsity 0.02.
The average selected values of the penalty parameters λ_1 and λ_2 for the relevant methods are shown as well.
The sparsity, precision and recall columns are grouped by sample size: n_1=150, n_2=200, n_3=250, n_4=300.
Similarity Method λ_1 λ_2 Sparsity Precision Recall Sparsity Precision Recall Sparsity Precision Recall Sparsity Precision Recall
100 % Glasso 0.206 - 0.026 (0.007) 0.40 (0.08) 0.51 (0.06) 0.018 (0.005) 0.57 (0.09) 0.51 (0.06) 0.016 (0.003) 0.69 (0.08) 0.53 (0.06) 0.015 (0.003) 0.75 (0.08) 0.55 (0.06)
FGL 0.114 0.02 0.065 (0.028) 0.33 (0.15) 0.90 (0.04) 0.047 (0.020) 0.44 (0.15) 0.92 (0.03) 0.038 (0.014) 0.53 (0.14) 0.92 (0.04) 0.033 (0.010) 0.61 (0.13) 0.93 (0.03)
FGL (eBIC) 0.232 0.036 0.008 (0.002) 0.98 (0.04) 0.38 (0.09) 0.008 (0.002) 0.99 (0.02) 0.38 (0.09) 0.008 (0.002) 1.00 (0.01) 0.38 (0.09) 0.007 (0.002) 1.00 (0.00) 0.37 (0.09)
GGL 0.114 0.002 0.165 (0.014) 0.10 (0.02) 0.80 (0.04) 0.123 (0.011) 0.14 (0.02) 0.84 (0.04) 0.095 (0.010) 0.18 (0.04) 0.86 (0.04) 0.075 (0.007) 0.24 (0.03) 0.88 (0.04)
SSJGL - - 0.012 (0.001) 1.00 (0.00) 0.61 (0.04) 0.012 (0.001) 1.00 (0.00) 0.61 (0.04) 0.012 (0.001) 1.00 (0.00) 0.61 (0.04) 0.012 (0.001) 1.00 (0.00) 0.61 (0.04)
stabJGL 0.166 0.042 0.014 (0.002) 0.92 (0.07) 0.66 (0.04) 0.014 (0.001) 0.95 (0.04) 0.66 (0.04) 0.014 (0.001) 0.97 (0.02) 0.66 (0.04) 0.014 (0.001) 0.97 (0.02) 0.66 (0.04)
80 % Glasso 0.201 - 0.026 (0.007) 0.41 (0.09) 0.50 (0.05) 0.018 (0.005) 0.57 (0.12) 0.49 (0.07) 0.016 (0.003) 0.70 (0.09) 0.54 (0.07) 0.015 (0.003) 0.75 (0.07) 0.55 (0.06)
FGL 0.114 0.012 0.091 (0.025) 0.20 (0.06) 0.85 (0.04) 0.061 (0.019) 0.30 (0.09) 0.83 (0.04) 0.047 (0.015) 0.40 (0.11) 0.86 (0.03) 0.038 (0.010) 0.48 (0.11) 0.87 (0.03)
FGL (eBIC) 0.407 0.008 0.003 (0.004) 0.97 (0.06) 0.13 (0.18) 0.002 (0.003) 0.99 (0.02) 0.11 (0.16) 0.002 (0.003) 1.00 (0.01) 0.12 (0.17) 0.002 (0.003) 1.00 (0.00) 0.12 (0.17)
GGL 0.114 0.003 0.161 (0.019) 0.10 (0.02) 0.80 (0.04) 0.117 (0.016) 0.14 (0.04) 0.81 (0.04) 0.090 (0.012) 0.19 (0.04) 0.84 (0.03) 0.072 (0.010) 0.24 (0.05) 0.86 (0.04)
SSJGL - - 0.010 (0.001) 1.00 (0.00) 0.49 (0.04) 0.010 (0.001) 0.94 (0.02) 0.45 (0.03) 0.010 (0.001) 0.96 (0.02) 0.46 (0.04) 0.010 (0.001) 0.96 (0.02) 0.47 (0.04)
stabJGL 0.166 0.031 0.015 (0.002) 0.81 (0.11) 0.58 (0.04) 0.012 (0.001) 0.92 (0.05) 0.54 (0.04) 0.012 (0.001) 0.96 (0.03) 0.56 (0.04) 0.011 (0.001) 0.97 (0.02) 0.55 (0.04)
60 % Glasso 0.203 - 0.026 (0.007) 0.40 (0.07) 0.51 (0.06) 0.018 (0.005) 0.57 (0.10) 0.49 (0.07) 0.015 (0.003) 0.70 (0.08) 0.53 (0.05) 0.014 (0.003) 0.75 (0.08) 0.53 (0.06)
FGL 0.114 0.006 0.127 (0.030) 0.14 (0.04) 0.82 (0.04) 0.087 (0.024) 0.20 (0.06) 0.80 (0.04) 0.068 (0.019) 0.27 (0.08) 0.83 (0.04) 0.052 (0.015) 0.35 (0.10) 0.83 (0.04)
FGL (eBIC) 0.435 0.005 0.002 (0.004) 0.97 (0.07) 0.08 (0.16) 0.002 (0.003) 0.99 (0.04) 0.07 (0.14) 0.002 (0.003) 0.99 (0.02) 0.08 (0.15) 0.001 (0.003) 1.00 (0.01) 0.07 (0.14)
GGL 0.114 0.000 0.169 (0.006) 0.10 (0.01) 0.81 (0.04) 0.122 (0.006) 0.13 (0.01) 0.81 (0.04) 0.095 (0.005) 0.18 (0.01) 0.85 (0.03) 0.074 (0.004) 0.23 (0.02) 0.85 (0.04)
SSJGL - - 0.007 (0.001) 0.98 (0.03) 0.32 (0.03) 0.007 (0.001) 0.90 (0.04) 0.30 (0.03) 0.007 (0.001) 0.89 (0.04) 0.29 (0.03) 0.007 (0.001) 0.86 (0.04) 0.28 (0.03)
stabJGL 0.166 0.025 0.017 (0.003) 0.68 (0.09) 0.55 (0.04) 0.012 (0.001) 0.87 (0.06) 0.50 (0.04) 0.011 (0.001) 0.92 (0.04) 0.52 (0.05) 0.010 (0.001) 0.95 (0.03) 0.49 (0.05)
40 % Glasso 0.202 - 0.026 (0.007) 0.41 (0.08) 0.50 (0.06) 0.017 (0.005) 0.59 (0.11) 0.47 (0.08) 0.015 (0.004) 0.71 (0.11) 0.52 (0.07) 0.014 (0.003) 0.78 (0.09) 0.52 (0.08)
FGL 0.114 0.003 0.146 (0.025) 0.11 (0.02) 0.80 (0.04) 0.104 (0.021) 0.16 (0.03) 0.79 (0.04) 0.078 (0.018) 0.23 (0.06) 0.83 (0.04) 0.059 (0.014) 0.30 (0.07) 0.83 (0.04)
FGL (eBIC) 0.489 0.000 0.000 (0.000) 1.00 (0.00) 0.00 (0.01) 0.000 (0.000) 1.00 (0.00) 0.00 (0.00) 0.000 (0.000) 1.00 (0.00) 0.00 (0.00) 0.000 (0.000) 1.00 (0.00) 0.00 (0.00)
GGL 0.114 0.000 0.169 (0.005) 0.10 (0.00) 0.81 (0.04) 0.123 (0.005) 0.13 (0.01) 0.81 (0.04) 0.093 (0.005) 0.18 (0.01) 0.84 (0.03) 0.071 (0.004) 0.24 (0.01) 0.85 (0.03)
SSJGL - - 0.004 (0.001) 0.93 (0.05) 0.19 (0.03) 0.004 (0.001) 0.69 (0.07) 0.14 (0.03) 0.004 (0.001) 0.77 (0.08) 0.15 (0.03) 0.004 (0.001) 0.81 (0.07) 0.16 (0.03)
stabJGL 0.166 0.025 0.017 (0.004) 0.63 (0.10) 0.50 (0.05) 0.011 (0.002) 0.83 (0.07) 0.43 (0.05) 0.010 (0.001) 0.91 (0.04) 0.44 (0.05) 0.009 (0.001) 0.96 (0.03) 0.41 (0.05)
20 % Glasso 0.203 - 0.026 (0.007) 0.41 (0.08) 0.50 (0.06) 0.018 (0.004) 0.58 (0.09) 0.51 (0.06) 0.016 (0.003) 0.67 (0.07) 0.54 (0.06) 0.015 (0.003) 0.76 (0.09) 0.54 (0.07)
FGL 0.114 0.002 0.154 (0.022) 0.11 (0.02) 0.80 (0.04) 0.113 (0.019) 0.15 (0.03) 0.82 (0.04) 0.085 (0.015) 0.21 (0.04) 0.84 (0.03) 0.065 (0.013) 0.27 (0.05) 0.85 (0.05)
FGL (eBIC) 0.450 0.003 0.002 (0.004) 0.95 (0.11) 0.06 (0.14) 0.001 (0.003) 0.98 (0.04) 0.06 (0.14) 0.001 (0.003) 0.99 (0.02) 0.06 (0.14) 0.001 (0.003) 1.00 (0.01) 0.06 (0.13)
GGL 0.114 0.000 0.169 (0.005) 0.10 (0.00) 0.81 (0.04) 0.125 (0.004) 0.13 (0.01) 0.84 (0.04) 0.095 (0.004) 0.18 (0.01) 0.86 (0.03) 0.074 (0.004) 0.23 (0.02) 0.86 (0.04)
SSJGL - - 0.004 (0.000) 0.79 (0.07) 0.15 (0.02) 0.004 (0.000) 0.68 (0.07) 0.13 (0.02) 0.004 (0.001) 0.79 (0.06) 0.15 (0.02) 0.004 (0.001) 0.82 (0.06) 0.16 (0.02)
stabJGL 0.166 0.023 0.017 (0.004) 0.59 (0.09) 0.48 (0.05) 0.012 (0.002) 0.78 (0.08) 0.46 (0.05) 0.010 (0.001) 0.89 (0.06) 0.46 (0.04) 0.009 (0.001) 0.96 (0.04) 0.42 (0.04)
0 % Glasso 0.207 - 0.027 (0.008) 0.40 (0.08) 0.51 (0.07) 0.019 (0.005) 0.70 (0.10) 0.65 (0.08) 0.018 (0.004) 0.80 (0.09) 0.70 (0.08) 0.015 (0.003) 0.85 (0.07) 0.63 (0.07)
FGL 0.114 0.000 0.170 (0.005) 0.10 (0.00) 0.81 (0.04) 0.122 (0.004) 0.16 (0.01) 0.94 (0.02) 0.092 (0.004) 0.21 (0.01) 0.96 (0.02) 0.072 (0.003) 0.26 (0.01) 0.95 (0.02)
FGL (eBIC) 0.380 0.011 0.004 (0.005) 0.91 (0.13) 0.14 (0.16) 0.003 (0.004) 0.98 (0.04) 0.15 (0.18) 0.003 (0.004) 0.99 (0.04) 0.15 (0.19) 0.002 (0.003) 0.99 (0.02) 0.12 (0.15)
GGL 0.114 0.000 0.170 (0.005) 0.10 (0.00) 0.81 (0.04) 0.122 (0.004) 0.16 (0.01) 0.94 (0.02) 0.092 (0.004) 0.21 (0.01) 0.96 (0.02) 0.072 (0.003) 0.26 (0.01) 0.95 (0.02)
SSJGL - - 0.003 (0.000) 0.59 (0.10) 0.07 (0.02) 0.002 (0.000) 0.63 (0.10) 0.08 (0.02) 0.002 (0.000) 0.48 (0.09) 0.06 (0.01) 0.003 (0.000) 0.69 (0.10) 0.09 (0.02)
stabJGL 0.166 0.015 0.026 (0.006) 0.42 (0.07) 0.52 (0.06) 0.018 (0.004) 0.73 (0.08) 0.65 (0.07) 0.016 (0.003) 0.85 (0.06) 0.66 (0.07) 0.013 (0.002) 0.89 (0.05) 0.57 (0.08)
§ CHOICE OF VARIABILITY THRESHOLD
Figure <ref> compares the performance of stabJGL for different values of the variability threshold β_1 to the graphical lasso (Glasso), the fused joint graphical lasso (FGL), the group joint graphical lasso (GGL) and the Bayesian spike-and-slab joint graphical lasso (SSJGL). The results for FGL tuned with eBIC are not shown as it selected an empty graph in all settings. The settings considered have K=2 networks with p=100 nodes of various similarity. As in the setting considered in the main manuscript, we find that by varying the variability threshold β_1 we can obtain at least as high precision and/or recall as the other methods at any level of similarity.
§ ADDITIONAL PAN-CANCER ANALYSIS RESULTS
§.§ Degree distributions
Figure <ref> shows the degree distributions of the proteomic networks identified by stabJGL and FGL. While the stabJGL networks all have degree distributions that follow clear power-law distributions, in line with biological expectations, the FGL networks have degree distributions that strongly contradict a power law with most nodes having node degree >60.
§.§ Top hubs
Table <ref> shows the node degree of the proteins with degree larger than the 90^th percentile in the respective stabJGL networks of the different tumor types. The same table for the FGL networks is shown in Table <ref>.
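The hub definition used here is easy to reproduce; a small Python helper of the following form (ours, for illustration) returns the nodes whose degree exceeds the 90^th percentile of the degree distribution.

import numpy as np

def top_hubs(adj, node_names, quantile=0.90):
    """Return (name, degree) pairs for nodes with degree above the given quantile."""
    degree = adj.sum(axis=0)
    cutoff = np.quantile(degree, quantile)
    idx = np.argsort(degree)[::-1]                 # sort by decreasing degree
    return [(node_names[i], int(degree[i])) for i in idx if degree[i] > cutoff]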
|
http://arxiv.org/abs/2306.11136v1
|
20230619193856
|
Chain-mapping methods for relativistic light-matter interactions
|
[
"Robert H. Jonsson",
"Johannes Knörzer"
] |
quant-ph
|
[
"quant-ph",
"gr-qc",
"hep-th"
] |
[email protected]
Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Str. 1, 85748 Garching, Germany
Nordita, Stockholm University and KTH Royal Institute of Technology, Hannes Alfvéns väg 12, SE-106 91 Stockholm, Sweden
[email protected]
Institute for Theoretical Studies, ETH Zurich, 8092 Zurich, Switzerland
The interaction between localized emitters and quantum fields, both in relativistic settings and in the case of ultra-strong couplings, requires non-perturbative methods beyond the rotating-wave approximation.
In this work we employ chain-mapping methods to achieve a numerically exact treatment of the interaction between a localized emitter and a scalar quantum field.
We extend the application range of these methods beyond emitter observables and apply them to study field observables.
We first provide an overview of chain-mapping methods and their physical interpretation, and discuss the thermal double construction for systems coupled to thermal field states.
Modelling the emitter as an Unruh-DeWitt particle detector, we then calculate the energy density emitted by a detector coupling strongly to the field.
As a stimulating demonstration of the approach's potential, we calculate the radiation emitted from an accelerated detector in the Unruh effect, which is closely related to the thermal double construction as we discuss.
We comment on prospects and challenges of the method.
Chain-mapping methods for relativistic light-matter interactions
Robert H. Jonsson and Johannes Knörzer
Jun 18, 2023
================================================================
§ INTRODUCTION
Interacting quantum systems are ubiquitous in nature.
Yet their dynamics is challenging to predict beyond simplifying approximations.
A versatile set of computational tools is offered by the theory of open quantum systems, in which a physical system of interest is described as being coupled to its environment <cit.>.
Common approaches in the study of open systems rely on an effective description of the system which may be obtained by tracing out the environmental degrees of freedom yielding a quantum master equation.
Its validity is usually restricted to weak system-bath couplings and short-lived bath correlations within the Born-Markov approximation.
Physically it describes scenarios of low entanglement between system and environment and, being an effective description of the reduced state of the system, yields access only to system and not to bath observables.
Physical systems do not necessarily satisfy the underlying assumptions of weak coupling and Markovianity as encountered in, e.g., quantum optics <cit.>, condensed-matter physics <cit.>, quantum chemistry and biology <cit.>, or acceleration-induced quantum effects <cit.>.
Under such conditions, predicting the time evolution is challenging.
While the Nakajima-Zwanzig generalized master equation <cit.> provides an exact framework for the simulation of quantum dynamics, it is usually hard to derive.
For special cases, if the environment may be described by independent quantum harmonic oscillators, there exist numerically convergent methods to calculate the dynamics within the non-Markovian and strong-coupling regimes <cit.>.
While most approaches aim at solving the dynamics of the reduced system only, some physical phenomena require a detailed analysis of bath observables.
Apart from exact diagonalization, which becomes intractable for moderate system sizes, the total system dynamics may be obtained by unitarily mapping the underlying model onto a one-dimensional chain Hamiltonian and performing time evolution with respect to this tight-binding chain.
Tracing back to the numerical renormalization group <cit.>, so-called star-to-chain transformations may even be performed analytically and without previous discretization of the environment <cit.>.
The resulting semi-infinite chain may be truncated and the thereby obtained model can be evolved efficiently by matrix-product state (MPS) simulations <cit.>.
For a large class of Gaussian bosonic environments, the validity of this truncation can be certified by appropriate error bounds <cit.>.
Chain-mapping approaches have recently been utilized to investigate a variety of different problems, e.g., for non-perturbative studies of light-matter interaction at strong couplings <cit.> and quantum impurity problems in structured environments <cit.>.
Other works have focused on the extension of this approach to thermal baths via a thermofield transformation <cit.>, which may be used to map an initially thermal chain to two empty chains <cit.>.
The latter case of two initially empty chains provides a useful starting point for MPS-based numerical simulations in the presence of thermal baths, as demonstrated, e.g., in Refs. <cit.>.
Yet most of the previous works have focused on the reduced system's dynamics or explored only coarse-grained bath observables, while the approach's broad access to bath observables remains to be fully leveraged.
In this work, we demonstrate the potential of chain mappings for detailed, non-perturbative studies of field observables, such as the energy density radiated from an emitter interacting strongly with a quantum field.
As a physically interesting and stimulating first example, we consider the question what kind of radiation is emitted from a uniformly accelerated emitter in the context of the Unruh effect.
To model the emitter we use the Unruh-DeWitt (UDW) particle detector model <cit.>.
The UDW detector model is a central tool in relativistic quantum information, used to address a wide range of phenomena including the Unruh effect <cit.>, but also Hawking radiation, vacuum entanglement, relativistic communication, particle and radiation creation or even superposition of trajectories and temporal orders <cit.>.
It consists of a single emitter, modelled as a two-level system (TLS) or a harmonic oscillator (HO), which couples via its monopole operator to a scalar quantum field.
After being first posed, the question whether a uniformly accelerated detector emits radiation inspired various works and raised a discussion whose development is summarized, for example, both in <cit.> and <cit.>, with the latter addressing the question non-perturbatively.
Many questions in relativistic quantum information, in particular questions concerning the extraction and transmission of entanglement, require the non-perturbative treatment of the detector-field interaction.
For TLS detectors, whose Hamiltonian is a type of spin-boson model, non-perturbative solutions are challenging and only a limited number of solutions are known, as recently summarized in <cit.>.
Here we treat them using MPS-based approaches.
For HO detectors, Gaussian state methods can be employed for non-perturbative treatments <cit.> which we also build upon in our work.
Here we show that, employing star-to-chain transformations, it is possible to calculate non-perturbatively the time evolution of the joint detector-field state.
Most interestingly, the approach introduces no approximations to the model but allows for (i) a treatment which is numerically exact up to a time scale determined by the numerical resources available and (ii) a precise control over the simulation error.
Since the UDW model is prototypical for many models in quantum optics, these results lead the way to future applications, for example, in the treatment of ultra-strong matter-light couplings.
Specifically in the following we calculate the time evolution of the detector state and the energy density emitted by both resting and accelerated detectors into the Minkowski vacuum state of the field. We consider TLS and HO detectors, initialized in their ground or (first) excited states, and verify that the applied coupling strength is significantly beyond the regime of leading-order time-dependent perturbation theory.
This work is organized as follows:
Sec. <ref> summarizes the employed chain mapping combined with the thermofield approach.
Sec. <ref> discusses errors arising in the approach and how they restrict the maximal simulation times, for both a free field and an emitter coupled to the vacuum field.
Subsequently, thermal field states are considered and the relation to the Unruh effect is made explicit in Sec. <ref>.
The results are summarized in Sec. <ref>, where we also provide future perspectives.
§ THEORETICAL FRAMEWORK
In this section, we introduce our theoretical approach and numerical methods.
In Sec. <ref> we briefly review chain transformations and employ them to cast the UDW detector model into a form that can be studied efficiently using our numerical methods.
To account for the coupling to thermal field states we summarize the thermal double construction and subsequent chain transformation in Sec. <ref>.
This is followed by a brief discussion of the numerical methods we utilize in Sec. <ref>, and a proof-of-principle demonstration in Sec. <ref>, in which we calculate the energy density of a detector coupled to the vacuum.
§.§ Chain mapping
UDW model.—Chain mappings can be applied to systems coupled bilinearly to a harmonic bath.
Here we apply them to the UDW detector model, which phenomenologically describes a monopole detector coupled to a massless scalar field.
We start by considering a generic Hamiltonian
Ĥ = Ĥ_f + Ĥ_d + Ĥ_int ,
which contains the free field described by Ĥ_f, a detector modeled by Ĥ_d, and an interaction Hamiltonian Ĥ_int.
In the Schrödinger picture, the field Hamiltonian reads
Ĥ_f = ∫_-∞^∞ dk |k| b̂^†_k b̂_k,
and is described by bosonic annihilation (creation) operatorsb̂_k^(†).
The field is coupled to a detector, or emitter, with which we mean either a two-level system (TLS) or a harmonic oscillator (HO) that can emit or absorb energy by interacting with the field.
For these two cases, we consider the detector models (in the following, ħ = c = 1):
Ĥ_d^(TLS) = Ω_d/2 σ̂_z, Ĥ_d^(HO) = Ω_d ( â^†â + 1/2 ).
Here, Ω_d is the level spacing, â^(†) are the ladder operators of the oscillator, σ̂ denotes the Pauli spin operator and σ̂_z its z component.
Finally, the bilinear interaction between field and detector is modeled by
Ĥ_int = λ X̂ ⊗ ∫ dx f(x) π̂(x),
with the dimensionless coupling constant λ, field momentum π̂ and the smearing function f(x) that describes the shape of the coupling in real space.
The field couples to the detector via the system operators X̂ = σ̂^+ + σ̂^- (TLS) and X̂ = â^† + â (HO), respectively.
Note that this interaction Hamiltonian is not number-conserving.
Generally speaking, at strong couplings this constitutes a formidable challenge for numerical treatments.
Lorentzian coupling profile.—
In the following, we choose to model the interaction by a Lorentzian smearing function,
f(x) = L/( π(L^2 + x^2) ),
with length scale L, see also Table <ref>.
Because this smearing function is even, f(x)=f(-x),
the detector only couples to the even sector of the field.
With b̂^(e/o)_k = (b̂_k ± b̂_-k)/√(2), which yields Ĥ_f = ∫_0^∞ dk k ( b̂^(e)†_k b̂^(e)_k + b̂^(o)†_k b̂^(o)_k ) =: Ĥ_f^(e) + Ĥ_f^(o), the interaction Hamiltonian only couples to even modes,
Ĥ_int = λ X̂ ⊗ √(2) ∫_0^∞ dk ( f_k b̂^(e)_k + f_k^* b̂^(e)†_k ),
and we can discard (the dynamics of) the odd sector of the field henceforth.
Here, the coupling coefficients f_k are
f_k = -i √(k/4π) ∫ dx e^{ikx} f(x) = -i √(k/4π) e^{-Lk}.
Chain modes.—
The model captured by Eqs. (<ref>,<ref>,<ref>,<ref>) describes a (harmonic or two-level) detector coupled to independent harmonic oscillators, as schematically depicted in Fig. <ref>(a).
This Hamiltonian may be transformed such that the new model takes the form of a semi-infinite chain with only nearest-neighbor interactions.
To this end, we introduce the chain mode operators as
ĉ_i = √(2) ∫_0^∞ dk f_k p_i(k) b̂^(e)_k.
As originally presented and detailed in Ref. <cit.>, the functions p_i(k) (i = 0, 1, 2, ...) form a family of orthogonal polynomials,
2 ∫_0^∞ dk |f_k|^2 p_i(k) p_j(k) = δ_ij,
which for the Lorentzian detector profile (<ref>) is given by rescaled and normalized Laguerre polynomials (see (<ref>)):
p_n(k) = ( L√(8π)/√(n+1) ) L^1_n(2Lk).
Chain Hamiltonian.—
The chain form of the field Hamiltonian is obtained by plugging the inverse Bogoliubov transformation,
b̂_k^(e) = √(2) ∑_i f_k^* p_i(k) ĉ_i,
into Eq. (<ref>), and using both Eq. (<ref>) and the recurrence relations <cit.>
k p_n(k) = γ_n p_{n+1}(k) + ν_n p_n(k) + γ_{n-1} p_{n-1}(k),
where, by convention, γ_{-1}=0.
The Hamiltonian then takes the form
Ĥ_f^(e) = ∑_{i=0,1,…} [ ν_i ĉ_i^†ĉ_i + γ_i ( ĉ_i^†ĉ_{i+1} + ĉ_{i+1}^†ĉ_i ) ],
which describes the anticipated chain with nearest-neighbor interactions.
For the Lorentzian detector, the Laguerre polynomials' recurrence relations yield
γ_n = -√((n+2)(n+1))/(2L), ν_n = (n+1)/L.
Correspondingly, the interaction Hamiltonian now takes the form of a detector coupled only to the first chain mode,
Ĥ_int = λ κ X̂ ⊗ ( ĉ_0 + ĉ_0^† ),
with the normalization constant κ = 1/(L√(8π)) for the Lorentzian coupling profile.
With this we arrive at the chain-mode representation of our general model (<ref>), which is Ĥ_chain = Ĥ_f^(e) + Ĥ_d^(TLS/HO) + Ĥ_int, combining Eqs. (<ref>), (<ref>) and (<ref>).
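The vacuum-case construction can be verified numerically with a few lines of Python; the script below (our own check, using scipy's generalized Laguerre polynomials and an arbitrary choice L=1) confirms the orthonormality of the rescaled Laguerre polynomials against the weight 2|f_k|^2 and lists the chain coefficients γ_n and ν_n given above.

import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

L = 1.0                                            # length scale of the Lorentzian profile

def p(n, k):                                       # rescaled, normalized Laguerre polynomials
    return L * np.sqrt(8 * np.pi) / np.sqrt(n + 1) * eval_genlaguerre(n, 1, 2 * L * k)

def overlap(i, j):                                 # 2 * int_0^inf |f_k|^2 p_i(k) p_j(k) dk
    integrand = lambda k: 2 * (k * np.exp(-2 * L * k) / (4 * np.pi)) * p(i, k) * p(j, k)
    val, _ = quad(integrand, 0, np.inf)
    return val

N = 5
gram = np.array([[overlap(i, j) for j in range(N)] for i in range(N)])
assert np.allclose(gram, np.eye(N), atol=1e-6)     # orthonormality holds numerically

gamma = [-np.sqrt((n + 2) * (n + 1)) / (2 * L) for n in range(N)]   # chain hopping amplitudes
nu = [(n + 1) / L for n in range(N)]                                # chain on-site energies
print(gamma, nu)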
§.§ Thermal double construction
In this work, we investigate the coupling of a detector to thermal field states.
However, the chain transformation as introduced in the previous section for the vacuum state of the field is inapt for a direct treatment of thermal field states, since (i) their representation in terms of chain modes may be non-trivial or inefficient, and (ii) since they can contain a large number of excitations, whereas our numerical MPS simulations are restricted to a small number of field excitations.
This problem can be circumvented by resorting to a thermal double construction <cit.>, in which the original environment is viewed as a subsystem of an enlarged environment in its vacuum state.
This enlarged environment can again be treated efficiently using chain transformations and numerical simulations.
This subsection reviews how to apply the thermal double construction to our model for a thermal field state with inverse temperature β.
App. <ref> discusses how the energy density emitted from a detector at rest, which couples to a thermal state of the field, can be evaluated numerically and derives the necessary expressions.
Double construction.—The enlargement of the environment is given by a doubling of the field modes:
For each field mode (b̂_k) we introduce a partner mode (b̂_k^') with opposite excitation energy.
As indicated in Fig. <ref>(b), each pair of partner modes is in a two-mode squeezed state such that the individual modes' partial state is a thermal state.
The overall state of the doubled field, however, is pure and corresponds to the vacuum state of the 'unsqueezed' d̂_k and d̂'_k modes.
Before going through the individual steps of these transformations, as above, we make use of the fact that the detector only couples to even field modes.
Thus we can discard the odd sector of the field and apply the double construction to the even sector only.
The even-sector, doubled-field Hamiltonian reads
Ĥ_f'^(e) = ∫_0^∞ dk k ( (b̂_k^(e))^†b̂_k^(e) - (b̂_k^(e)')^†b̂_k^(e)' ).
By acting with two-mode squeezing transformations on each pair of partner modes, we obtain a new basis of canonically commuting operators
d̂_k = ( e^{βk/4} b̂_k^(e) - e^{-βk/4} (b̂_k^(e)')^† ) / √(2sinh(βk/2)),
d̂'_k = ( e^{βk/4} b̂_k^(e)' - e^{-βk/4} (b̂_k^(e))^† ) / √(2sinh(βk/2)),
under which the field Hamiltonian remains invariant,
Ĥ_f'^(e) = ∫_0^∞ dk k ( d̂_k^†d̂_k - d̂'^†_k d̂'_k ).
The squeezing parameter (β k)/4 is chosen such that the vacuum |0_D⟩ of these new modes (the state |0_D⟩ for which d̂_k^(')|0_D⟩ = 0) is the thermal state of with inverse temperature β on the original field modes b̂^(e)_k,
⟨0_D|b̂_k^(e)†b̂_k'^(e)|0_D⟩ = δ(k-k^') ( e^β k - 1 )^-1.
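As a quick consistency check (our own, for arbitrary example values of β and k), one can verify numerically that the Bogoliubov coefficients entering the definition of d̂_k above are canonical and reproduce the Bose-Einstein occupation per mode:

import numpy as np

beta, k = 0.7, 1.3                                   # arbitrary inverse temperature and mode
u = np.exp(beta * k / 4) / np.sqrt(2 * np.sinh(beta * k / 2))    # coefficient of d_k in b_k
v = np.exp(-beta * k / 4) / np.sqrt(2 * np.sinh(beta * k / 2))   # coefficient of d'_k^dagger in b_k

assert np.isclose(u**2 - v**2, 1.0)                  # the transformation is canonical
occupation = v**2                                    # <0_D| b_k^dagger b_k |0_D> per mode
assert np.isclose(occupation, 1.0 / np.expm1(beta * k))          # Bose-Einstein distribution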
The interaction Hamiltonian remains unchanged in the thermal double construction.
To express Ĥ_int in terms of the new modes, we invert (<ref>) and obtain b̂_k^(e) = ( e^{βk/4} d̂_k + e^{-βk/4} d̂'^†_k ) / √(2sinh(βk/2)),
which we insert in (<ref>),
Ĥ_int = λ X̂ ⊗ ∫_-∞^∞ dk sgn(k) f_|k| ( e^{βk/4}/√(|sinh(βk/2)|) ) d̂_k + h.c.,
where we used, from (<ref>), that f_k^* = -f_k is purely imaginary, and placed the primed d̂'_k operators on the negative-k axis via the identification d̂_k = d̂'_{-k} for k<0.
With this identification, the doubled field Hamiltonian (<ref>) takes the form Ĥ_f'^(e) = ∫_-∞^∞ dk k d̂_k^†d̂_k.
By doubling the number of field modes, the thermal double construction has enlarged the system we need to simulate from the total Hamiltonian Ĥ = Ĥ_f^(e) + Ĥ_d + Ĥ_int to the total Hamiltonian Ĥ' = Ĥ_f'^(e) + Ĥ_d + Ĥ_int.
However, the d̂_k-modes are eigenmodes of '^(e), and the initial state of the field is their vacuum state.
Hence, as depicted in Fig. <ref>, the enlarged system can again be treated efficiently with chain transformations, where the chain modes ĉ_i are constructed from the d̂_k-modes.
Chain modes.—The chain modes are obtained by the same procedure as outlined above, in Sec. <ref>.
However, instead of the Laguerre polynomials from (<ref>), in the thermal case the polynomials q_i(k) are defined on the entire real line and need to obey
∫_-∞^∞ dk w(k) q_n(k) q_m(k) = δ_nm,
w(k) = k e^{-2L|k|} e^{βk/2} / ( 4π sinh(βk/2) ).
In contrast to the vacuum case, the weight function w(k) does, to our knowledge, not correspond to one of the well known and studied families of orthogonal polynomials.
Hence, the polynomial coefficients
q_n(k)=∑_i=0^n P_n,i k^i
have to be determined numerically.
For this it is useful that the moments of the weight function for the Lorentzian detector profile have a closed-form solution in terms of Polygamma functions:
(-1) π 2^{n+3} L^{n+2} ∫_-∞^∞ dk w(k) k^n
= (-1)^n (n+1)! - (1+(-1)^n) (2L/β)^{n+2} ψ^{(n+1)}(2L/β) .
Based on numerical evaluations of these, the coefficients P_n,i can be obtained from a Cholesky decomposition of the moment matrix, as detailed in App. <ref>.
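The following condensed Python/mpmath sketch illustrates this construction; it integrates the moments of w(k) numerically rather than using the closed polygamma form, keeps only a handful of chain modes, and works at far lower precision than the calculations reported here, so it should be read as a schematic rather than a reproduction of our implementation.

from mpmath import mp, mpf, quad, exp, sinh, pi, matrix, cholesky, inverse, inf

mp.dps = 60                                          # working precision (far more is used in practice)
L, beta = mpf(1), mpf(2)

def w(k):                                            # thermal weight function defined above
    return k * exp(-2 * L * abs(k)) * exp(beta * k / 2) / (4 * pi * sinh(beta * k / 2))

N = 6                                                # number of chain modes kept in this sketch
M = matrix(N, N)                                     # moment (Hankel) matrix M_ij = int w(k) k^(i+j) dk
for i in range(N):
    for j in range(N):
        M[i, j] = quad(lambda k: w(k) * k**(i + j), [-inf, 0, inf])

P = inverse(cholesky(M))                             # rows of P hold the coefficients P_{n,i} of q_n
assert max(abs((P * M * P.T)[i, j] - (1 if i == j else 0))
           for i in range(N) for j in range(N)) < mpf('1e-30')   # orthonormality of the q_n

kappa = 1 / P[0, 0]                                  # normalization and chain coefficients
gamma = [P[n, n] / P[n + 1, n + 1] for n in range(N - 1)]
nu = [(P[n, n - 1] / P[n, n] if n > 0 else mpf(0)) - P[n + 1, n] / P[n + 1, n + 1]
      for n in range(N - 1)]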
This step requires large numerical precision because the size of the weight moments (<ref>) spans a large range of orders of magnitude, as do the resulting coefficients P_n,i.
In fact, this is to be expected from the analytical solution of the vacuum field state in the previous Sec. <ref>:
The size of the coefficients |P_249,i| ranges from log_10|P_249,20|≈ 18 to log_10|P_249,249|≈ -414.
To numerically calculate the coefficients P_n,i in the thermal double field construction we use Mathematica <cit.> to obtain several hundreds of digits of precision.
This high precision in the beginning of the calculations, which may appear as an overhead at this point, is later consumed, for example, in the evaluation of the energy density emitted from the detector.
When evaluating expressions for the field energy density, such as (<ref>) or (<ref>) which we derive below, the coefficients P_n,i get multiplied by coefficients I_i spanning a similar range of orders of magnitude.
In this step, many digits of precision are lost, hence a high precision in the initial calculation of P_n,i (and I_i) is required in order to still be able to extract the energy densities from the numerically calculated covariance matrices of the state with good precision.
With the polynomials at hand, the chain modes for the thermal case are given by
ĉ_i = ∫_-∞^∞ dk sgn(k) f_|k| ( e^{βk/4}/√(|sinh(βk/2)|) ) q_i(k) d̂_k .
In terms of these mode operators, the interaction Hamiltonian Hint: $ in (<ref>) takes the same form as (<ref>), and the double field Hamiltonian'^(e)takes the same form as in (<ref>), where now the normalization and coupling constants
κ = 1/P_{0,0}, γ_n = P_{n,n}/P_{n+1,n+1}, ν_n = P_{n,n-1}/P_{n,n} - P_{n+1,n}/P_{n+1,n+1},
follow from the polynomial recurrence relations <cit.>.
§.§ Numerical methods for time evolution
We consider two different detectors, a two-level system (TLS) and a harmonic oscillator (HO), as introduced in (<ref>).
To treat the composite detector-field system numerically, we utilize two different approaches: matrix product state methods for the TLS, and Gaussian state methods for the HO.
In the following we will briefly highlight these methods, and their different sources of numerical errors.
In addition to the method-specific errors, all numerical methods share a common error which arises because only chains of finite length can be treated numerically. Sec. <ref> discusses this general truncation error separately and in detail.
Two-level detector.—
The two-level system is described by Ĥ_d^(TLS) given in (<ref>).
To compute the time evolution |ψ_{t+dt}⟩ = Û(dt) |ψ_t⟩ = e^{-iĤ_chain dt} |ψ_t⟩, we time-evolve the MPS |ψ_t⟩ at time t using the Trotter method, i.e., a second-order Trotter-Suzuki decomposition of the time-evolution operator Û(dt).
It is well-known that this method is prone to two main sources of error <cit.>:
(i) a total time-step error of orderO(dt^2), and
(ii) a truncation error of the time-evolved state to a manageable bond dimension.
In order to reduce the first type of error, choosing a small time step dt is desirable.
However, this increases the required number of time steps to evolve.
Moreover, if dt is chosen too small, the state truncated to a given bond dimension does not properly time-evolve since the truncation error becomes too large in comparison.
We discuss this effect in more detail in App. <ref>, where we also comment on the choice of dt with which we obtain the results in this work.
Harmonic detector.—
The harmonic detector is modeled by Ĥ_d^(HO) in (<ref>).
Since this Hamiltonian is quadratic and since the initial states we consider are Gaussian states, the state remains Gaussian throughout its time evolution and is fully characterized by its covariance matrix.
This allows us to calculate the time evolution highly efficiently using Gaussian-state methods by a direct numerical exponentiation of the Hamiltonian generator (see, e.g., <cit.>).
Using Mathematica <cit.> for these calculations allows us to obtain the time-evolved covariance matrix of the state with very high (hundreds of digits) precision.
For this reason, we expect the presented numerical results for HO detectors to be essentially unaffected by numerical errors, but to be only subject to the truncation error discussed in Sec. <ref>.
This difference between HO and TLS data is, e.g., noticeable in the energy densities discussed in the subsequent subsection.
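To illustrate the Gaussian-state evolution in standard double precision (rather than the high-precision arithmetic used for the results in this work), the sketch below evolves the covariance matrix of the harmonic detector coupled to a truncated chain; the quadrature ordering, the sudden switch-on at t=0 and the vacuum covariance 1/2 are our conventions, and the parameter values are arbitrary.

import numpy as np
from scipy.linalg import expm

def chain_quadratic_form(nu, gamma, omega_d, g):
    """Quadratic-form matrix M (H = 1/2 r^T M r, r = (x_d, x_0, ..., p_d, p_0, ...)) for the
    harmonic detector coupled to the truncated chain; constant terms are dropped."""
    n = len(nu) + 1
    A = np.zeros((n, n))                       # x-block
    B = np.zeros((n, n))                       # p-block
    A[0, 0] = B[0, 0] = omega_d                # detector oscillator
    for i, v in enumerate(nu):
        A[i + 1, i + 1] = B[i + 1, i + 1] = v  # chain on-site energies
    for i, gmm in enumerate(gamma[:-1]):
        A[i + 1, i + 2] = A[i + 2, i + 1] = gmm
        B[i + 1, i + 2] = B[i + 2, i + 1] = gmm
    A[0, 1] = A[1, 0] = 2 * g                  # lambda*kappa (a+a^dag)(c_0+c_0^dag) -> 2g x_d x_0
    return np.block([[A, np.zeros((n, n))], [np.zeros((n, n)), B]])

def evolve_covariance(sigma0, M, t):
    """sigma(t) = S sigma(0) S^T with the symplectic propagator S = exp(Omega M t)."""
    n = M.shape[0] // 2
    Omega = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
    S = expm(Omega @ M * t)
    return S @ sigma0 @ S.T

# example: Lorentzian chain coefficients for N chain modes, chain vacuum + detector ground state
L, N, lam, omega_d = 1.0, 50, 0.1, 1.0
kappa = 1 / (L * np.sqrt(8 * np.pi))
nu = [(n + 1) / L for n in range(N)]
gamma = [-np.sqrt((n + 2) * (n + 1)) / (2 * L) for n in range(N)]
M = chain_quadratic_form(nu, gamma, omega_d, lam * kappa)
sigma_t = evolve_covariance(0.5 * np.eye(2 * (N + 1)), M, t=2.0)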
§.§ Case study: energy density
As a first demonstration of our approach, in this section we consider the energy densities emitted by detectors at rest for the different detector models in (<ref>).
This study provides a good basis for the detailed discussion of the truncation error in the following section, before we consider the radiation emitted by accelerated detectors in Sec. <ref>.
Energy density of massless field.—
The energy density of the massless Klein-Gordon field <cit.>,
T̂_00(x) = (1/2)( π̂(x)² + (∂_xϕ̂(x))² )
= π̂_-(x)² + π̂_+(x)²,
decouples into the right-moving energy density π̂_-² and the left-moving energy density π̂_+²,
which are the squares of the right- and left-moving sectors of the field momentum,
π̂_∓ = -i ∫_0^∞dk √(k/4π) ( e^{±ikx} b̂_{±k} - e^{∓ikx} b̂_{±k}^† ) .
We consider the normal-ordered energy density,
π̂_∓^2(x) = ∫_0^∞dk ∫_0^∞dk' (√(kk')/(4π)) ( 2 e^{∓i(k-k')x} b̂^†_{±k} b̂_{±k'}
- e^{±i(k+k')x} b̂_{±k} b̂_{±k'} - e^{∓i(k+k')x} b̂_{±k}^† b̂_{±k'}^† ),
which, when integrated over all space, Ĥ_f = ∫dx T̂_00(x), yields the field Hamiltonian (<ref>).
To evaluateπ̂_∓^2(x)from the numerical data, we rewrite the operator in terms of the chain mode operatorsĉ_i, as detailed in App. <ref>.
Because the coupling between detector and field is even, the expectation value of the left-moving energy density π̂^2_+(x,t) = π̂^2_-(-x,t) is simply the mirror image of the right-moving energy density, and we need only consider the latter.
Numerical results.—
Fig. <ref> shows the right-moving energy density for the case of a detector coupled to the vacuum with a Lorentzian profile, for a harmonic detector (panels a and c) and a two-level detector (panels b and d) that initially is either in its ground state (panels a and b) or first-excited state (panels c and d).
The spatial profile of the detector, which determines the region with significant coupling between field and detector, is depicted in the lower panel of each subfigure.
Each panel shows the energy density for an early time (t/L=1), an intermediate time (t/L=4) and a late time (t/L=7) after the interaction begins att/L=0.
In this illustrative example, we can make several observations.
First, as can be seen by comparing the upper with the lower rows, the initially excited emitter naturally radiates much more energy into the field.
In contrast, when the emitter is initialized in its ground state and early excitations stem from counter-rotating terms in (<ref>), little energy is being emitted overall.
Moreover, in the case of ground states, we find negative densities propagating to the right, that are more pronounced for the two-level emitter than for the harmonic oscillator.
Once the right-moving density has left the region in which the coupling to the detector is significant, it maintains its shape and simply propagates to the right.
This behavior can be seen for all depicted cases and propagation times.
On the one hand, this reassures us that the 250 chain modes used in the numerical simulation (which for consistency we use throughout the paper) are sufficient to reliably represent the full time evolution within the selected times.
On the other hand, it also allows us to extrapolate that radiation, once it has propagated past x ≳ L, will maintain its shape as it propagates further.
Hence, the early-time radiation will maintain the profile observed in Fig. <ref> for t/L=7; only in order to assess the radiation emanating from the detector at this late time and beyond would numerical calculations employing a larger number of chain modes be necessary.
To demonstrate that our chosen coupling parameters go far beyond the perturbative regime, in Fig. <ref> we contrast the numerical results from Fig. <ref> with the values obtained from time-dependent perturbation theory to leading order, as derived in App. <ref>.
While both agree well and are hardly distinguishable at short times, we find large differences at long times.
Both for the harmonic and two-level detectors, perturbation theory fails to capture the features of the numerically exact result even qualitatively, by inaccurately producing a density with the wrong number of nodes or local extrema.
The discrepancy is particularly pronounced for the case of an initially excited emitter, which shows a radiation burst that propagates to the right (see Fig. <ref>), but which is significantly overestimated by the perturbatively obtained curves.
These disagreements underline the importance of advanced numerical methods that can treat and time-evolve the full state of the composite system, including both matter and field degrees of freedom in strong coupling regimes.
§ THE TRUNCATION ERROR
The star-to-chain transformation as introduced above (see Sec. <ref> and Sec. <ref>) is exact and introduces no approximations or simplifications to the original Hamiltonian.
However, in numerical studies the derived infinite chains have to be truncated as also indicated in Fig. <ref>, because only a finite number of modes can be represented on a computer.
This necessarily degrades the accuracy of numerical calculations at sufficiently long simulation times.
This section is devoted to the consequences of the truncation error that is thereby introduced.
§.§ Heuristic of truncation error
In the study of dynamics, the truncation error stems from the difference between time evolution according to the infinite system and the truncated Hamiltonian.
This difference is rooted in neglecting the hopping term between the last considered mode (ĉ_N-1in Fig. <ref>) and the first truncated mode (ĉ_Nin Fig. <ref>).
Therefore, the truncation error can be understood intuitively and treated analytically to some extent.
The intuitive picture of the truncation error is as follows: Initially, the time evolution of the truncated system agrees well with the time evolution of the exact, infinite system.
Since the chain starts out in the vacuum state, this holds up to the time it takes excitations, created by the interaction with the emitter, to propagate from the front of the chain to its truncated end.
After this time, the excitations in the truncated, numerically implemented model are reflected back to the front of the chain, whereas they would have propagated further down the original infinite chain, thus causing the truncation error.
Free field without detector.—
To make this picture more exact it is helpful to consider the excitation dynamics of the chain.
This approach was pursued in <cit.> to deepen the understanding of chain-mapping methods.
For our purpose it suffices to consider the chain for the free field with no emitter coupled to it (we set λ=0 above), and to consider the evolution of the state ĉ_0^†|0⟩, which has one excitation in the first chain mode at t=0.
To this end, it is convenient to work in the Heisenberg picture and expressĉ_0(t)=∑_j=0^∞ρ_j(t)ĉ_j(0).
From (<ref>) it follows that ĉ_0(t) = √2 ∫_0^∞dω f_ω p_0(ω) b̂^{(e)}_ω e^{-iωt}, and we obtain
ρ_j(t) = ⟨0| ĉ_0(t) ĉ_j^†(0) |0⟩
= 4 √(j+1) (i t/L)^j / (2 + i t/L)^{2+j} .
The absolute value squared of these coefficients|ρ_j(t)|^2=⟨0| ĉ_0(-t)^†ĉ_j^†ĉ_jĉ_0(-t)|0⟩yields the number expectation value of thejth chain mode at timet. This distribution spreads and flattens out quickly over the chain, as can be characterised by the center of mass of the distribution
∑_j j |ρ_j(t)|^2 = t^2/(2 L^2)
growing quadratically in time. Also, the peak of the distribution |ρ_J(t)|² := sup_j |ρ_j(t)|² has a position which asymptotically behaves as J ∼ t²/(4L²) for large times, and it takes the value |ρ_J(t)|² ∼ 4L²/(e t²) as t→∞, with e Euler's number.
These observations indicate that in order to avoid the truncation error in numerical calculations, the number of required chain modes may scale quadratically in the duration of the time evolution.
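These statements are easy to check numerically; the following minimal sketch (illustrative only) evaluates |ρ_j(t)|² in an overflow-safe form and verifies that the center of mass of the occupation distribution grows as t²/(2L²).

```python
import numpy as np

def rho_sq(j, t, L=1.0):
    """|rho_j(t)|^2 for the free chain initialized with one excitation in mode 0."""
    u = (t / L) ** 2
    x = u / (4.0 + u)          # < 1, avoids overflow for large j
    return 16.0 * (j + 1) * x ** j / (4.0 + u) ** 2

t, L, jmax = 10.0, 1.0, 4000
j = np.arange(jmax)
occ = rho_sq(j, t, L)
print("total occupation :", occ.sum())            # -> 1 (up to truncation at jmax)
print("center of mass   :", (j * occ).sum(), "vs t^2/(2L^2) =", t**2 / (2 * L**2))
```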
In the following Sec. <ref> we discuss how the error in the state arising due to the truncation may be bounded from above.
This rather straightforward bound, however, (i) is only useful for bounded observables, and (ii) does not take into account that chain modes near the front of the chain are affected by the truncation much later than modes near the chain end.
Both these points render the state error bound not useful for the energy density of the field which we are interested in here.
Therefore, in Sec. <ref>, we discuss how the truncation error arising in the energy density of the field can be assessed heuristically by a wave-equation source term.
§.§ Bounding the state truncation error
The truncation of the chain afterNchain modes corresponds to subtracting
ΔĤ= γ_N-1(ĉ_N-1^†ĉ_N+h.c.)
from the full HamiltonianĤ.
The system thus evolves from its initial state |ψ_0⟩ at t=0 into a defective state
|ψ^ϵ⟩ = exp(-i t (Ĥ - ΔĤ)) |ψ_0⟩
instead of the correct state |ψ⟩ = exp(-i t Ĥ)|ψ_0⟩. The error |ϵ⟩ = |ψ⟩ - |ψ^ϵ⟩ evolves as
∂_t |ϵ⟩ = ∂_t (|ψ⟩ - |ψ^ϵ⟩)
= -i Ĥ |ϵ⟩ - i ΔĤ |ψ^ϵ⟩.
As detailed in App. <ref>, the norm of the state error evolves as
∂_t ‖|ϵ⟩‖ ≤ √(⟨ΔĤψ^ϵ|ΔĤψ^ϵ⟩),
and its norm at time t is less than or equal to the integral
‖|ϵ⟩‖ ≤ ϵ_t :=
|γ_{N-1}| ∫_0^t dt' √(⟨ψ^ϵ|ĉ_{N-1}^†ĉ_{N-1}|ψ^ϵ⟩).
Advantages of error bound.—
The expression (<ref>) achieves something practically useful, since numerically we have access to the expectation value⟨ĉ_N-1^†ĉ_N-1⟩with respect to the state we propagate,|ψ^ϵ⟩, at each available time step.
From this the integrated error bound can be obtained straightforwardly.
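A minimal sketch of this post-processing step (assuming the time series of ⟨ĉ_{N-1}^†ĉ_{N-1}⟩ has been stored during the simulation; names are illustrative) reads:

```python
import numpy as np

def integrated_error_bound(times, n_last, gamma_last):
    """Integrated truncation-error bound eps_t.

    times      : array of simulation times t'
    n_last     : array of <c_{N-1}^dagger c_{N-1}> at those times
    gamma_last : hopping amplitude gamma_{N-1} to the first truncated mode
    Returns eps_t on the same time grid (cumulative trapezoidal integral).
    """
    integrand = np.abs(gamma_last) * np.sqrt(np.maximum(n_last, 0.0))
    eps = np.concatenate(([0.0],
                          np.cumsum(0.5 * (integrand[1:] + integrand[:-1])
                                    * np.diff(times))))
    return eps
```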
Moreover, if the emitter is itself a harmonic oscillator, then a bound on the error in the (Frobenius) norm ‖G‖ of the covariance matrix G_ij
= ⟨ξ̂^i ξ̂^j + ξ̂^j ξ̂^i⟩ of the total system state can be derived.[Here ξ̂ = (q̂_1,p̂_1,...) represents a basis of quadrature operators.]
As detailed in App. <ref>, this uses that the system remains Gaussian both under the true and the truncated time evolution.
The bound on the error in G translates into a bound on the error in the expectation value of quadratic observables Ô = (1/2)∑_{i,j} O_ij ξ̂^i ξ̂^j, provided that the norm of O is bounded.
For example, this allows us to bound the error in the expectation values of number operators of chain mode ladder operators, or of other collective mode operators B̂ = ∫dω g(ω) b̂_ω (with ∫dω |g(ω)|² = 1), which can be one way to characterize emitted radiation.
Drawbacks of error bound.—
Since the above error bound concerns the norm of the state, it only allows us to bound the error in the expectation values of observables with finite operator norm.
This excludes many operators of interest such as the number and quadrature operators of individual field modes, as well as the energy density of the field, all of which are quadratic in the mode ladder operators.
Moreover, whereas this bound may be interesting and practically useful for identifying the regimes of validity of simulations, it appears to be too conservative for many applications.
This is because it does not take into account the decomposition of an observable in terms of the chain mode operators.
However, operators acting on modes at the front of the chain are affected by the truncation error much later than the modes at the truncated end of the chain.
An important and interesting subject for future research would therefore be to derive error bounds which take into account the decomposition and support of observables with respect to the chain mode operators.
A natural first step in this direction may well be to investigate a generalization of results from the literature regarding observables acting only on the emitter <cit.>.
§.§ Truncation error in the energy density
Since the coefficients of the field energy density with respect to the chain modes are not bounded, cf. Eq. (<ref>), the error bounds from above do not apply to the energy density.
To understand how it is impacted by the truncation error, we first consider the free field with initial stateĉ_0^†|0⟩.
For this case, we know the exact solution from Sec. <ref>, which allows to precisely quantify the errors arising in the numerical simulations of the truncated chain.
We find that the errors manifest themselves as short-wavelength oscillatory features, which tend to arise away from the location of the detector and can be recognized as contributions to the source term of the wave equation.
If the free field (λ=0in (<ref>), no emitter-field coupling) is prepared in the initial stateĉ_0^†|0⟩at timet=0, then the exact expectation value of the right-moving field energy density is
⟨π̂_-²(x,t)⟩_ex = ⟨0| ĉ_0(-t) π̂_-²(x) ĉ_0^†(-t) |0⟩
= L² / ( π (L² + (x-t)²)² ).
The error Δπ̂_-²(x,t) = ⟨π̂_-²(x,t)⟩_ex - ⟨π̂_-²(x,t)⟩_num which arises in the numerical calculations for the truncated chain is shown in Fig. <ref>(a), for the Gaussian methods applied for HO emitters, and in Fig. <ref>(b), for the MPS methods applied for TLS emitters.
The figures compare the error Δπ̂_-²(x,t), in their upper panel, to the difference between the energy density at a given point and the density's value traced back a small distance (ϵ=L/20) along a light ray:
s_ϵ(x,t)
= ⟨π̂_-²(x,t)⟩_num - ⟨π̂_-²(x-ϵ, t-ϵ)⟩_num.
In the exact solution of the free field, this term vanishes identically, since the right-moving energy density is simply translated along light rays in time.
However, for the truncated chain, in Fig. <ref>(c) and Fig. <ref>(d) we see that a non-zero value of this difference builds up as the simulation time increases.
In particular, the behavior of the source term closely parallels the behavior of the absolute error.
Both signal the effects of the truncation error by the appearance of highly oscillatory features away fromx=0where the emitter is centered.
This observation motivates our use of the source term as a heuristic measure for the error arising in numerical simulations in scenarios where the emitter is coupled to the field and no analytical solution is available.
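In practice, the light-ray difference s_ϵ(x,t) is straightforward to evaluate from stored simulation output; a minimal sketch (assuming the right-moving energy density has been sampled on regular position and time grids; names are illustrative) is:

```python
import numpy as np

def source_term(x, t_grid, pi2, eps):
    """Light-ray difference s_eps(x, t) from gridded right-moving energy density.

    pi2[i, j] holds <pi_-^2> at time t_grid[i] and position x[j]; the value at
    (x - eps, t - eps) is obtained by linear interpolation in time and space.
    Both grids are assumed to be strictly increasing.
    """
    s = np.full_like(pi2, np.nan)
    for i, t in enumerate(t_grid):
        j = np.searchsorted(t_grid, t - eps)
        if j == 0:
            continue                      # no earlier time slice available
        t0, t1 = t_grid[j - 1], t_grid[j]
        w = (t - eps - t0) / (t1 - t0)
        slice_earlier = (1 - w) * pi2[j - 1] + w * pi2[j]
        shifted = np.interp(x - eps, x, slice_earlier)
        s[i] = pi2[i] - shifted
    return s
```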
When the emitter is coupled to the field, the term (<ref>) serves as an approximation to the expectation value of the source term of the wave equation,
(∂_t + ∂_x) π̂_-²(x) = ŝ(x)
= -λ (df(x)/dx) X̂ ⊗ π̂_-(x).
In the exact solution of the model, the source term is restricted to the support of the (derivative of the) smearing functionf(x). Thus, a non-zero source term away from the support of the emitter signals the appearance of numerical errors.
Fig. <ref> shows the numerical source term (<ref>) for the data in Fig. <ref>, which showed the energy density emitted by an HO and a TLS emitter at rest into the vacuum of the field. Based on the rise of oscillatory features in the source term well away from the emitter's support at total simulation durations around t=7L, we only consider results up to this simulation time, here and in the following.
Also below, for detectors coupled to thermal field states, we checked the source term and energy densities for highly oscillatory features to ensure that the truncation error has no significant impact within this simulation time.
§ DETECTOR RADIATION IN THE UNRUH EFFECT
The previous sections discussed basic properties of chain transformations applied to relativistic fields, and applied them to non-perturbatively calculate the energy density emitted from a particle detector at rest. In this section we use chain transformations to address the Unruh effect as a paradigmatic phenomenon of relativistic quantum fields, and calculate the radiation emitted from a uniformly accelerated detector.
While the Unruh effect itself happens in flat spacetime, it captures a central lesson of quantum field theory in curved spacetimes which is that particles are an observer-dependent concept.
At its core the Unruh effect is the observation that what an inertial observer (which we refer to as Minkowski observer) describes as the vacuum state of the field, a uniformly accelerated observer (Rindler observer) describes as a thermal state of the field.
Famously, the associated Unruh temperatureT_U=a/(2π)is proportional to the proper accelerationaof the observer (see, <cit.>).
In fact, the Unruh effect exhibits intriguing parallels to the thermal double construction of Sec. <ref>.
For a self-contained and detailed review of the Unruh effect and this perspective we refer to App. <ref>.
In the following, we summarize it in a high-level overview to introduce and motivate our modeling of the radiation emitted from a uniformly accelerated detector.
§.§ Modeling the coupling of an accelerated detector
The Unruh effect takes place in ordinary, flat Minkowski spacetime.
We restrict ourselves to the (1+1)-dimensional case and use(t,x)as the standard coordinates for the Minkowski observer.
The quantum field is in the vacuum state|0_M⟩with respect to the Minkowski observer. That means that the mode operatorsâ_k, that the Minkowski observer uses to expand the field in, annihilate the vacuum state:â_k|0_M⟩=0.
As will be clear shortly, the Minkowski modes have no equivalent in the framework as discussed so far and depicted in Fig. <ref>, which is why we intentionally denote them asâ_k.
The worldline of a uniformly accelerated observer (see Fig. <ref>), i.e., an observer undergoing constant proper acceleration a, is
t=1/asinh(aτ), x=1/acosh(aτ),
whereτis the proper time of the accelerated observer.
The so-called Rindler coordinates(τ,ξ)in (<ref>) (for details see App. <ref>) are the natural choice of coordinates for a uniformly accelerated observer, rather than the Minkowski coordinates.
Similarily, such an observer will use so-called Rindler modesb̂^R_Ωto expand the field, rather than the Minkowskiâ_k-modes.
Again we choose this notation intentionally because the Rindler modes play exactly the role of the modes labelled asb̂_kearlier, in the thermal double construction and in Fig. <ref>(b):
Because the Rindler annihilation operators are linear combinations both of Minkowski annihilation and of Minkowski creation operators (see (<ref>)), they do not share the vacuum state with the Minkowski modes.
Instead the Minkowski vacuum is a thermal state with respect to the Rindler modes, whose temperature is the Unruh temperature T_U = a/(2π), as seen from the expectation value (see (<ref>))
⟨0_M| b̂_Ω^{R†} b̂_{Ω'}^R |0_M⟩ = δ(Ω-Ω') / ( e^{2πΩ/a} - 1 ) ,
whereΩis the Rindler mode frequency.
The thermalb̂_k-modes in the thermal double construction are purified by their partnerb̂'_k-modes.
Where are then the partner modes of the Rindler modesb̂^R_Ωfound?
The uniformly accelerated observer above is restricted to the right Rindler wedge, the spacetime region of|t|<x, and theb̂^R_Ω-modes completely capture the field in this region.
Their purifying partner modesb̂^L_-Ωpertain analogously to the left Rindler wedge, the region|t|<-x, to which the mirror image (along the originx=0) of our uniformly accelerated observer (<ref>) is restricted.
As indicated by the notation, these modes have negative Rindler frequency and play exactly the role of theb̂'_k-modes in our discussion of the thermal double construction above.
Exactly as thed̂-modes are constructed in the thermal double construction, also the Rindler partner mode pairs can be transformed into pairs of so-called Unruh modesd̂_Ω(see (<ref>)).
For these modes the field state is the vacuum state,d̂_Ω|0_M⟩=0, and the chain modes for the numerical simulation of the system are constructed as linear combinations of Unruh modes.
Building on the relations summarized above, our approach to modeling the interaction of a uniformly accelerated detector with the quantum field in its Minkowski vacuum state is to map it to the interaction of a detector at rest with field modes in a thermal state.
That is, we use the total model Hamiltonian Ĥ = Ĥ_d + Ĥ_f + Ĥ_int with its three parts exactly in the same form as introduced in Sec. <ref> and Sec. <ref>, respectively.
However, the role of the Minkowski coordinates (t,x) is now played by the Rindler coordinates (τ,ξ), and the role of the (thermal) eigenmodes of the field operator is played by the Rindler modes.
In particular, as discussed in detail at the end of App. <ref>, the interaction Hamiltonian Ĥ_int takes the form (see (<ref>))
Ĥ_i = λ X̂ ⊗ ∫dξ f(ξ) ∂_τϕ̂(ξ) ,
where the detector smearing is performed with respect to Rindler coordinates.
The worldline of constant Rindler coordinateξ=0exactly is the detector worldline (<ref>).
Note that worldlines of constant Rindler coordinate ξ_0 correspond to a constant proper acceleration of a e^{-aξ_0}.
Hence, for our ansatz to model a detector experiencing a single constant proper acceleration, the width of the detector profile needs to be small, we requireaL ≪1.
A consequence of our approach is also that our calculations now yield the time evolution with respect to Rindler time τ, as opposed to Minkowski coordinate time t.
Concerning detector observables, the action of the time evolution operator exp(-iTĤ) is to simply evolve the detector state forward with respect to detector proper time by an amount T, since along ξ=0 the Rindler time coordinate τ equals the detector's proper time.
Concerning field observables, because the Rindler field Hamiltonian Ĥ' = ∫_{-∞}^{∞}dΩ Ω ( b̂_Ω^{R†} b̂_Ω^R - b̂_Ω^{L†} b̂_Ω^L ) generates Lorentz boosts in Minkowski spacetime, the action of exp(-iTĤ) is to, for example, transform a state defined on the hyperplane t=τ=0 to the hyperplane τ=T, which in Minkowski coordinates is the hyperplane t = tanh(aT) x.
In App. <ref> we discuss in detail how observables like the energy density of the field with respect to an inertial Minkowski observer are affected.
The simple way in which we handle this issue here is, figuratively speaking, to move the start of the interaction back in time: We move the onset of the interaction back to proper time τ=-T of the detector, at which point we assume the detector and field to be in a product initial state |ψ_0⟩⊗|0_M⟩, and then numerically calculate the action of exp(-iTĤ) on this state, which results in a state defined on the hyperplane t=0.
§.§ Results
In this section, we discuss the numerical results we obtained for three different acceleration values,aL=0.1, 0.2, 0.4.
The largest of these values is interesting for understanding the numerical performance of our method, even if it may well be viewed as being in conflict with our modeling requirement that aL≪1, as discussed above.
Nevertheless, by considering the dynamics of the occupation number expectation value of the detectorn̂, which is
n̂^HO = â^†â, n̂^TLS = (1/2)(σ̂_z + 𝕀)
for the HO detector and the TLS detector, respectively,
we see that the thermal response of the detector due to the Unruh effect is not too pronounced at these accelerations, for the numerical detector parameters that we consider.
Fig. <ref> shows the expectation value for initial states with zero and with one excitation for both detector types, for a detector with the same coupling parameters (Ω_d=2π/5, λ=2) as we considered in Sec. <ref> for a detector at rest.
For the TLS detector, the response of the detector occupation for the three acceleration values aL=0.1, 0.2, 0.4 is hardly distinguishable from that of a resting detector (aL=0). Even in the case of aL=1, which we show there for reference and which corresponds to an inverse Unruh temperature of β=2πL, the difference is still relatively small.
For the HO detector the differences are somewhat more pronounced, and already noticeable for the case of aL=0.4, corresponding to an inverse Unruh temperature of β=5πL.
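A rough back-of-the-envelope check (assuming the detector gap Ω_d=2π/5 is given in units of 1/L) supports this weak thermal response: the mean thermal occupation of a field mode at the detector gap, n̄ = 1/(e^{βΩ_d}-1) with β=2π/a, remains small for all accelerations considered here.

```python
import numpy as np

Omega_d = 2 * np.pi / 5          # detector gap in units of 1/L (assumed)
for aL in [0.1, 0.2, 0.4, 1.0]:
    beta = 2 * np.pi / aL        # inverse Unruh temperature in units of L
    nbar = 1.0 / np.expm1(beta * Omega_d)
    print(f"aL = {aL:3.1f}:  beta*Omega_d = {beta*Omega_d:6.2f},  nbar = {nbar:.2e}")
```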
Based on this observation, we expect the radiation from our accelerated detectors to correspond to the profiles observed in Fig. <ref> for resting detectors, after undergoing a Lorentz boost (or Doppler shift). Along each light ray in the emitted radiation, this boost depends on the detector's velocity at the intersection of the detector's worldline with that light ray, i.e., at the point in time at which the light ray would have been emitted from the detector.
In fact, the energy densities in Fig. <ref>, which shows the results for all four combinations of detector type and initial state for an acceleration of aL=0.1, exhibit the expected similarities.
Figs. <ref> and <ref> in App. <ref> confirm that this expectation agrees to a very high degree with our numerical results for the energy density emitted from a uniformly accelerated detector.
Fig. <ref>, on the one hand, shows how the emitted energy density profile changes as the acceleration is increased.
On the other hand, the double logarithmic plots exhibit some characteristic features more clearly, which we observe for both HO and TLS detectors.
First, as a consequence of the accelerated detector coupling evenly to the Rindler modes, the observed (Minkowski) energy density exhibits the following symmetry between left-moving and right-moving energy densities:
π̂^2_+(x=(1/a)e^{-aξ}, t)
= e^{2aξ} π̂^2_-(x=(1/a)e^{aξ}, t),
which can be read off directly from expressions (<ref>) and (<ref>).
Second, foraL=0.4, we see that both left-moving and right-moving energy densities appear to diverge asx→0.
This behaviour is in fact to be expected for all accelerations towards the coordinate origin x→0^+, if one takes into account that the end of the time evolution on the hyperplane t=0 is equivalent to a sudden switch-off of the interaction between detector and field:
Since we applied the detector smearing function (<ref>) with respect to Rindler coordinates, in terms of Minkowski coordinates it reads
f'(x) = f(ξ) = L/(π (L²+ξ²)) = L/(π (L² + (1/(4a²)) ln(a²x²)²)).
The derivative of this function diverges towards the coordinate origin, lim_{x→0^+} |∂_x f'(x)| = ∞. However, infinitely steep smearing functions lead to diverging energy densities for instantaneous interaction switch-offs for the detector model we employ here.
Furthermore, we highlight the oscillatory features appearing in the data foraL=0.4atx≈0.2Lin the right-moving energy density andx≈30Lin the left-moving energy density.
These features grow more dominant when the simulation is continued further and they appear at earlier simulation times for higher accelerations (respectively later for lower accelerations).
Based on our investigation of the truncation error above, we interpret them as indicating the onset of the truncation error effects at simulation times beyondt=7Lfor the chosen coupling parameters of our model and chosen chain length for our numerical simulations.
To reliably extend the simulation time beyond this regime one would therefore have to use more chain modes in the numerical calculations.
This could be of interest, for example, for further investigations of the radiation arising in scenarios in which the Unruh temperature is larger relative to the detector energy gap Ω_d: a longer chain would allow for longer simulation times, which in turn would allow one to cover an equal number of detector periods 2π/Ω_d for detectors with lower Ω_d.
§ CONCLUSIONS & OUTLOOK
In summary, we have utilized chain-mapping methods to numerically study the interaction between a scalar quantum field, and localized quantum emitters both at rest and undergoing uniform acceleration.
The numerically exact treatment of the entire system, including the field, allows efficient access to a large variety of system and field observables.
In addition, while our main focus rests on the emission and absorption of excitations from an emitter, which we monitor by calculating and time-evolving the field energy density, the method is not restricted to these observables.
While we focus on a two-level or harmonic emitter, respectively, coupled to its bath via a Lorentzian coupling profile, for which we find convenient expressions within the chain-mapping approach, other emitters may be considered as well.
Future work may use our approach to study bath or system-bath correlation functions, or to calculate the entanglement dynamics of multiple emitters coupled to a thermal bath.
In this context, an interesting question is whether the chain mapping can be efficiently implemented for two emitters coupled to the same continuum of bath modes.
This would pave the way for a new non-perturbative approach to many questions regarding communication, correlation or entanglement transfer between localized emitters and the quantum field.
In particular, in the context of relativistic scenarios, the present approach has the advantage of introducing no further causality-violating approximations to the model, such as a UV cutoff <cit.>.
Instead, the model is treated exactly within the maximal achievable simulation time determined by the number of chain modes used in the numerical simulation.
In Sec. <ref> we discussed an error bound which can be practically evaluated along with the numerical simulations and which rigorously controls the total error introduced to the time-evolved state due to the truncation of the chain.
Since this error bound appears to be too conservative for many applications of interest, it would be useful to derive error bounds that are tailored towards specific observables by taking into account their decomposition in terms of the chain modes.
This may be achieved building on existing error bounds <cit.>.
§ ACKNOWLEDGMENTS
R.H.J. gratefully acknowledges support by the Wenner-Gren Foundations and, in part, by the Wallenberg Initiative on Networks and Quantum Information (WINQ).
Nordita is supported in part by NordForsk.
J.K. gratefully acknowledges support from Dr. Max Rössler, the Walter Haefner Foundation and the ETH Zürich Foundation.
We thank Mari-Carmen Bañuls for fruitful discussions.
§ POLYNOMIAL COEFFICIENTS FROM NUMERICAL WEIGHT MOMENT MATRIX VIA CHOLESKY DECOMPOSITION
First we calculate a vector w⃗ with 2N+1 entries containing the moments w_k of the weight function, as given in (<ref>),
w⃗ = [w_k]_{k=0,…,2N}, w_k = ∫_{-∞}^{∞}dν ν^k w(ν).
Then we arrange these into an (N+1)×(N+1) matrix M⃗, which represents the scalar product defined by (<ref>) with respect to the monomials k^n,
M⃗_ij = w_{(i+j)}, i,j = 0,…,N.
We need the Cholesky decomposition of this matrix, M⃗ = L⃗L⃗^T. Here L⃗ is a lower triangular matrix, and it corresponds to a basis change matrix from the basis given by the polynomials 1, k, k², …, k^{2N} to the polynomials p_0(k), …, p_{2N}(k), which are orthonormal with respect to the inner product (<ref>).
In particular,
p_i(k)=∑_n=0^N(L⃗)^-1_i,n k^n, ⇒(L⃗)^-1_i,n=P_i,n,
the rows ofL⃗^-1contain the coefficients of the orthonormal polynomialsP_i,nas defined in (<ref>).
In practice, we obtained the best performance, in terms of speed and precision, by directly implementing the standard algorithm for the Cholesky decomposition and its inverse. For the matrix L⃗ that is
L⃗_jj = √( M⃗_jj − ∑_{m=0}^{j-1} L⃗_jm² ), L⃗_ij = ( M⃗_ij − ∑_{m=0}^{j-1} L⃗_im L⃗_jm ) / L⃗_jj for i > j,
and its inverse can then be constructed by forward substitution as
(L⃗^{-1})_ii = 1/L⃗_ii, (L⃗^{-1})_ij = −(1/L⃗_ii) ∑_{m=j}^{i-1} L⃗_im (L⃗^{-1})_mj for i > j.
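A minimal high-precision sketch of this procedure (using mpmath for illustration; the moment matrix M is assumed to be supplied as a nested list of mpmath numbers, and function names are illustrative) could look as follows:

```python
from mpmath import mp, mpf, sqrt

mp.dps = 200  # working precision in decimal digits

def cholesky_and_inverse(M):
    """Cholesky factor L of a symmetric positive-definite moment matrix M = L L^T,
    together with its inverse; the rows of L^{-1} contain the coefficients P_{i,n}
    of the orthonormal polynomials."""
    n = len(M)
    L = [[mpf(0)] * n for _ in range(n)]
    for j in range(n):
        s = M[j][j] - sum(L[j][m] ** 2 for m in range(j))
        L[j][j] = sqrt(s)
        for i in range(j + 1, n):
            L[i][j] = (M[i][j] - sum(L[i][m] * L[j][m] for m in range(j))) / L[j][j]
    Linv = [[mpf(0)] * n for _ in range(n)]
    for i in range(n):
        Linv[i][i] = 1 / L[i][i]
        for j in range(i - 1, -1, -1):
            Linv[i][j] = -sum(L[i][m] * Linv[m][j] for m in range(j, i)) / L[i][i]
    return L, Linv
```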
§ MPS SIMULATIONS AND CHOICE OF TIME STEP
Real-time evolution of matrix product states has been reviewed in Ref. <cit.>, where the Trotter or time-evolving block decimation (TEBD) method is discussed with its strengths and weaknesses.
One of the critical numerical parameters within TEBD is the time stepdt, for which the usual trade-off consists in keeping the introduced errors per time step small, while maintaining a reasonable and manageable number of time-evolution steps for the total simulation period of interest.
Here we comment on our choice of suitable time stepsdt, for time evolving the state|ψ_t+dt⟩ = Û(dt) |ψ_t⟩, which we use in the simulations with which we obtain the results in the main text.
Too large time steps.—
When decomposing the time-evolution operator using the Trotter method, ideally the time steps should be sufficiently small.
When comparing panels (a) and (b) in Fig. <ref>, we indeed find that a shorter time step (dt = 0.001) reproduces the profile of a simply right-moving energy density more faithfully than a larger time step (dt = 0.01), up to a final propagation timet/L = 7.
For our simulations, this contains a first lesson:
(i) The choice of a suitable time step is always tied to the total propagation time, as time-step errors accumulate during time evolution.
Since we focus on simulation times of up tot/L = 7throughout most of this work, the time stepdt = 0.001seems preferable (overdt = 0.01and any larger time steps) based on numerical examples like this.
Too small time steps.—
On the other hand, and based on the same example of Fig. <ref>, we find that too small time steps lead to inaccurate predictions of the energy density.
When comparing panels (b) and (c) in Fig. <ref>, we see that the energy density deviates from its expected behavior already for relatively short times,t/L = 1.
In order to understand this, in Fig. <ref> we show the occupations⟨n̂_i ⟩= ⟨ĉ_i^†ĉ_i ⟩of all250chain modes for the same three propagation times as in Fig. <ref>.
When comparing Fig. <ref>(c) with the two remaining panels, we find that the excitation, which is at the first chain mode att=0, does not propagate through the chain for the smallest time step (dt = 10^-5).
We interpret this time step to be too small given the maximum bond dimension ofχ= 300, used to obtain the two figures <ref> and <ref>.
When the truncation error associated with a given bond dimension is larger than the change of the state induced by a single time step, the Trotter method fails to meaningfully evolve the MPS.
As a result, in the above example the initial excitation almost does not propagate through the chain.
This provides us with a second useful lesson:
(ii) The choice ofdtmust take into account the truncation error due to restricting the MPS to a realistic bond dimension.
If the time step is too small, the latter dominates and further decreasing the step size is counterproductive.
Here we showed the result for a very small time step,dt=10^-5.
Based on further numerical experiments that we do not show, we finally choose a time step in the range 10^{-3} ≤ dt ≤ 5·10^{-3}, which we use throughout the main text.
§ MINKOWSKI, RINDLER AND UNRUH MODES IN THE UNRUH EFFECT
The purpose of this appendix is to give a brief, but self-contained review of the different basis sets of modes relevant to the Unruh effect, Minkowski, Rindler and Unruh modes, and their relation to the chain modes and thermal double construction of the previous sections.
Table <ref> gives a compact overview of the modes appearing, and the relations between them.
Finally, the appendix arrives at the Bogoliubov transformation expressing the Minkowski mode operators in terms of the chain mode operators used in the numerical calculations.
Bogolubov transformations.—
It is central to the Unruh effect, as it is to many phenomena in quantum field theory in curved spacetime, that different observers may choose different sets of modes to expand the field observables, and to interpret the quantum state of the field <cit.>.
In 1+1-dimensional Minkowski spacetime, the general expansion of the amplitude of the massless scalar Klein-Gordon field that we consider here, in the Heisenberg picture, takes the form
ϕ̂(t,x)=k u_k(t,x)â_k+u_k^*(t,x)â_k^†,
where the u_k and their complex conjugates form a complete basis of complex solutions to the Klein-Gordon field equation, and â_k are the associated mode operators. That is, the mode operators fulfill the canonical commutation relations [â_k, â_{k'}^†] = δ(k-k'), and the set of solutions is orthonormal with respect to the Klein-Gordon inner product
(u_k, u_{k'}) = -i∫dx ( u_k ∂_t u_{k'}^* - (∂_t u_k) u_{k'}^* ) = δ(k-k') ,
where the integral is evaluated on a hyperplane of constant Minkowski coordinate timet.
(The inner product can be evaluated on any other Cauchy surface of the spacetime, and the result is independent of this choice <cit.>.)
Given a second complete basis of solutions, sayv_l(t,x), with associated mode operatorsâ'_l, expressions can be transformed from one basis to the other by the Bogoliubov transformations <cit.>
v_l=kα_lku_k+β_lku_k^*
, â'_l= kα_lk^*â_k-β^*_lkâ_k^†
where the Bogoliubov coefficients are given by
α_lk = ( v_l,u_k), β_lk=-(v_l,u^*_k).
The inverse transformations read
u_k=lα_lk^* v_l-β_lk v^*_l,
â_k=lα_lkâ'_l+ β^*_lkâ'_l^† .
Minkowski modes.—
With respect to the standard coordinates(t,x), the Minkowski metric readss^2=t^2-x^2and the massless Klein-Gordon wave equation reads
(∂_t^2-∂_x^2)ϕ(t,x)=0.
The plane wave solutions
u_k(t,x) = e^{-i|k|t + ikx} / √(4π|k|)
yield the orthonormal complete set of solutions which is the canonical choice of basis for inertial observers.
Since i∂_t u_k = |k| u_k, they are positive-frequency eigenmodes with respect to the generator of translations along coordinate time t, i.e., they are eigenmodes of the Hamiltonian which generates time evolution with respect to the proper time of observers at rest relative to the (t,x) coordinates.
These modes separate into left-moving modesu_ω^+and right-moving modesu_ω^-, with positive frequencyω>0,
u_ω^±(t,x) = u_{∓ω}(t,x) = (1/√(4πω)) e^{-iω(t±x)}.
To these mode functions we associate the mode operatorsâ^±_ω, resulting in the mode operator expansion for the amplitude operator of the quantum field,
ϕ̂(t,x)= ω0∞∑_±=+,- u_ω^±(t,x) â^±_ω+u_ω^± (t,x)^* â^±_ω^† .
Here, in the context of the Unruh effect, we denote the Minkowski mode operators by the letterârather than the letterb̂, because
the Minkowski modes neither generically appear in the interaction part of the Hamiltonian, nor in the part that generates the relevant time evolution. Instead, this role is played by the Rindler modes, which are the generic choice of field modes for uniformly accelerated observer, and arise as plane wave solutions with respect to the Rindler coordinates. Hence, consistent with our notation throughout the article, we denote the Rindler modes by the letterb̂.
Rindler modes.—
The worldline t(τ) = sinh(aτ)/a, x(τ) = cosh(aτ)/a describes an observer moving through Minkowski spacetime with constant proper acceleration a. This worldline is entirely located within the region |t| < x < ∞, the so-called right Rindler wedge, and is, in fact, causally separated from the left Rindler wedge, where -∞ < x < -|t|.
The right Rindler wedge is covered by the Rindler coordinates
t = (e^{aξ}/a) sinh(aτ), x = (e^{aξ}/a) cosh(aτ),
⇔ τ = (1/2a) ln( (x+t)/(x-t) ), ξ = (1/2a) ln( a²(x²-t²) ).
with the timelike coordinateτand the spacelike coordinateξ.
These coordinates are the canonical choice of coordinates for uniformly accelerated observers because worldlines of constant Rindler coordinates are worldlines of constant proper acceleration. In fact, the proper time σ of an observer moving along the worldline of constant ξ(τ) = ξ_0 is σ(τ) = e^{aξ_0} τ, and its proper acceleration is a_0 = a e^{-aξ_0}.
In particular, atξ=0, we recover the worldline with proper accelerationa.
With respect to the Rindler coordinates, the Minkowski metric takes the form ds² = dt² - dx² = e^{2aξ}(dτ² - dξ²). Accordingly, the massless Klein-Gordon equation takes the same form as with respect to Minkowski coordinates, namely,[For a treatment of the detector response in the Unruh effect for massive fields and in higher dimensions see, <cit.> among others.]
(∂_τ^2-∂_ξ^2)ϕ=0.
This form of the wave equation suggests to consider the left-moving and right-moving Rindler plane wave modes
v^{R±}_Ω(τ,ξ) = (1/√(4πΩ)) e^{-iΩ(τ±ξ)},
for the expansion of field operators.
From
τ±ξ=±ln(a(x±t))/a,
one sees that these wavefunctions, with respect to Minkowski coordinates, extend to left-moving and right-moving solutions with support to the right of the null linest=-xandt=x, respectively:
v^{R±}_Ω(t,x) = (1/√(4πΩ)) e^{∓i(Ω/a) ln(a(x±t))}, if (x±t) > 0.
Together, with their mirrored versions which have support on the left side of these null lines
v^{L±}_Ω(t,x) = (1/√(4πΩ)) e^{±i(Ω/a) ln|a(x±t)|}, if x±t < 0,
these modes form a complete orthonormal set of modes,
( v_Ω^S±, v_Ω'^S'±') = δ_±,±' δ_S,S' δ(Ω-Ω'),
which can be used to expand the field operator as <cit.>
ϕ̂(t,x)=Ω0∞∑_±=+,-
S=L,R v_Ω^S±(t,x) b̂^S±_Ω+v_Ω^S± (t,x)^* b̂^S±_Ω^†.
The Bogoliubov coefficients for the transformation from Minkowski to Rindler modes
v_Ω^{R±} = ∫_0^∞dω ( α_Ωω^{R±} u^±_ω + β_Ωω^{R±} u^{±*}_ω )
are given in App. <ref>.
For the right Rindler wedge they are
α^{R±}_{Ωω} = ( √(Ω) e^{πΩ/(2a)} / (2π a √(ω)) ) (ω/a)^{±iΩ/a} Γ(∓iΩ/a),
β^{R±}_{Ωω} = -( √(Ω) e^{-πΩ/(2a)} / (2π a √(ω)) ) (ω/a)^{±iΩ/a} Γ(∓iΩ/a),
and for the left Rindler wedge modes they are related by complex conjugationα_Ωω^L±= (α^R±_Ωω)^*andβ_Ωω^L±= (β^R±_Ωω)^*.
Lorentz boost.—
As discussed above, the Rindler coordinates are closely related to uniformly accelerated observers:
Worldlines of fixed Rindler spatial coordinate correspond to uniformly accelerated worldlines. Moreover, these worldlines correspond to orbits of the Lorentz boost operator, and the Rindler modes are, in fact, eigenmodes of the Lorentz boost operator. We use this relation for our numerical calculation and use the Lorentz boost operator as the Hamiltonian which generates time evolution along the accelerated detector's worldline.
To illustrate this relation, consider the Lorentz boost, parametrized by a real parameterT, acting on points in Minkowski spactime as
[ t; x ]↦[ t'; x' ] =
[ cosh(a T) sinh(a T); sinh(a T) cosh(a T) ][ t; x ]
.
Inside the right Rindler wedge it transforms points exactly such as to addTto their Rindler time coordinate
aξ/a[ sinh(a τ); cosh(a τ) ]↦aξ/a[ sinh(a (T+τ) ); cosh(a (T+τ)) ].
Accordingly, under this Lorentz boost the Rindler modes in the right Rindler wedge acquire only a complex phase
v_Ω^{R±}(t',x') = e^{-iΩT} v_Ω^{R±}(t,x).
Hence, they are positive frequency modes with respect to Lorentz boosts and we refer toΩas their Rindler frequency, which here in the right Rindler wedge is positive.
In the left Rindler wedge, however, the Rindler modesv^L±_Ωhave a negative Rindler frequency. To see this we cover the left Rindler wedge by the coordinates(τ̃,ξ̃)with
t = -(e^{aξ̃}/a) sinh(aτ̃), x = -(e^{aξ̃}/a) cosh(aτ̃)
.
The Lorentz boost (<ref>) still increases the parameterτ̃,
aξ̃/a[ sinh(a τ̃); cosh(a τ̃) ]↦aξ̃/a[ sinh(a (T+τ̃) ); cosh(a (T+τ̃)) ],
however, since the left Rindler modes with respect to these coordinates read
v_Ω^L±(τ̃,ξ̃)
=1/√(4πΩ)Ω (τ̃±ξ̃)
they have a negative Rindler frequency, acquiring the phase
v_Ω^{L±}(t',x') = e^{iΩT} v_Ω^{L±}(t,x)
under the Lorentz boost,
which moves points in the left Rindler wedge into their causal past.
With these relations at hand we can express the Lorentz boost Hamiltonian in terms of the mode operators associated with the left and right Rindler modes:
Ĥ_L = ∑_{±=+,-} ∫_0^∞dΩ Ω ( b̂^{R±†}_Ω b̂^{R±}_Ω - b̂^{L±†}_Ω b̂^{L±}_Ω ).
In the same way as the Minkowski field Hamiltonian Ĥ_f = ∫_0^∞dω ∑_± ω â^{±†}_ω â^±_ω generates translations along the Minkowski time coordinate t, Ĥ_L generates translations along the Rindler time coordinates τ and τ̃.
In particular, this Hamiltonian generates time evolution with respect to the proper time of the uniformly accelerated observer moving along the wordlline atξ=0with proper accelerationain the right Rindler wedge.
Unruh temperature and Unruh modes.—
At its core, the Unruh effect is the observation that when the field is in the vacuum state with respect to Minkowski modes, then the Rindler modes of one wedge are in a thermal state with respect to the Lorentz boost operator.
In fact,
we find that in the Minkowski vacuum (see (<ref>))
⟨0_M| b̂^{R±†}_{Ω'} b̂^{R±}_Ω |0_M⟩ = ∫_0^∞dω β^{R±}_{Ωω} β^{R±*}_{Ω'ω}
= δ(Ω-Ω') / ( e^{2πΩ/a} - 1 ) ,
the number expectation value of the Rindler modes equals thermal expectation value with the celebrated Unruh temperature given byT_U=a/(2π).
The analogous relation also holds for the left Rindler wedge modes.
Note here also, that the Rindler modes not having any cross-correlations, identifies them as the natural basis of normal modes to use in the Rindler wedges.
The HamiltonianĤ_Lthus takes the same role as the doubled Hamiltonian (<ref>) in the scenario of an inertial detector coupling to a thermal field state.
In the case of a thermal field, we use pairwise squeezing to transform a pairb̂_iandb̂_i'of thermal eigenmodes of the Hamiltonian into a pair of eigenmodesd̂_iandd̂_i'which are in their vacuum state.
In the same way, in the context of the Unruh effect, we can use pairwise squeezing of two Rinder modesb̂_Ω^R±andb̂_Ω^L±to transform them into a pair of Unruh modesd̂_Ω^±andd̂_-Ω^±which share their vacuum state with the Minkowski modes,d̂^±_Ω|0_M⟩=0.
Following the same steps as in the thermal case, the mode functions associated tod̂^±_Ωwhich have positive Rindler frequencyΩ>0are
w_Ω^± = ( e^{Ωπ/(2a)} v_Ω^{R±} + e^{-Ωπ/(2a)} (v_Ω^{L±})^* ) / √(2sinh(Ωπ/a)) = cosh(r) v_Ω^{R±} + sinh(r) (v_Ω^{L±})^*
,
where we introduced r such that cosh(2r) = 1/tanh(Ωπ/a) for compact notation.
The Unruh modes associated tod̂^±_-Ωwith negative Rindler frequency,Ω<0, are accordingly
w_Ω^± =cosh(r) v_(-Ω)^L± +sinh(r) (v_(-Ω)^R±)^*.
By construction, the Unruh modes are thus linear combinations of positive frequency Minkowski modes only,
w_Ω^± = ∫_0^∞dω γ_Ωω^± u^±_ω ,
withγ_Ωω^±as in (<ref>), for both the positive and the negative Rindler frequency modes.
This implies, most importantly, that d̂_Ω^±|0_M⟩ = 0, i.e., the vacuum state of the Unruh modes coincides with the Minkowski vacuum.
And the Lorentz boost Hamiltonian, in terms of the Unruh mode operators, reads
Ĥ_L = ∑_{±=+,-} ∫_{-∞}^{∞}dΩ Ω d̂_Ω^{±†} d̂_Ω^± .
Coupling to even Rindler modes.—
To couple the detector to the field along a uniformly accelerated observer's worldline, we now replace the Minkowski coordinates, used for the detector at rest in (<ref>),
by Rindler coordinates, and obtain
Ĥ_i = λ X̂ ⊗ ∫dξ f(ξ) ∂_τϕ̂(ξ) .
Since the detector's smearing functionfextends over a length scaleL, the different points of the detector actually experience different proper accelerations. Therefore, we assume thataL≪1so that such finite-size effects can be neglected.
Also note that at this point we switch to the Schrödinger picture, which is also employed in the numerical calculations. We take the observables in the Schrödinger picture to be equal to the observables in the Heisenberg picture on the spacelike hypersurface t=τ=0.
The field observable ∂_τϕ̂(ξ) = i[Ĥ_L, ϕ̂(ξ)] to which the detector couples can be expanded as
∂_τϕ̂(ξ) = ∑_± ∫_0^∞dΩ (-iΩ) v_Ω^{R±}(ξ) b̂^{R±}_Ω + H.c.
= ∑_± ∫_0^∞dΩ ( -i√(Ω) e^{∓iΩξ} / √(4π) ) b̂^{R±}_Ω + H.c. ,
from which it is clear that the time evolution of this model is equal to the time evolution of a detector at rest coupling to a field in a thermal state.
As before, in the numerical calculations we use that, since the atom couples symmetrically to left- and right-moving modes, it couples only to the even sector of the field, spanned by b̂_Ω^{Re}, but not to the odd sector, spanned by b̂_Ω^{Ro}, where
b̂_Ω^R e=1/√(2)(b̂_Ω^R++b̂_Ω^R-), b̂_Ω^R o=1/√(2)(b̂_Ω^R+-b̂_Ω^R-).
Accordingly, for the chain transformation and the numerical calculations, we use the corresponding even Unruh modes
d̂_Ω^e
=cosh(r) b̂_Ω^Re -sinh(r) b̂_Ω^Le^†
=d̂^+_Ω+d̂_Ω^- /√(2),
d̂_-|Ω|^e
=cosh(r) b̂_Ω^Le -sinh(r) b̂_Ω^Re^†
=d̂^+_-|Ω|+d̂_-|Ω|^-/√(2),
whose relation to the other modes is summarized also in Tab. <ref>.
The chain modes for the Unruh effect are thus the same chain modes as for a detector at rest coupling to a thermal field, when settingβ=2π/a.
They are composed from the even Unruh modes only,
ĉ_i = ∫_{-∞}^{∞}dΩ sgn(Ω) f_{|Ω|} e^{Ωπ/(2a)} / √(|sinh(Ωπ/a)|) p_i(Ω) d̂^e_Ω .
The odd Unruh modesd̂^o_Ω=(d̂_Ω^+-d̂_Ω^-)/√(2), however, do not couple to the atom, and remain in their vacuum state.
App. <ref>, based on the Bogolubov transformations reviewed in App. <ref>, derives closed form expressions for the energy densityπ̂^∓(x)as measured by an observer at rest with respect to the Minkowski coordinates.
§ LORENTZ BOOST OF MINKOWSKI OBSERVABLES UNDER RINDLER TIME EVOLUTION
Rindler time evolution.—
The expression (<ref>) gives the expectation value of the right-moving and left-moving Minkowski energy density for a state defined on the spacetime hyperplaneτ=t=0.
Under the Rinder time evolutionexp(-TĤ)which we apply in our numerical simulation, the observable on the left-hand side of the expression transforms non-trivially.
Hence, after applyingexp(-TĤ), which corresponds to the Lorentz boost (<ref>), we need to reinterpret the expectation value given by the right-hand side of (<ref>).
For this, there are two alternatives.
The first alternative is to interpret the results as being measured on the hyperplanet=0, as it stands on the left-hand side of (<ref>), but in a scenario where the interaction between detector and field started on the hyperplanet=-tanh(aT)x(corresponding toτ=-T).
This alternative makes use of the Minkowski vacuum being invariant under Lorentz boosts. Hence, the initial state, which is taken to be a product state between the detector initial state and the Minkowski vacuum of the field, can be boosted back in Rindler time.
The second alternative interprets the result as arising in a scenario where the interaction between detector and field starts on the hyperplaneτ=t=0where the overall state is still given by the initial product state.
In this scenario, the expectation value given by the right-hand side of (<ref>) corresponds to the observable
e^{±2aT} π̂^±(t',x')² ,
with(t',x')=(xsinh(aT),xcosh(aT)), as we derive in the following.
To derive this transformation, we first use (<ref>) to obtain
∂_t = (∂τ/∂t) ∂_τ + (∂ξ/∂t) ∂_ξ
= cosh(aτ) e^{-aξ} ∂_τ - sinh(aτ) e^{-aξ} ∂_ξ,
∂_x = (∂τ/∂x) ∂_τ + (∂ξ/∂x) ∂_ξ
= cosh(aτ) e^{-aξ} ∂_ξ - sinh(aτ) e^{-aξ} ∂_τ,
∂_τ = a (x∂_t + t∂_x),
∂_ξ = a (x∂_x + t∂_t).
Hence, with respect to Rindler coordinates the (left-moving and right-moving) field momentum on the hyperplanet=τ=0is
π̂^±(x)|_{t=0} = (1/2)(∂_t ± ∂_x)ϕ̂(x)|_{t=0}
= (e^{-aξ}/2)(∂_τ ± ∂_ξ)ϕ̂(ξ = ln(ax)/a)|_{τ=0}.
Under the Rindler time evolution(τ,ξ)→(τ+T,ξ), which is nothing but the Lorentz boost (<ref>), this observable transforms into
(e^{-aξ}/2)(∂_τ ± ∂_ξ)ϕ̂(ξ)|_{τ=T}
= (e^{-aξ}/2) a(x±t) (∂_t ± ∂_x) ϕ̂( t(ξ,T), x(ξ,T) )
= e^{±aT} π̂^±(x cosh(aT))|_{t = x sinh(aT)} .
Interpretation of Unruh radiation.—
Both as a benchmark and for an interpretation of the energy density emitted from an accelerated detector, we can boost the energy density emitted from a detector at rest, as calculated in Sec. <ref>, and pretend as if it emanates from the accelerated worldline (<ref>).
Then we compare the thereby obtained, boosted energy density profile to the results from Sec. <ref>.
Fig. <ref> compares this boosted radiation to our results from Sec. <ref> for the left-moving radiation.
In detail, we perform the following transformation of the data from the resting detector:
Fig. <ref> shows the right-moving energy density expectation value,π̂_-^2(x,t), emitted from detectors that are at rest.
We here use the data for an interaction time oft=7L.
By neglecting the width of the detector, we can interpret the energy densityπ̂_-^2(x,t=7L)in the interval0≤x≤7Las emanating from the detector (atx=0) at times-7L ≤t ≤0.
Since the detector is at rest, its proper timeτ=tequals the coordinate time.
It follows from the derivation of (<ref>) above that the energy density π̂^±, which is emitted from the worldline (<ref>) at proper time -τ, is Doppler shifted by a factor of e^{∓aτ} with respect to the resting observer.
Radiation emanating from the worldline at τ=-T has light cone coordinates z^± = t±x = ±e^{∓aT}/a, and thus arrives on the hyperplane t=0 at the spatial coordinate x = e^{∓aT}/a.
In conclusion, the boosted dataπ̂^2_+_bstplotted in Figs. <ref> and <ref> is obtained from the left-moving energy densityπ̂^2_+(x,t=7L)_res(which is related to the right-moving energy density plotted in Fig. <ref> byπ̂^2_+(x,t)=π̂^2_-(-x,t))
as
π̂^2_+(x)_bst = ( 1/(a² x²) ) π̂^2_+( -ln(ax)/a )_res .
The figures show that the boosted data resemble the results calculated for the accelerated detectors to a very good degree.
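This mapping of the resting-detector data is a simple post-processing step; a minimal sketch (assuming the resting left-moving energy density has been sampled on an increasing position grid; names are illustrative) is:

```python
import numpy as np

def boost_resting_profile(x_bst, x_res, pi2_plus_res, a):
    """Boosted left-moving energy density from resting-detector data.

    pi2_plus_res : <pi_+^2(x, t=7L)>_res sampled on the increasing grid x_res
    x_bst        : positions x > 0 at which the boosted profile is evaluated
    Implements <pi_+^2(x)>_bst = <pi_+^2(-ln(a x)/a)>_res / (a x)^2
    by linear interpolation of the resting data.
    """
    x_bst = np.asarray(x_bst, dtype=float)
    x_map = -np.log(a * x_bst) / a
    res_interp = np.interp(x_map, x_res, pi2_plus_res)
    return res_interp / (a * x_bst) ** 2
```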
§ BOGOLIUBOV TRANSFORMATIONS BETWEEN MINKOWSKI, RINDLER AND UNRUH MODES
This section reviews the calculation of the Bogoliubov transformations between Minkowski, Rindler and Unruh modes.
In order to express the Rindler modes as linear superpositions of Minkowski modes, we need to calculate the Klein-Gordon inner products of u_ω^± and v_Ω^{S±}, with S=R,L, and their conjugates.
To see that any inner product between a left-moving and a right-moving solution vanishes, note that for left- and right-moving wave functions we have
∂_t u_ω^±=±∂_x u_ω^±, ∂_t v_Ω^S± =±∂_x v_Ω^S±.
Applying integration by parts and making use of the fact the boundary terms may be discarded we obtain:
( v_Ω^S±,u_ω^∓) = -x v_Ω^S±∂_t u_ω^∓^*-(∂_t v_Ω^S±) u_ω^∓^*
= -x (∓1) v_Ω^S±∂_x u_ω^∓^* ∓ (∂_x v_Ω^S±) u_ω^∓^*
= ±x v_Ω^S±∂_x u_ω^∓^*-v_Ω^S±∂_x u_ω^∓^* =0
For the non-vanishing inner products, we find:
α^{R±}_{Ωω} = (v_Ω^{R±}, u_ω^±)
= -i∫dx ( v_Ω^{R±} ∂_t u_ω^{±*} - (∂_t v_Ω^{R±}) u_ω^{±*} )
= ∓2i ∫dx v_Ω^{R±} ∂_x u_ω^{±*}
(t=0)= ( √(ω)/(2π√(Ω)) ) ∫_0^∞dx e^{∓i(Ω/a)ln(ax)} e^{±iωx}
The integral is not convergent because the Bogoliubov transformation relates two sets of improper, continuous modes. We can obtain a regularized expression by introducing a regularizing factor e^{-ϵax}, where we have to take the limit ϵ→0^+ after integrating against properly normalized expressions.
With this we obtain (see 3.381.4 in <cit.>),
α^{R±}_{Ωω} = ( √(ω)/(2π√(Ω)) ) a^{∓iΩ/a} ∫_0^∞dx x^{∓iΩ/a} e^{±iωx - ϵax}
= ∓i ( √(Ωω)/(2π a) ) a^{∓iΩ/a} / (ϵa ∓ iω)^{1∓iΩ/a} Γ(∓iΩ/a)
⟶(ϵ→0^+)
( √(Ω)/(2π a√(ω)) ) e^{πΩ/(2a)} (ω/a)^{±iΩ/a} Γ(∓iΩ/a).
Similarily,
β^{R±}_{Ωω} = -(v_Ω^{R±}, u_ω^{±*})
= ±2i ∫dx v_Ω^{R±} ∂_x u_ω^± = ±2i ∫_0^∞dx ( 1/(4π√(Ωω)) ) e^{∓i(Ω/a)ln(ax)} (∓iω) e^{∓iωx} = ( √(ω)/(2π√(Ω)) ) ∫_0^∞dx e^{∓i(Ω/a)ln(ax)} e^{∓iωx},
which is regularized to
β^{R±}_{Ωω} = ( √(ω)/(2π√(Ω)) ) ∫_0^∞dx e^{∓i(Ω/a)ln(ax)} e^{∓iωx - ϵax}
= ∓i ( √(Ωω)/(2π a) ) a^{∓iΩ/a} / (ϵa ± iω)^{1∓iΩ/a} Γ(∓iΩ/a)
⟶(ϵ→0^+) -( √(Ω)/(2π a√(ω)) ) e^{-πΩ/(2a)} (ω/a)^{±iΩ/a} Γ(∓iΩ/a).
The coefficients for the left Rindler wedge modes are related to the right ones by complex conjugation,
α^L±_Ωω =(v_Ω^L±,u_ω^±)
= ±2 x v_Ω^L±∂_x u_ω^±^*
t=0= -√(ω)/2π√(Ω)x-∞0 ±Ω/aln|ax|±ω x
= (α^R±_Ωω)^*
and similarily
β_Ωω^L± = (β^R±_Ωω)^*.
Using the regularized expression we calculate the Minkowski vacuum expectation value for the Rindler modes to be
⟨b̂_Ω^{R±†} b̂_{Ω'}^{R±}⟩ = ∫_0^∞dω β^{R±}_{Ωω} β^{R±*}_{Ω'ω}
= ∫_{-∞}^{∞}dν ( √(ΩΩ')/(4π² a²) ) e^{-(Ω+Ω')π/(2a)} e^{±i(Ω-Ω')ν/a} Γ(∓iΩ/a) Γ(±iΩ'/a)
= δ(Ω-Ω') / ( e^{2Ωπ/a} - 1 ).
Both for left-movers and right-movers we have the relation
e^{-πΩ/(2a)} α^{R±}_{Ωω} + e^{πΩ/(2a)} β^{L±*}_{Ωω}
= e^{-πΩ/(2a)} α^{R±}_{Ωω} + e^{πΩ/(2a)} β^{R±}_{Ωω} = 0,
which is easy to read off from the regularized expressions, but which also holds without regularization when considering contour integrals.
These relations warrant that the Unruh modes are (linear combinations of only) positive Minkowski frequency modes.
WithΩ>0we have
w_Ω^± = Ωπ/(2a) v_Ω^R±+-Ωπ/(2a) v_Ω^L±^* /√(2sinh(πΩ/a)) =1/√(2sinh(πΩ/a))ω0∞( Ωπ/2aα^R±_Ωω+-Ωπ/2aβ^R±_Ωω) u^±_ω + (Ωπ/2aβ^R±_Ωω+-Ωπ/2aα^R±_Ωω)_=0 u^±_ω^*
=ω0∞√(Ωsinh(πΩ/a))/π a√(2 ω)(ω/a)^±Ω/a Γ(∓Ω/a) u^±_ω.
Similarly, for the Unruh modes with negative Rindler frequency,Ω<0, we have
w_Ω^± =|Ω|π/(2a) v_|Ω|^L±+-|Ω|π/(2a) v_|Ω|^R±^* /√(2sinh(π|Ω|/a)) =ω0∞( |Ω|π/2aα^R±_|Ω|ω^*+-|Ω|π/2aβ^R±_|Ω|ω^* ) u^±_ω/√(2sinh(π|Ω|/a)) =ω0∞√(Ωsinh(πΩ/a))/π a√(2 ω)(ω/a)^±Ω/a Γ(∓Ω /a) u^±_ω
thus we can unify the formulae for negative and positiveΩand write
w_Ω^± = ∫_0^∞dω γ^±_Ωω u_ω^±,
γ^±_Ωω = ( √(Ω sinh(πΩ/a))/(π a√(2ω)) ) (ω/a)^{±iΩ/a} Γ(∓iΩ/a).
§ ENERGY DENSITY FROM A RESTING DETECTOR COUPLING TO A THERMAL FIELD STATE
Also for a detector at rest coupling to a thermal field state, the energy density which is emitted into the field can be calculated from the numerical data for the chain modes.
In this appendix we discuss the evaluation of π̂_±^2 in this context and derive its expansion in terms of the chain modes obtained from the thermal double construction.
In the thermal state of the field, even before the detector couples to the field, the energy density expectation value is not zero.
This is because of the thermal occupation (<ref>) of the field modes b̂_k (all the modes b̂^(e)_k, b̂^(o)_k and b̂_k are thermal).
Hence the expectation value of the left-moving and right-moving energy density (<ref>) in this state is
⟨π̂_∓^2(x)⟩ = ∫_0^∞dk ∫_0^∞dk' ( √(kk')/(2π) ) e^{∓i(k-k')x} ⟨b̂^†_{±k} b̂_{±k'}⟩
= ∫_0^∞dk k/( 2π(e^{βk}-1) ) = π/(12β²)
.
Even and odd modes contribute equally to this background energy density. The contribution from the odd sector remains constant in time, whereas the contribution from the even sector is modulated due to the interaction with the atom. Hence, we need to express
(π̂^(e)_∓)^2 =
ω0∞ω'0∞√(ωω')/4π∓ (ω-ω') xb̂^(e)_ω^†b̂^(e)_ω' -ω0∞ω'0∞√(ωω')/4π± (ω+ω')xb̂^(e)_ωb̂^(e)_ω'
in terms of the chain mode operators.
Starting from (<ref>), and using
the inverse transformation of (<ref>) which is
d̂_ω=∑_i (ω) √(|ω|)βω/4-L|ω|/√(4π |sinh(βω/2)|) p_i(ω)ĉ_i,
we have
b̂^(e)_ω =1/√(2sinh(βω/2))(βω/4d̂_ω+-βω/4d̂^†_-ω)
=∑_i=0^∞∑_k=0^i P_i,k√(ω)-Lω/√(8π)sinh(ω/θ)ω^k ( ω/θĉ_i + (-1)^k -ω/θĉ_i^†).
With this at hand
b̂^(e)_ω^†b̂^(e)_ω'
= ∑_i,j=0^∞∑_k=0^i∑_l=0^j √(ωω')-L(ω+ω') P_i,kP_j,lω^kω'^l/8πsinh(βω/2)sinh(βω'/2)( ω/θĉ_i^† + (-1)^k -ω/θĉ_i )( βω'/2ĉ_j + (-1)^l -βω'/2ĉ_j^†)
and
b̂^(e)_ωb̂^(e)_ω'
= - ∑_i,j=0^∞∑_k=0^i∑_l=0^j √(ωω')-L(ω+ω') P_i,kP_j,l/8πsinh(βω/2)sinh(βω'/2)ω^kω'^l ( βω/2ĉ_i + (-1)^k -βω/2ĉ_i^†)( βω'/2ĉ_j + (-1)^l -βω'/2ĉ_j^†).
Hence, in the expression for the energy density, the following integrals appear,
I_∓,+^k :
= 1/4πω0∞∓ω x-Lωω/θ/sinh(ω/θ) ω^k+1
= θ^k+2 (k+1)!/2^k+3πζ[k+2,(L± x)θ/2],
I_∓,-^k :
=1/4πω0∞∓ω x-Lω-ω/θ/sinh(ω/θ) ω^k+1
= θ^k+2 (k+1)!/2^k+3πζ[k+2,(L± x)θ/2+1],
where θ=2/β, ζ[a,b] is the generalized Riemann zeta function, and we used x0∞x^μ-1-β x/sinh(x)=2^1-μΓ(μ) ζ[μ,β+1/2], μ>1, β>-1,
with μ=k+2 and β=∓ 1+Lθ±θ x (see 3.552.1 of <cit.>).
Observe that ζ[s,a]^*=ζ[s^*,a^*] and ζ[s,a]=ζ[s,a+1]+1/(a^2)^s/2, hence,
I^k_∓,∘=(I^k_±,∘)^*
,
I^k_∓,+= I^k_∓,-+ θ^k+2(k+1)!/2^k+3π( ( (L± x)θ/2)^2 )^-(k+2)/2
= I^k_∓,-+ (k+1)!/2π( L± x )^k+2
I^k_∓,++(-1)^k I^k_±,-= (k+1)!/2π(L± x)^k+2 +(I^k_∓,-+(-1)^k I^k_±,-)= (k+1)!/2π(L± x)^k+2 + 2 I^k_∓,-, if k=0,2,4,...
2 I^k_∓,-, if k=1,3,5,... .
Making use of this relation and inserting everything into (<ref>) we obtain
(^(e))^2(x) =1/2∑_i,j=0^∞∑_k=0^i∑_l=0^j P_i,kP_j,l[
(I_-,+^k + (-1)^k I_+,-^k) (I_+,+^l+(-1)^l I^l_-,-) ĉ_i^†ĉ_j.
.
+(I_+,+^k+(-1)^k I_-,-^k) (I_+,+^l +(-1)^l I^l_-,-) ĉ_iĉ_j+(-1)^l( I_-,+^k+(-1)^k I_+,-^k )I_-,-^l δ_ij]
.
The very last term in this expression is a constant that is independent of the field state. Hence this term corresponds to the contribution from the even sector to the thermal background energy density (<ref>).
This contribution is only exact in the limit of an infinite number of chain modes, but it is impacted by the truncation error in any numerical calculation with only finitely many modes.
Hence, in numerical calculations, it is more practical to replace this constant term by its known exact value and to only evaluate the modulations of the energy density's expectation value due to the interaction with the detector over time from the numerical data.
§ FIELD ENERGY DENSITY IN TERMS OF CHAIN MODES FOR DETECTOR AT REST
Inserting b̂_±ω=1/√(2)(b̂_ω^(e)±b̂_ω^(o)) into the right-moving, normal-ordered energy density expectation value, and using that the odd field modes remain in their vacuum state,
yields
^2(x)=^2(-x)
=ω0∞ω'0∞√(ωω')/4π( - (ω-ω') xb̂_ω^(e)^†b̂^(e)_ω'
- [ (ω+ω')xb̂^(e)_ωb̂^(e)_ω']
).
Using (<ref>),
we obtain
^2(x) =ω0∞ω'0∞√(ωω')/4π( - (ω-ω') xb̂_ω^(e)^†b̂^̂(̂ê)̂_ω'
- [ (ω+ω')xb̂^̂(̂ê)̂_ωb̂^̂(̂ê)̂_ω'] )
=∑_i,jL^2√((i+1)(j+1))/π(-L- x)^j/(L- x)^j+2( (-L+ x)^i/ (L+ x)^i+2ĉ_i^†ĉ_j +(-L- x)^i/ (L- x)^i+2ĉ_i ĉ_j)
=∑_i,j(-1)^i+j√((i+1)(j+1))/π L^2 (1+x^2/L^2)^2( 2(j-i)arctanxLĉ_i^†ĉ_j +(4+2i+2j)arctanxLĉ_i ĉ_j)
where we expanded (<ref>),
p_n(ω)=L√(8π)/√(n+1)L^1_n(2L ω)
= √(2π/n+1)∑_k=0^n n+1n-k(-1)^k/k! 2^k+1 L^k+1ω^k,
used
ω0∞ω/π√(8)-Lω∓ω x p_n(ω)
= L √(n+1) (-L± x)^n (L± x)^-n-2/√(π),
and rewrote
L+ x= √(L^2+x^2)arctanxL.
The total energy density expectation value is given by T̂_00(x)=^2(x)+^2(x)=^2(x)+^2(-x) yielding
T̂_00 = 2/π L^2∑_k,l(-1)^k+l√((k+1)(l+1))/(1+x^2/L^2)^2( cos(2(k-l)arctanxL) ĉ_k^†ĉ_l +cos(2(k+l+2)arctanxL) ĉ_kĉ_l ).
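For numerical evaluation, this closed form only requires the one-body chain correlators at a given time. The sketch below is our own illustration (not code from the original calculation); it evaluates the double sum for given correlation matrices and takes the real part, which corresponds to including the Hermitian-conjugate contribution of the ĉĉ term:
```python
import numpy as np

def total_energy_density(x, L, C_dag_c, C_cc):
    """Evaluate T_00(x) from chain-mode correlators via the closed form above.

    C_dag_c[k, l] = <c_k^dag c_l>,  C_cc[k, l] = <c_k c_l>  (both N x N, complex).
    """
    N = C_dag_c.shape[0]
    k = np.arange(N)
    sign = (-1.0) ** (k[:, None] + k[None, :])
    amp = np.sqrt((k[:, None] + 1.0) * (k[None, :] + 1.0))
    phi = np.arctan2(x, L)                                   # arctan(x/L)
    pref = 2.0 / (np.pi * L**2 * (1.0 + (x / L) ** 2) ** 2)
    term_n = np.cos(2.0 * (k[:, None] - k[None, :]) * phi) * C_dag_c
    term_a = np.cos(2.0 * (k[:, None] + k[None, :] + 2.0) * phi) * C_cc
    return pref * np.real(np.sum(sign * amp * (term_n + term_a)))
```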
§ PERTURBATIVE CALCULATION OF EMITTED ENERGY DENSITY
Fig. <ref> compares our numerical results for the emitted energy density to the results obtained within leading-order perturbation theory. This section derives the latter.
§.§ Perturbative state expansion
For time-dependent perturbation theory we employ the interaction picture, in which the field momentum operator reads
π̂(x,t) = ∫_{-∞}^∞ dk (-i) √(|k|)/√(4π) ( e^{-i|k|t + ikx} b̂_k - h.c. ).
For the HO detector the interaction Hamiltonian reads
(t) = λ((t) +^†) ⊗x f(x) π̂(x,t)
= λ( -Ω_d t+^†Ω_d t)⊗Π̂_f(t),
Π̂_f(t) = ∫_{-∞}^∞ dk ( (-i) √(|k|)/√(4π) e^{-i|k|t} ( ∫_{-∞}^∞ dx f(x) e^{ikx} ) b̂_k + h.c. )
= ∫_{-∞}^∞ dk ( e^{-i|k|t} f_k b̂_k + h.c. )
.
We assume that the initial state is a product state between an emitter number state |n⟩, and the field vacuum state |0_f⟩.
The time evolved state is then expanded as |ψ_t⟩∼|n⟩⊗|0_f⟩+|ψ_t^(1)⟩+|ψ_t^(2)⟩+𝒪(λ^3) with
|ψ_t^(1)⟩ =-t'0t H_i(t')|ψ_0⟩
=-λt'0t ( -Ω_d t'√(n)|n-1⟩+ Ω_d t'√(n+1)|n+1⟩)⊗Π̂_f(t')|0⟩
|ψ_t^(2)⟩ =-t'0tt”0t' H_i(t') H_i(t”)|ψ_0⟩
=- λ^2t'0tt”0t'( -Ω_d (t'+t”)√(n(n-1))|n-2⟩ + Ω_d (t'+t”)√((n+2)(n+1))|n+2⟩. .
+(2ncos(Ω_d (t'-t”)) +Ω_d (t”-t'))|n⟩)
⊗Π̂_f(t')Π̂_f(t”)|0⟩.
For a TLS detector the interaction Hamiltonian reads
(t) = λσ(t)⊗x f(x) π̂(x,t)= λ( |g⟩⟨e|-Ω_d t+|e⟩⟨g|Ω_d t)⊗Π̂_f(t) .
We assume that the initial state is a product state between an atom eigenstate, either |g⟩ or |e⟩, and the field vacuum state |0_f⟩.
The leading order correction to the joint atom and field state is
|ψ_t^(1)⟩=-t'0t H_i(t')|ψ_0⟩
= -λt'0t ∓Ω_d t'|g⟩
|e⟩⊗Π_f(t')|0⟩.
Here the upper sign/line applies to the initial state |e⟩⊗|0⟩, and the lower to |g⟩⊗|0⟩.
And the second order correction to the state is
|ψ_t^(2)⟩ = -t'0tH_i(t')|ψ_t'^(1)⟩
= -λ^2 t'0t t”0t'∓Ω_d (t”-t')|e⟩
|g⟩⊗Π̂_f(t')Π_f(t”)|0⟩
§.§ Perturbative calculation of energy density
To leading order, the expectation value of the right-moving energy density is
⟨ψ_t|^2(x)|ψ_t⟩∼⟨ψ_t^(1)|^2(x)|ψ_t^(1)⟩+2⟨ψ_0 |^2(x)|ψ_t^(2)⟩.
In (<ref>) we have for the right-moving energy density
^2(x) =ω0∞ω'0∞√(ωω')/4π(2 - (ω-ω') xb̂^†_ωb̂_ω'
- (ω+ω')xb̂_ωb̂_ω' - - (ω+ω')xb̂_ω^†b̂_ω'^†).
Note that in the present calculation we need to interpret x as a null coordinate because we are working in the interaction picture.
Since the field starts out in the vacuum, the first order correction to the state is in the one-particle sector of the field. Hence the first term simplifies to
⟨ψ_t^(1)|^2(x)|ψ_t^(1)⟩= ω0∞ω'0∞√(ωω')/2π- (ω-ω') x⟨ψ_t^(1)|b̂^†_ωb̂_ω'|ψ_t^(1)⟩.
Similarly, the second term simplifies due to the vacuum in the initial state:
⟨ψ_0 |^2(x)|ψ_t^(2)⟩ =
-ω0∞ω'0∞√(ωω')/4π(ω+ω')x⟨ψ_0 |b̂_ωb̂_ω'|ψ_t^(2)⟩.
Note that
b̂_ω'Π̂_f(t')|0⟩ = b̂_ω'k-∞∞|k|t' f_k^*b̂_k^†|0⟩=ω' t' f^*_ω'|0⟩
hence
⟨0|Π̂_f(t”)b̂_ω^†b̂_ω'Π̂(t')|0⟩ = (ω't'-ω t”) f^*_ω' f_ω .
Similarly,
⟨0|b̂_ω'b̂_ωΠ̂_f(t')Π̂(t”)|0⟩
=k'-∞∞k”-∞∞(|k'|t'+|k”|t”) f_k'^* f_k”^* ⟨0|b̂_ω'b̂_ωb̂_k'^†b̂_k”^†|0⟩
=f_ω^* f_ω'^*( (ω t'+ω 't”)+(ω' t'+ω t”))
And from (<ref>), we have
f_k = -i √(|k|) e^{-L|k|}/√(4π) .
For the TLS detector, using the upper sign for the excited initial state |e⟩, we obtain
⟨ψ_t^(1)|b̂^†_ωb̂_ω'|ψ_t^(1)⟩=
λ^2 t'0tt”0t ±Ω_d(t'-t”) (ω't”-ω t')√(ω'ω)/4π-L(ω+ω')
⟨ψ_t^(1)|^2(x)|ψ_t^(1)⟩
= λ^2/8π^2|t'0t ±Ω_d t'/(L+ (x+ t'))^2|^2
⟨ψ_0 |b̂_ωb̂_ω'|ψ_t^(2)⟩
=λ^2√(ω'ω)/4π-L (ω+ω')t'0tt”0t'∓Ω_d(t”-t')
( (ω t'+ω 't”)+(ω' t'+ω t”))
2⟨ψ_0 |^2(x)|ψ_t^(2)⟩
=
-λ^2/4π^2t'0tt”0t'∓Ω_d(t”-t')/(L-(x+t'))^2(L-(x+t”))^2
Thus, for the TLS emitter,
⟨ψ_t|^2(x)|ψ_t⟩∼λ^2/8π^2|t'0t ±Ω_d t'/(L+ (x+ t'))^2|^2
-λ^2/4π^2t'0tt”0t'∓Ω_d(t”-t')/(L-(x+t'))^2(L-(x+t”))^2+𝒪(λ^3)
For the HO detector we obtain
⟨ψ_t^(1)|b̂^†_ωb̂_ω'|ψ_t^(1)⟩
=λ^2 √(ωω')/4π-L(ω+ω')t'0tt”0t (n(Ω_d-ω) t'-(Ω_d-ω') t” +(n+1)-(Ω_d+ω)t' (Ω_d+ω')t”)
,
⟨ψ_0 |b̂_ωb̂_ω'|ψ_t^(2)⟩
=λ^2 √(ωω')/4π-L(ω+ω')t'0tt”0t'(2ncos(Ω_d (t'-t”)) +Ω_d (t”-t')) ( (ω t'+ω 't”)+(ω' t'+ω t”))
We then have, using ω0∞ωω ( X-L)=(L- X)^-2,
⟨ψ_t^(1)|^2(x)|ψ_t^(1)⟩
= λ^2 /8π^2( n | t'0t Ω_d t'/(L+(t'+x))^2|^2+ (n+1) | t'0t -Ω_d t'/(L+(t'+x))^2|^2)
,
2⟨ψ_0 |^2(x)|ψ_t^(2)⟩
=
-λ^21/4π^2t'0tt”0t' 2ncos(Ω_d (t'-t”)) +Ω_d (t”-t')/(L-(x+t'))^2 (L- (x+t”))^2
.
§ DERIVATION OF STATE ERROR BOUND
To bound the norm of the error |ϵ⟩, we first consider
d/dt ⟨ϵ|ϵ⟩ = 2 Re⟨ϵ|∂_t ϵ⟩
= 2 Im⟨ϵ|ΔĤ|ψ^ϵ⟩ ≤ 2 |⟨ϵ|ΔĤ ψ^ϵ⟩| ≤ 2 √(⟨ϵ|ϵ⟩) √(⟨ΔĤψ^ϵ|ΔĤψ^ϵ⟩),
and since
d/dt ⟨ϵ|ϵ⟩ = d/dt ‖|ϵ⟩‖^2 = 2 ‖|ϵ⟩‖ d/dt ‖|ϵ⟩‖,
we have
d/dt ‖|ϵ⟩‖ ≤ √(⟨ΔĤψ^ϵ|ΔĤψ^ϵ⟩).
This expression can be evaluated in numerical calculations, because it only involves |ψ^ϵ⟩ which we obtain from the numerical calculations.
The state |ψ^ϵ⟩ always remains a product state between the first N sites and the rest of the chain, which remains in its vacuum state,
hence
⟨ΔĤψ^ϵ|ΔĤψ^ϵ⟩=γ_N-1^2 ⟨ψ^ϵ|ĉ_N-1^†ĉ_N-1|ψ^ϵ⟩.
At t=0 the error vanishes, |ϵ⟩=0, and therefore its norm at time t is lower than or equal to the integral
‖|ϵ⟩‖ ≤ ϵ_t :=
|γ_{N-1}| ∫_0^t dt' √(⟨ψ^ϵ|ĉ_{N-1}^†ĉ_{N-1}|ψ^ϵ⟩).
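In practice, this bound is accumulated alongside the time evolution from the recorded occupation of the last retained chain site. A minimal sketch (assuming the occupation numbers ⟨ĉ^†_{N-1}ĉ_{N-1}⟩(t) have been stored by the chain simulation; the file name and variable names are hypothetical) could look as follows:
```python
import numpy as np

def truncation_error_bound(times, n_last, gamma_last):
    """Cumulative bound eps_t = |gamma_{N-1}| * int_0^t sqrt(<c†_{N-1} c_{N-1}>) dt'.

    times      : 1D array of simulation times (assumed sorted)
    n_last     : occupation <c†_{N-1} c_{N-1}>(t) of the last retained chain site
    gamma_last : hopping amplitude gamma_{N-1} to the first truncated site
    """
    integrand = np.sqrt(np.clip(n_last, 0.0, None))   # guard against tiny negative values
    dt = np.diff(times)
    increments = 0.5 * (integrand[1:] + integrand[:-1]) * dt   # trapezoid rule
    return np.abs(gamma_last) * np.concatenate(([0.0], np.cumsum(increments)))

# hypothetical usage with data written out by a chain simulation:
# times, n_last = np.loadtxt("last_site_occupation.dat", unpack=True)
# eps = truncation_error_bound(times, n_last, gamma_last=0.5)
```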
§ ERROR BOUND FOR QUADRATIC OBSERVABLES AND HARMONIC EMITTERS
When the emitter is a harmonic oscillator and the initial state is Gaussian, the system remains in a Gaussian state both under the exact and the truncated Hamiltonian, because both are quadratic.
In this scenario, we can use Gaussian state methods to derive an error bound on quadratic observables similar to the error bound (<ref>).
We employ the Kähler structure formalism for Gaussian states (for a review, see <cit.>).
Assume that we are interested in the expectation value of a quadratic observable. Then,
working with respect to a real symplectic basis of quadrature operators (ξ̂^ =(q̂_1,q̂_2,...,p̂_1,p̂_2,...) with q̂_ip̂_j=δ_ij), we can express the observable as Ô=1/2∑_i,jO_ijξ̂^iξ̂^j. We may assume that the matrix O is symmetric, since any anti-symmetric part would only add an operator proportional to the identity operator to O. Thus, the expectation value of Ô is given by
⟨Ô⟩ = 1/4 ∑_{i,j} O_{ij} ⟨ξ̂^i ξ̂^j + ξ̂^j ξ̂^i⟩
= 1/4 ∑_{i,j} O_{ij} G_{ij}
= 1/4 tr[O^⊤ G], with G = Ω J^⊤,
= 1/4 tr[Ω^⊤ O J]
= 1/4 ⟨O^⊤ Ω, J⟩
where the matrix Ω_ij=ξ̂^iξ̂^j represents the symplectic form, G_ij=ξ̂^iξ̂^j+ξ̂^jξ̂^i represents the covariance matrix of the state and J=-G Ω^-1 represents the linear complex structure of the state (represented by a real square matrix), and we used the Frobenius scalar product ⟨A, B⟩=A^ B for real-valued square matrices.
The linear complex structure evolves in time as
J(t) = e^{tK} J(t=0) e^{-tK} ⇒ J̇ = K J(t) - J(t) K = [K, J(t)].
Here K=Ω h represents the Hamiltonian generator of the full system Hamiltonian which is Ĥ=1/2∑_i,jh_i,jξ̂^iξ̂^j.
However, due to the truncation of the chain we are not calculating the state evolution under the full Hamiltonian, but only with the truncated Hamiltonian generator K^ϵ=K-Δ K.
Accordingly, we only calculate the linear complex structure J^ϵ=J- Δ J with J̇^̇ϵ̇=K^ϵJ^ϵ.
The error in the expectation value, which we seek to bound, is
|⟨Ô⟩ - ⟨Ô⟩^ϵ|
= 1/4 |⟨O^⊤Ω, Δ J⟩|
≤ 1/4 √(⟨O^⊤Ω, O^⊤Ω⟩) √(⟨Δ J, Δ J⟩)
= 1/4 ‖O^⊤Ω‖ ‖Δ J‖.
The time derivative of the error in the linear complex structure is
Δ̇J = [K, J] - [K^ϵ, J^ϵ]
= [K^ϵ+Δ K, J^ϵ+Δ J] - [K^ϵ, J^ϵ]
= [K, Δ J] + [Δ K, J^ϵ].
This we can use to bound
d/dt √(⟨Δ J,Δ J⟩)
= (d/dt ⟨Δ J,Δ J⟩) / (2 √(⟨Δ J,Δ J⟩))
= (d/dt tr[Δ J^⊤ Δ J]) / (2 √(⟨Δ J,Δ J⟩)) .
Next, since tr[A B] = tr[B^⊤ A^⊤], we have
d/dt tr[Δ J^⊤ Δ J]
= 2 tr[Δ̇J^⊤ Δ J]
= 2 tr[Δ̇J Δ J^⊤]
= 2( tr[[K, Δ J] Δ J^⊤] + tr[[Δ K, J^ϵ] Δ J^⊤] ).
The first term tr[[K, Δ J] Δ J^⊤] vanishes, because both
tr[K Δ J Δ J^⊤]=0 and tr[Δ J K Δ J^⊤]=0. This is seen using the cyclicity of the trace and tr A = tr A^⊤ and using that K^⊤ = -K, for example,
tr[K Δ J Δ J^⊤] = tr[Δ J Δ J^⊤ K^⊤] = -tr[Δ J Δ J^⊤ K] = 0.
Thus,
d/dt tr[Δ J^⊤ Δ J]
= 2 tr[[Δ K, J^ϵ] Δ J^⊤] ≤ 2 √(⟨[Δ K, J^ϵ], [Δ K, J^ϵ]⟩) √(⟨Δ J^⊤, Δ J^⊤⟩)
such that
d/dt ‖Δ J‖ = d/dt √(⟨Δ J,Δ J⟩) ≤ √(⟨[Δ K, J^ϵ], [Δ K, J^ϵ]⟩) = ‖[Δ K, J^ϵ]‖.
Note that since G = -JΩ, with Ω orthogonal, we have ‖Δ G‖ = ‖Δ J‖; thus, the bound directly gives a bound on the error in the covariance matrix of the calculated state.
In order to evaluate the right hand side, we express the truncation part of the Hamiltonian (<ref>) in terms of chain mode quadrature operators.
ΔĤ= γ_N-1(ĉ^†_N-1ĉ_N+ĉ_N^†ĉ_N-1)
= γ_N-1(q̂_N-1q̂_N+p̂_N-1p̂_N ).
Hence we have
Δ K=1/2Ω([ [ 0 γ_N-1; γ_N-1 0 ] 0; 0 [ 0 γ_N-1; γ_N-1 0 ] ])
=1/2([ 0 [ 0 γ_N-1; γ_N-1 0 ]; [ 0 -γ_N-1; -γ_N-1 0 ] 0 ])
Since we are restricting our calculation to N chain modes (and one mode given by the harmonic oscillator emitter), the matrix J^ϵ takes the form
J^ϵ=( [ A B ; 0 𝕀; C D ; 0 -𝕀 0 ]),
with (N+1)× (N+1)-matrices A, B, C, D.
Using indices A_{i,j}=a_{i,j} with i,j=-1,0,1,...,N-1, and analogously for the other matrices, we can calculate the right hand side of (<ref>) from
1/γ_{N-1}^2 ‖[Δ K, J^ϵ]‖^2
= 2 (b_N-1,N-1-1)^2+ 2(c_N-1,N-1+1)^2 +∑_k=-1^N-2( (b_k,N-1)^2 +(b_N-1,k)^2+ (c_k,N-1)^2 +(c_N-1,k)^2 )
+∑_k=-1^N-1( (a_k,N-1)^2 +(a_N-1,k)^2+ (d_k,N-1)^2 +(d_N-1,k)^2 ).
In order to apply the above bound to the expectation value of the observable Ô the Frobenius norm of O needs to be finite.
One important example of such an operator is the number operator of a properly normalized positive frequency mode, a mode that shares the vacuum state with the chain modes.
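The right-hand side of the bound can be accumulated directly from the blocks A, B, C, D of J^ϵ during the time evolution. A possible implementation of the explicit sum above (a sketch under the index conventions stated in the text, with the emitter mode occupying the first row and column of each block) is:
```python
import numpy as np

def bound_rhs(A, B, C, D, gamma_trunc):
    """Squared Frobenius norm entering the error bound, evaluated from the
    explicit sum above; each block is (N+1)x(N+1) with array row 0 <-> index -1
    (the emitter) and array row N <-> chain index N-1."""
    b, c = B[-1, -1], C[-1, -1]
    val = 2.0 * (b - 1.0) ** 2 + 2.0 * (c + 1.0) ** 2
    val += np.sum(B[:-1, -1] ** 2) + np.sum(B[-1, :-1] ** 2)
    val += np.sum(C[:-1, -1] ** 2) + np.sum(C[-1, :-1] ** 2)
    val += np.sum(A[:, -1] ** 2) + np.sum(A[-1, :] ** 2)
    val += np.sum(D[:, -1] ** 2) + np.sum(D[-1, :] ** 2)
    return gamma_trunc ** 2 * val   # multiply back by gamma_{N-1}^2

# the bound on d/dt ||Delta J|| is then np.sqrt(bound_rhs(A, B, C, D, gamma)),
# to be integrated over time in the same way as the state error bound.
```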
§ MINKOWSKI ENERGY DENSITY FROM CHAIN MODES IN THE UNRUH EFFECT
Using the Bogoliubov transformations derived above, the Minkowski mode operators â_ω^± can be expressed as a linear combination
â^±_ω=Ω-∞∞γ_Ωω^±d̂_Ω^±
=1/√(2)Ω-∞∞γ_Ωω^±( d̂_Ω^e± d_Ω^o)
= Ω-∞∞γ_Ωω^±( ∑_i (Ω) f_|Ω|^* Ωπ/2a/√(2|sinh(Ωπ/a)|) p_i(Ω) ĉ_i
±d_Ω^o/√(2))
=∑_i A_ω,iĉ_i +Ô^(o)_ω
of chain mode operators ĉ_i, which is a linear combination of even Unruh modes, and some operator Ô^(o)_ω, which is a linear combination of odd Unruh modes. The precise form of Ô^(o)_ω is irrelevant to our purpose, because the odd sector remains in the vacuum state.
Formally, for the coefficients A_ω,i we use the regularized expression for γ^±_Ωω and, with (<ref>) and writing p_i(Ω)=∑_k=0^i P_i,kΩ^k, obtain
A_ω,i
=Ω-∞∞γ_Ωω^±(Ω) f_|Ω|^* Ωπ/2a/√(2|sinh(Ωπ/a)|) p_i(Ω)
= ∑_k=0^i P_i,k/4π a√(πω)Ω-∞∞(ω/a)^±Ω/a Γ(∓Ω/a) -L|Ω|Ωπ/2aΩ^k+1.
Based on this expression, a closed form expressions for the energy density of the field in terms of the chain modes can be obtained.
Using the notation as introduced in App. <ref>, the expectation value of the normal ordered, Minkowski energy density of the field (<ref>) takes the following form, into which we insert (<ref>):
π̂_±^2(x) =ω0∞ω'0∞√(ωω')/4π(2 ± (ω-ω') xâ_ω^±^†â_ω'^± - ∓ (ω+ω')xâ_ω^±â_ω'^±- ± (ω+ω')xâ_ω^±^†â_ω'^±^†)
=∑_i,jω0∞ω'0∞√(ωω')/2π( ± (ω-ω') xA_ω,i^*ĉ_i^† A_ω',jĉ_j - ∓ (ω+ω')xA_ω,iĉ_i A_ω',jĉ_j )
=∑_i,j J_j(x) ( J_i^*(x) ĉ_i^†ĉ_j - J_i(x) ĉ_i ĉ_j ),
where
J_j(x) = ω0∞∓ω x√(ω/2π) A_ω,j
= ω0∞∓ω x√(ω/2π)( ∑_k=0^j P_j,k1 /2π a√( 4πω)Ω-∞∞(ω/a)^±Ω/a Γ(∓Ω/a) -L|Ω|Ωπ/2aΩ^k+1)
= /4π^2 a√(2)∑_k=0^j P_j,kΩ-∞∞ a^∓Ω/a Γ(∓Ω/a) -L|Ω|Ωπ/2aΩ^k+1ω0∞∓ω xω^±Ω/a .
Now introduce a regularisation in the ω-integration,
ω0∞∓ω xω^±Ω/a -ϵω
= ±Ω/aΓ(±Ω/a) ∓Ω/aln(ϵ± x) 1/ϵ± xϵ→0→Ω/a xΓ(±Ω/a) ∓Ω/aln|x| (x) Ωπ/(2a),
then
J_j(x) =
/4π^2 a^2 x √(2)∑_k=0^j P_j,kΩ-∞∞|Γ(∓Ω/a)|^2 -L|Ω|Ω^k+2∓Ω/aln|x a| (1+(x))Ωπ/2a = /4π a x √(2)∑_k=0^j P_j,kΩ-∞∞-L|Ω|+(1+(x))Ωπ/2a∓Ω/aln|x a| Ω^k+1/sinh(πΩ/a) =∑_k=0^j P_j,k a^k+1/4π^k+3 x √(2)ν-∞∞-La |ν|/π +( (1+(x))/2∓/πln|x a|)νν^k+1/sinh(ν)_=:I^±_k(x)
=∑_k=0^j P_j,k I_k(x)
with ν=Ωπ/a.
We can split the ν-integration into two,
ν0∞(-L a/π+ (1+(x))/2∓/πln|x a|)νν^k+1/sinh(ν)
=2^-k-1Γ(k+2) ζ[k+2, 1/2(La/π-1+(x)/2±ln|xa|/π +1)]
=2^-k-1Γ(k+2) ζ[k+2, aL±ln|xa|/2π+1-(x)/4],
ν-∞0 -La |ν|/π +( (1+(x))/2∓/πln|x a|)νν^k+1/sinh(ν)
=(-1)^kν0∞-La ν/π -( (1+(x))/2∓/πln|x a|)νν^k+1/sinh(ν) = (-1)^k2^-k-1Γ(k+2) ζ[k+2, 1/2( La/π+1+(x)/2∓/πln|ax| +1)]
=(-1)^k 2^-k-1Γ(k+2) ζ[k+2, aL∓ln|xa| /2π +3+(x)/4]
where we used ∫_0^∞ dx x^{μ-1} e^{-βx}/sinh(x) = 2^{1-μ} Γ(μ) ζ[μ, (β+1)/2], μ>1, Re β>-1
(see 3.552.1 of <cit.>).
Inserting above we obtain
I^±_k(x) = a^k+1 (k+1)! /(2π)^k+3 x √(2)( ζ[k+2, aL±ln|xa|/2π+1-(x)/4]+(-1)^k ζ[k+2, aL∓ln|xa| /2π +3+(x)/4]).
For negative x<0, we obtain
I^±_k(x) = a^k+1 (k+1)! /(2π)^k+3 x √(2) 2ζ[k+2, aL±ln|xa|/2π+1/2], k even,
2ζ[k+2, aL±ln|xa|/2π+1/2], k odd,
which, however, is not relevant for our considerations here since in our setup we only consider the energy density in the right Rindler wedge.
There, for positive x>0, we obtain (using ζ(s,a)=ζ(s,a+1)+1/(a^2)^s/2)
I^±_k(x) = a^k+1 (k+1)! /(2π)^k+3 x √(2)( ζ[k+2, aL±ln|xa|/2π]+(-1)^k ζ[k+2, aL∓ln|xa| /2π +1 ])
= a^k+1 (k+1)! /(2π)^k+3 x √(2)( ζ[k+2, aL±ln|xa|/2π] .
. +(-1)^k ζ[k+2, aL∓ln|xa| /2π]+(-1)^k+1( aL∓ln|ax| /2π)^-k-2)
= ( (-1)^k+1 a^k+1 (k+1)! /2π x √(2)( aL∓ln|xa| )^-k-2 + a^k+1 (k+1)! /(2π)^k+3 x √(2) 2ζ[k+2, aL±ln|xa|/2π], k even
2ζ[k+2, aL±ln|xa|/2π], k odd).
These enter the final expression for the energy density as
π̂_±^2(x) =∑_i,j∑_k=0^i∑_l=0^j ( I_k^± *(x) P_i,kĉ_i^†ĉ_jP_j,lI^±_l(x) - I^±_k(x) P_i,kĉ_i ĉ_j P_j,lI^±_l(x) ).
Note that this expression is valid only on the hyperplane τ=t=0. Under the Rindler time evolution the right-handside of this equation evolves into a transformed observable expectation value as detailed in App. <ref>.
unsrtnat
|
http://arxiv.org/abs/2306.03059v1
|
20230605173047
|
Influence of the finite transverse size of the accelerating region on the relativistic feedback
|
[
"Alexander Sedelnikov",
"Egor Stadnichuk",
"Eduard Kim",
"Oraz Anuaruly",
"Daria Zemlianskaya"
] |
physics.ao-ph
|
[
"physics.ao-ph",
"physics.plasm-ph",
"86-10"
] |
APS/123-QED
Moscow Institute of Physics and Technology, Moscow, 117303, Russian Federation; Lebedev Physical Institute RAS
[email protected]
Moscow Institute of Physics and Technology, Moscow, 117303, Russian Federation;
HSE University, Moscow 101000 Russia
[email protected]
Moscow Institute of Physics and Technology, Moscow, 117303, Russian Federation;
Institute for Nuclear Research of RAS, Moscow 117312
[email protected]
Moscow Institute of Physics and Technology, Moscow,
Lebedev Physical Institute RAS
[email protected]
Moscow Institute of Physics and Technology, Moscow, 117303, Russian Federation;
Institute for Nuclear Research of RAS, Moscow 117312
[email protected]
Terrestrial gamma-ray flashes (TGFs) are commonly associated with relativistic runaway electron avalanches (RREAs). However, research shows that a single RREA cannot generate observable TGF fluxes. In an attempt to settle this issue the relativistic feedback mechanism was suggested by Joseph Dwyer. The Monte Carlo simulations and analytical descriptions of this type of feedback assume that acceleration region has a large size in a plane perpendicular to the direction of the electric field. Therefore these studies do not take into account transverse diffusion of RREAs starting points and the finite transverse size of the accelerating region. Electrons created by the feedback outside this region can not be accelerated by the electric field and form an avalanche, which may lead to a decrease in the total number of new avalanches and an increase in the requirements for self-sustaining RREA production by the feedback. In this article the transverse propagation of avalanches starting points was described using a modified two-dimensional diffusion equation. A correction to the criterion for self-sustaining production of RREAs was obtained. Monte Carlo simulation was also performed to calculate the correction for the feedback coefficient.
Influence of the finite transverse size of the accelerating region on the relativistic feedback
Daria Zemlianskaya
May 29, 2023
===============================================================================================
§ KEYPOINTS
* The influence of a finite transverse size of the accelerating region on RREAs dynamics was analytically considered.
* Taking diffusion into account does not make a significant contribution to the feedback coefficient when the transverse size of the accelerating region is much larger than its longitudinal one.
* For a transverse size comparable to the longitudinal one, diffusion leads to a significant decrease in the feedback coefficient and a reduction in the number of avalanches in new generations.
§ INTRODUCTION
One of the unsolved problems in atmospheric physics is the construction of a model of Terrestrial Gamma-ray Flashes (TGFs). This phenomenon was first discovered in 1994 by the Compton Gamma Ray Observatory <cit.> and was observed by other space gamma-ray observatories such as Fermi <cit.>, which were created for observing gamma radiation from astrophysical sources. It has been established that relativistic runaway electron avalanches (RREAs) accelerated by an electric field in thunderclouds might be the sources of these flashes <cit.>.
The force acting on relativistic electrons from the accelerating field may exceed losses in interactions with air molecules <cit.>. Such electrons are called runaway electrons. They produce new runaway electrons, leading to the formation of an avalanche <cit.>. The dynamics of avalanches is significantly influenced by feedback mechanisms studied by Joseph Dwyer <cit.>. As a result of feedback, the number of electrons is growing and new avalanches can be created.
There are positron and gamma feedback mechanisms. Positron feedback can be described as follows. An avalanche of runaway electrons radiates gamma-rays. These gamma-rays produce electron-positron pairs, and the positrons propagate in the direction opposite to that of the electron avalanche. The positrons then ionize the air at the beginning of the region, which leads to the formation of new RREAs. The gamma feedback mechanism is based on the fact that radiated gamma-rays are scattered backward and then, at the beginning of the accelerating region, generate new runaway electrons via Compton scattering or the photoelectric effect. The number of avalanches in a new generation divided by the number of avalanches in the previous generation is called the feedback coefficient. In other words, it is the probability that the RREA reproduces itself through relativistic feedback. If this coefficient is greater than one, avalanche multiplication becomes self-sustaining. This regime is called infinite feedback because, if the electric field strength is constant, the process will never end and the number of relativistic particles will grow without bound. It is extremely important to understand the conditions under which this regime occurs, because it is exactly infinite feedback that greatly increases the number of runaway electrons and can therefore be used to explain the high photon flux of TGFs <cit.>.
For relatively low electric field strength, positron feedback dominates over gamma-ray feedback <cit.>, which motivates to study the positron feedback mechanism in the first place. The criterion of infinite positron feedback was derived in the paper <cit.>. This work does not take into account diffusion of RREAs and the finite transverse size of the accelerating region. RREAs of new generations resulting from feedback may be created outside the acceleration region, which may lead to a decrease in the number of new avalanches and an increase in the requirements for self-sustaining RREA production by the feedback.
Balloon measurements showed that there are regions in the thundercloud where the electric field exceeds the threshold field (the minimum field required for the formation of runaway electrons) <cit.>. However, in view of the peculiarities of balloon measurements, it is difficult to draw conclusions about the transverse size of the overthreshold regions, which may affect infinite feedback. Therefore, it is necessary to estimate the dependence of the infinite feedback criterion on the transverse size. To settle this issue, in this article a correction to the feedback coefficient is derived. Furthermore, its value is estimated using a Geant4 simulation.
§ AVALANCHES DIFFUSION
To describe the diffusion of RREAs via relativistic feedback, a simple diffusion equation can be used with an additional term responsible for the multiplication of avalanches. We will consider an accelerating region with a uniform electric field directed along the z-axis. The avalanche coordinate will be associated with the coordinate of the primary electron from which it was formed, and for simplicity only the two-dimensional distribution of the avalanches will be considered, without taking into account the z coordinate of the beginning of the avalanche. Therefore, the equation describing the concentration of avalanches is
∂ n_a/∂ t - ▿· (D ·▿ n_a) - n_a/τ^* = n_s
where n_a is a two-dimensional distribution of the RREA starting points. n_s is a source function , and τ^*=τ/ln Γ. The last term describes an increase in the number of avalanches due to the feedback factor Γ during the time of formation of a new generation τ.
For only one initial avalanche in the center of coordinate system n_s(x,y,t) = δ (x) δ (y) δ (t)
the solution of equation (<ref>) will be Green's function, which in the polar coordinate system is:
n_a = 1/4 π t DΓ^t/τexp( - r^2/4 D t) Θ (t)
This solution describes the distribution of avalanches in an accelerating region that has no boundaries in the transverse plane. Otherwise, the solution must satisfy additional boundary conditions. Since electrons born outside the accelerating region do not cause new avalanche creation, it would be logical to consider the concentration of avalanches at the edge equal to zero. Therefore the dynamics of avalanches in acceleration region with finite transverse size might be described with the following equation with boundary and initial conditions
∂ n_a/∂ t - ▿· (D ·▿ n_a) - n_a/τ^* = 0
n_a(t,r)|_r=R = 0
n_a(t,r)|_t=0 = n_I
n_I denotes the initial distribution of avalanches. The solution can be found in the following form:
n_a= ∑_k = 1^∞ T_k(t) X_k(r)
Solving the problem on the coordinate-dependent part we can obtain that X_k(r) = J_0 (r μ_k/R), where J_0 is the Bessel function and μ_k its zero.
Substitution a series into the equation (<ref>) gives the equation for the time-dependent component
Ṫ_k(t) + T_k(t) ( μ_k^2 D/R^2 - 1/τ^*) = 0
The initial value of T_k(t) can be found from the initial conditions on the avalanches distribution and the orthogonality of the Bessel functions:
T_k(0) = ∫_0^R n_I(r) J_0(μ_k r/R) r dr /∫_0^R J_0^2(μ_k r/R) r dr
Let A_k = T_k(0) ∫_0^R J_0(μ_k r/R) dr. Then the total number of avalanches in the acceleration region at time t is
N(t) = ∑_k = 1^∞ A_k ·exp(- ( μ_k/R)^2 D t ) exp(t/τ*)
Assuming that t=i ·τ, which corresponds to the time when i generations of avalanches were born, the number of avalanches in i generation is
N_i = ∑_k = 1^∞ A_k ·Γ^i α_k^i
where α_k = exp(- ( μ_k/R)^2 D τ)
§ CORRECTION TO THE CRITERION
The finite transverse size may lead to a decrease in the number of new avalanches and an increase in the requirements for self-sustaining RREA production by feedback. The obtained equation (<ref>) gives us the opportunity to find a correction to the criterion of self-sustaining RREA production. If Γ·α_1 is at least slightly more than 1 then other terms in the series decrease over time. Therefore it is enough to consider only the first term
N_i ≈ A_1 ·Γ^i α_1^i = A_1 ·Γ_d^i
where Γ_d = Γ· e^-(2.405/R)^2 D τ.
Thus, the criterion of self-sustaining production with correction is
Γ_d ≥ 1
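As a quick numerical illustration of this criterion (a sketch using SciPy's Bessel-zero routine and the parameter values quoted later in the text, not part of the original analysis), the corrected coefficients Γ·α_k can be evaluated as:
```python
import numpy as np
from scipy.special import jn_zeros

def corrected_feedback(gamma, R, D_tau, n_modes=4):
    """Gamma * alpha_k with alpha_k = exp(-(mu_k / R)^2 * D * tau),
    mu_k being the zeros of the Bessel function J_0 (mu_1 ~ 2.405)."""
    mu = jn_zeros(0, n_modes)
    return gamma * np.exp(-(mu / R) ** 2 * D_tau)

# values quoted later in the text: D*tau ~ 836 m^2, transverse radius
# comparable to the longitudinal size, R ~ 223 m, and Gamma = 1
coeffs = corrected_feedback(gamma=1.0, R=223.0, D_tau=836.0)
print(coeffs)   # the first entry is Gamma_d; self-sustaining production requires Gamma_d >= 1
```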
§ GEANT4 SIMULATION
The Monte-Carlo simulation was performed with Geant4 to obtain the value of the diffusion coefficient, which describes how strongly RREA starting points propagate in the transverse direction due to relativistic feedback. In this tool a physics list can be chosen to determine the processes that need to be taken into account. In this work the G4EmStandardPhysics_option4 physics list was chosen, which contains all necessary processes, including Compton scattering, the photoelectric effect and pair production, for the energies characteristic of RREA processes <cit.>. The simulation took place in a cylindrical volume filled with air with a density corresponding to an altitude of 10 km above sea level, ρ = 0.41 kg/m^3. For the longitudinal size of the area and the field strength, the following values were chosen: E = 300 kV/m and L = 445.7 m. According to <cit.>, such parameters provide a feedback coefficient Γ = 1. The easiest way to obtain the diffusion coefficient is from the distribution (<ref>). Therefore, a sufficiently large cylinder radius R = 2000 m was chosen. With such a radius, boundary conditions could be neglected, since avalanches launched from the center of the cylinder will not create new generations beyond the edge of the accelerating region.
The analytical consideration of diffusion does not take into account the coordinate of the beginning of the avalanche along the z-axis and describes only the two - dimensional distribution of avalanches in the transverse plane. Therefore, in order to correctly estimate the diffusion coefficient, it is necessary to consider an average avalanche. The simplest way to do this in simulation is to get the avalanche distribution and then launch avalanches with this distribution from the center of the cylinder. This is equivalent to launching average avalanches from the center. Thus, simulation was divided into four steps.
The first two steps are needed to obtain the distribution of the avalanches along the z-axis. First, the seed electrons were launched at the beginning of the electric field region. These electrons form RREAs, which radiate gamma-rays via bremsstrahlung. The energy, position, and momentum of the positrons generated by these gamma rays were recorded. After that, in the second step of the simulation, recorded positrons were launched. Electrons generated by these positrons were recorded. Thus, the obtained distribution is shown in Figure 1. It is worth noting that the form of the distribution is consistent with the distribution obtained analytically in the paper <cit.>.
In the third step obtained electrons were launched with recorded z coordinate and x = 0, y = 0. Finally, the fourth step consisted of launching recorded positrons from the third step. Electrons generated by these positrons were recorded. The transverse distribution of these electrons is shown in Figure 2. It differs from the second-generation avalanche distribution by a constant factor p_e - the probability that an electron turns around and runs away. This distribution was fitted according to formula (<ref>). The value of the diffusion coefficient multiplied by the time between generations was obtained from the fit: D τ≈ 836 m^2. This gives us the opportunity to evaluate the correction to the criterion of infinite feedback, therefore, the first four alpha coefficients are shown in Figure 3. As it was mentioned in section 4, α_1 in the first term of the series corresponds to the correction to the feedback coefficient Γ. Moreover, all other alpha coefficients decrease with the growth of their number.
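The extraction of Dτ from the simulated transverse distribution can be reproduced with a short fit; the sketch below is our own illustration and assumes the Geant4 output provides the radial start positions r = √(x² + y²) of the feedback-generated electrons (array names are hypothetical):
```python
import numpy as np
from scipy.optimize import curve_fit

def radial_gaussian(r, A, D_tau):
    # Green's-function profile of Eq. (2) evaluated at t = tau (one feedback generation)
    return A * np.exp(-r**2 / (4.0 * D_tau))

def fit_D_tau(r, n_bins=60):
    """Fit the transverse surface density of avalanche starting points."""
    counts, edges = np.histogram(r, bins=n_bins)
    centers = 0.5 * (edges[1:] + edges[:-1])
    # divide by the annulus area 2*pi*r*dr to obtain a surface density
    density = counts / (2.0 * np.pi * centers * np.diff(edges))
    popt, _ = curve_fit(radial_gaussian, centers, density,
                        p0=(density.max(), 500.0))
    return popt[1]   # estimate of D*tau (the paper quotes ~836 m^2)
```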
§ DISCUSSION
The expressions obtained in Sections 3 and 4 for the feedback coefficient allow us to calculate the field strength required for the occurrence of the infinite feedback regime. Moreover, these expressions allow us to search for the minimum transverse size of the accelerating region. It can be calculated from the corrected feedback coefficient Γ_d and the rate of the TGF signal growth. This can be used, for example, to determine the size of regions with a uniform electric field. However, the presence of the diffusion coefficient and the average time τ in these expressions complicates the application of the expressions obtained. These parameters depend on the electric field strength and the length of the accelerating region and therefore must be calculated or measured. An attempt to solve this problem was made in Section 5, where the diffusion coefficient multiplied by time τ was evaluated by Monte Carlo simulation. Thus, we can use the obtained formulas only for a cell with a uniform electric field E=300 kV/m and a longitudinal size L = 445.7 m.
The parameters of the accelerating region used in the simulation impose strong restrictions on the use of the coefficients obtained. However, as was done in <cit.>, it can be assumed that the diffusion coefficient is determined only by the path length of the photon. This length, according to <cit.>, almost does not depend on the strength of the electric field. The average time between two avalanche generations is determined by the average velocity of electrons and positrons. In the work <cit.> it was shown that the average velocity changes slightly with the electric field strength, but remains close to 0.89c. Therefore, in the first assumption the value of D τ does not depend on the field strength. Using this assumption, for an accelerating region with a length of 445.7 m, it is possible to obtain the dependence of the intensity, at which infinite feedback occurs, on the transverse size (Figure 4). For the transverse size comparable with longitudinal one (R ≈ 223 m) E = 300.2 kV/m. However, this result was obtained only for an area with a certain longitudinal size L. Therefore, study of the influence of the size of accelerating region on transversal diffusion is of great interest. Moreover, it is important to note that our model can only be applicable when the transverse size of the region is larger than the transverse size of one avalanche. Otherwise, the feedback can be significantly affected by the diffusion of electrons in an avalanche, which was studied in <cit.>.
§ CONCLUSION
The main purpose of this work was to describe the transverse propagation of avalanches as a result of feedback. This was done using a two-dimensional diffusion equation. From the solution of the equation, a correction to the minimal conditions for the self-sustaining feedback was obtained.
It was shown that the effect of the limited transverse size of the accelerating region on the feedback coefficient is small if the transverse size of the region is much larger than the longitudinal one. It becomes necessary to take diffusion into account when the transverse size becomes smaller than the longitudinal one. In this case, the correction to the electric field required for infinite feedback becomes extremely significant. This result was obtained for the accelerating region with a longitudinal size L = 445.7 m.
The aim of further research will be to analytically or via Monte-Carlo simulation obtain the dependence of the diffusion coefficient and the average time of new generation formation on the electric field strength and the longitudinal length of the accelerating region.
|
http://arxiv.org/abs/2306.05294v1
|
20230607081856
|
Deep Learning with Partially Labeled Data for Radio Map Reconstruction
|
[
"Alkesandra Malkova",
"Massih-Reza Amini",
"Benoit Denis",
"Christophe Villien"
] |
eess.SP
|
[
"eess.SP",
"cs.IT",
"cs.LG",
"math.IT"
] |
In this paper, we address the problem of Received Signal Strength map reconstruction based on location-dependent radio measurements and utilizing side knowledge about the local region; for example, city plan, terrain height, gateway position.
Depending on the quantity of such prior side information, we employ Neural Architecture Search to find an optimized Neural Network model with the best architecture for each of the supposed settings.
We demonstrate that using additional side information enhances the final accuracy of the Received Signal Strength map reconstruction on three datasets that correspond to three major cities, particularly in sub-areas near the gateways where larger variations of the average received signal power are typically observed.
§ INTRODUCTION
Retrieving the exact position of connected objects has become an important feature of the Internet of Things (IoT). Such connected objects have become widespread over the last few years thanks to the low cost of radio integrated chips and sensors and the possibility of embedding them in a plurality of devices.
They can thus support the fast development of large-scale physical monitoring and crowdsensing systems (e.g., smart cities, factories, transportation). For location-dependent applications and services, the ability to associate an accurate location with physical data opens up huge opportunities <cit.>. For example, the fine-grained and dynamic update of air pollution and/or weather maps could benefit from geo-referenced mobile sensing <cit.>
(e.g., aboard taxis, buses, bicycles...), thus continuously complementing the data from static stations. One classical localization technique is the Global Positioning System (GPS), which has been widely used over the past decades. More recently, low-cost advanced solutions were proposed (like , Bi-band,..), but they still suffer from a high energy consumption that is not suitable for IoT applications.
As an alternative, one can opportunistically measure location-dependent radio metrics, like the Received Signal Strength (Indicator) (RSS/RSSI), Time (Difference) of Arrival, Angle of Arrival, etc., because these sensor nodes communicate with one or several gateways at the same time (e.g., while sending a packet to the infrastructure to store data in the cloud/server). Based on these metrics, there exist several methods to determine the node position: trilateration, triangulation, proximity detection, or fingerprinting <cit.>. We will focus on the last approach – fingerprinting <cit.> – which (ideally) requires a full map of the above-mentioned radio metrics covering the zone of interest. However, collecting metrics at each point of the zone of interest is impractical and time-costly in real-world scenarios; therefore, most approaches rely on sparse and non-uniformly distributed measurements.
In this sense, classical map interpolation techniques such as <cit.> or kriging <cit.> are used. Although these methods are relatively fast, they are quite weak in retrieving and predicting the complex and heterogeneous spatial patterns that are usually observed in real life signals (e.g., sudden and/or highly localized transient variations in the received radio metric due to specific environmental, local or topological effects).
Another approach consists on deterministic simulation such as Ray-Tracing tools <cit.>. Given some real field measurements and then calibrated over them, these models predict the radio propagation while simulating electromagnetic interactions with the environment. These technologies, however, need a complete description of the environment (properties of the materials of the obstacles, buildings, shape,etc.). Moreover, they are computationally complex, and in case of minor changes in the local area, these simulations should be re-run again. Recently, studies have employed machine learning for this task by considering radio maps as images and adapting neural network models that have been proposed for image completion. These models are based on the fully generated dataset by Ray-Tracing tools for predicting the signal propagation given the buildings mask and position of the transmitter <cit.>; or predicting the received power value for the Long Term Evolution () of the signal with use of additional information and neural networks <cit.> with handcrafted structures.
In this work we focus on received signal strength map reconstruction where only a small amount of ground-truth tagged measurements is available, preventing the use of existing models with handcrafted architectures, and where Ray-Tracing models cannot be applied due to the lack of information about the physical properties of the environment or due to their high computational complexity. Our approach is based on Neural Architecture Search (NAS) <cit.>, which aims to find an optimized model for this task. We show that by employing the latter technique, it is feasible to learn model parameters while simultaneously exploring the architecture. In addition, we employ unlabeled data in conjunction with ground-truth measurements in the training phase, as well as side information that accounts for the existence of buildings, to obtain knowledge and improve the model's performance. We assess our technique using three RSS map reconstruction collections, including one we produced for the city of Grenoble in France. In the case of the latter, we thoroughly examine its properties. In particular, we show that unlabeled data can effectively be used to find an efficient optimized model and that the side information provides valuable knowledge for learning. The obtained model is shown to have generalization ability on base stations that were not used in the training phase. The contribution of this paper is twofold:
* We propose a unified framework with the use of side information, for which we study the generalization ability of a neural network model which architecture is optimized over labeled and unlabeled data using side-information. This is an extension of the work of <cit.>.
* Furthermore, we provide empirical evaluation over three large-scale collections showing that the proposed approach is highly competitive compared to the state-of-the-art models in terms of quality metrics.
§ RELATED STATE OF THE ART
Classical techniques such as radial basis functions (RBF) or kriging <cit.> are simple and fast, but they are poor at predicting the complex and heterogeneous spatial patterns commonly observed in real-world radio signals (e.g., sudden and/or highly localized transient variations in the received signal due to specific environmental or topological effects, such as specific building shapes, presence of public furniture, ultra-navigation, etc.). Furthermore, data augmentation approaches for artificially increasing the number of measurements in radio map reconstruction issues have been developed.
The goal is to use the synthetic data created as extra data to train complex map interpolation models. However, these techniques need a highly thorough description of the physical environment and are unable to predict dynamic changes in the environment over time. A key bottleneck is their high computational complexity.
In the following, we go through some more relevant work on map reconstruction, including interpolation and data-augmentation techniques, as well as machine learning approaches.
§.§ Interpolation and data-augmentation techniques
Kriging or Gaussian process regression <cit.> is a prominent technique for radio map reconstruction in the wireless setting that takes into consideration the distance information between supplied measured locations while attempting to uncover their underlying 2D dependency.
Radial basis functions (RBF) <cit.> are another approach that simply considers the dependence on the distance between observed locations. As a result, this method is more adaptive and has been found to be more tolerant to some uncertainty <cit.>. Furthermore, in order to compare the performance of the RBF with different kernel functions for the reconstruction of LoRa signal strength maps, <cit.> divided all of the points in a database of outdoor measurements into training and testing subsets, with the linear kernel showing the best accuracy in both the standard deviation and the considered metric. The two approaches stated above (which depend on kernel techniques and the underlying spatial relationships of the input measurements) need a lot of input data to provide reliable interpolation results, making them sensitive to sparse training sets.
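For illustration, such an RBF interpolation of a sparse RSS map takes only a few lines with SciPy; the measurements below are synthetic placeholders, not data from the paper:
```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# hypothetical sparse RSS measurements: (x, y) positions in metres and RSS in dBm
rng = np.random.default_rng(0)
xy_obs = rng.uniform(0, 1000, size=(200, 2))
rss_obs = (-60 - 0.02 * np.linalg.norm(xy_obs - np.array([500.0, 500.0]), axis=1)
           + rng.normal(0, 2, 200))

# linear kernel, reported as the most accurate in the cited LoRa study
interp = RBFInterpolator(xy_obs, rss_obs, kernel='linear')

# reconstruct the map on a regular grid
gx, gy = np.meshgrid(np.linspace(0, 1000, 100), np.linspace(0, 1000, 100))
rss_map = interp(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
```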
These methods have consequently been considered in pair with crowdsensing, where, for example in <cit.>, to improve the performance of basic kriging, one calls for measuring the radio metric in new points/cells where the predicted value is still presumably imprecise. A quite similar crowdsensing method has also been applied in <cit.> after considering the problem as a matrix completion problem using singular value thresholding, where it is possible to ask for additional measurements in some specific cells where the algorithm has a low confidence in the predicted result. In our case though, we assume that we can just rely on a map with
few ground-truth initial measurements.
Another approach considered in the context of indoor wireless localization (with map reconstruction firstly) relies on both measured field data and an a priori path loss model that accounts for the effect of walls presence and attenuation between the transmitter and the receiver <cit.>
by using the wall matrix, which counts the number of walls along the path from the access point to the mobile location and penalize value according to that number. In some outdoor settings, training points are divided into a number of clusters of measured neighbors having specific distributions, and local route loss models are applied in an effort to capture localized wireless topology effects in each cluster <cit.>. However as parametric path loss models are usually quite imprecise, these techniques have a limited generalization capabilities and require additional impractical in-site (self-)calibration.
A quite similar approach, except the use of additional side information, is followed in <cit.>, where they propose an algorithm called SateLoc. Based on satellite images, it is then suggested to perform a segmentation of the areas “crossed” by a given radio link, depending on their type (e.g., terrain, water, forest, etc.). Then, proportionally to the size of the crossed region(s), power path loss contributions are computed according to a priori model parameters (i.e., associated with each environment type) and summed up to determine the end-to-end path loss value.
One more way to build or complete radio databases
stipulated in the context of fingerprinting based positioning
consists in relying on deterministic simulation approach, namely Ray-Tracing tools (e.g., <cit.>).
This technique aims at predicting in-site radio propagation (i.e., simulating electromagnetic interactions of transmitted radio waves within an environment). Once calibrated with a few real field measurements, such simulation data can relax initial metrology and deployment efforts (i.e., the number of required field measurements) to build an exploitable radio map, or even mitigate practical effects that may be harmful to positioning, such as the cross-device dispersion of radio characteristics (typically, between devices used for offline radio map calibration and those used for online positioning).
Nevertheless, these tools require a very detailed description of the physical environment (e.g., shape, constituting materials and dielectric properties of obstacles, walls...). Moreover, they are notorious for requiring high and likely prohibitive
computational complexity in real applications.
Finally, simulations must be re-run again, likely from scratch, each time minor changes are introduced in the environment, e.g. the impact of human activity (like changing crowd density, temporary radio link obstructions).
§.§ based models trained after data augmentation
There is more and more interest in application of machine and deep learning methods to the problem of map reconstruction. These approaches have shown an ability to capture unseen spatial patterns of local effects and unseen correlations. Until now, to the best of our knowledge, these algorithms were primarily trained over simulated datasets generated by data-augmentation approaches (that were mentioned above).
In <cit.>, given an urban environment, city geography, Tx location, and optionally path loss measurements and car positions, the authors introduce a UNet-based neural network called RadioUNet in the supervised learning setting, which outputs radio path loss estimates and is trained on a large set of data generated using the Dominant Path Model <cit.>.
<cit.> propose a two-stage transfer learning approach with Generative Adversarial Networks (GANs) to estimate the power spectrum maps in underlay cognitive radio networks.
The domain projecting (DP) framework is used to first project the source domain onto a neighboring domain.
The target domain's entire map is then rebuilt or reconstructed using the domain completing framework and the recovered features from the surrounding domain.
For training of the DP, fully known signal distribution maps have been used.
In another contribution, to improve the kriging predictions the authors used a feedforward neural network for path loss modelling <cit.>, as conventional parametric path loss models have a small number of parameters and do not necessarily consider shadowing besides the average power attenuation.
Apart from wireless applications, similar problems of map reconstruction also exist in other domains. In <cit.> for instance, the goal is to create the full topographic maps of mountains area given sparse measurements of the altitudes values.
For this purpose, they use a GAN architecture, where the discriminator compares pairs of the input data and the so-called “received” map, either produced by the generator or based on the given full true map.
In other work <cit.>, the authors estimate the sea surface temperature with a deep architecture in an unsupervised setting, given a sequence of corrupted observations (with different cloud coverage) and a known mask distribution.
Another close more general problem
making extensive use of neural networks is the image inpainting problem, where one needs to recover missing pixels in a single partial image. By analogy, this kind of framework could be applied in our context too, by considering the radio map as an image, where each pixel corresponds to the signal level at a given node location. It has been shown in <cit.> that interpreting the collected measurements on a map as an image with only some known pixels gives overall better results. The problem, however, is the computation time, as this approach does not give a generalized model and does not use the additional information about the local environment.
Usually, such image inpainting problems can be solved by minimizing a loss between true and estimated pixels, where the former are artificially and uniformly removed from the initial full image. This is however not possible and not realistic in our case, as only a few ground-truth field measurements collected on the map can be used to reconstruct the entire image.
In contrast to the previous approaches, in our study, we consider practical situations where data-augmentation techniques cannot be used, mainly because of unknown environment characteristics and computational limitations, and where only a small amount of collected ground-truth measurements is available.
Finally, a few contributions aim at predicting the received power value
based on neural networks and additional information. For instance, in <cit.>, received power values are predicted at exact points, given meta information such as the radio characteristics (e.g., transmission specifications or the relationship between Rx and Tx, like horizontal/vertical angle, mechanical/electrical tilt angle, 2D/3D distance, base station antenna orientation, etc.)
and/or prior information about the buildings (e.g., height and presence). In case the latter information is missing, predictions can be made also by means of satellite images (e.g., paper <cit.>). In these papers though, the map reconstruction cannot be performed directly. As the prediction is realized for each point separately, it is thus time consuming. Moreover, the authors do not take into account the local signal values, but only the physical parameters and physical surroundings (similarly to standard path loss models).
§.§ Neural Architecture Search
The creation and selection of features in many tasks are done manually in general; this critical phase for some conventional machine learning algorithms might be time-consuming and costly. Neural Networks address this challenge by learning feature extractors in an end-to-end manner.
These feature extractors, on the other hand, rely on architectures that are still manually constructed, and with the rapid development of the field, designing an appropriate model has become onerous in many cases.
This problem has recently been addressed by a new field of research called Neural Architecture Search (NAS) <cit.>. In a variety of applications, such as image segmentation and classification, Neural Networks with automatically found architectures have already outperformed “conventional” models with hand-crafted structures.
Different types of existing NAS methods are described below.
In the last few years, research on the topic of NAS has attracted huge interest in different fields. Among various studies, there are techniques based on diverse methods like Reinforcement Learning <cit.>, Evolutionary Algorithms <cit.> or Bayesian Optimization <cit.>.
Lately, gradient-based methods have become more and more popular. For example, one of the first methods based on this technique was presented in <cit.> and is called DARTS; it uses a continuous relaxation to simultaneously optimize the structure of a cell and the weights of the operations relative to each cell. After finding the best combinations, blocks are stacked manually to produce a neural network.
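To make the continuous relaxation concrete, below is a minimal sketch of a DARTS-style mixed operation (our own simplified illustration in PyTorch, with an arbitrarily chosen candidate set); the architecture parameters α are trained by gradient descent together with the operation weights:
```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """Continuous relaxation of the operation choice on one edge of a cell:
    the output is a softmax-weighted sum of all candidate operations, so the
    architecture weights alpha can be optimized jointly with the weights."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.Conv2d(channels, channels, 5, padding=2, bias=False),
            nn.AvgPool2d(3, stride=1, padding=1),
        ])
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=-1)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```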
Based on DARTS, more complex methods have appeared, such as AutoDeepLab <cit.>, in which a network is optimized at 3 levels: (i) the parameters of the operations, (ii) the cell structure and (iii) the macro-structure of the network that is stacked manually. Despite the fact that a complex representation leads to powerful architectures, this technique has some drawbacks, such as the fact that the generated architecture is single-path, which means it does not fully exploit the representation's capabilities. Moreover, as the search phase is done over a fixed network architecture, it might not be the same between different runs; thus, it is complicated to use transfer learning, and the impact of training from scratch can be significant. To overcome these limitations, one possible technique is to use Dynamic Routing as proposed in <cit.>. This approach is different from the traditional gradient-based methods proposed for NAS in the sense that it does not look for a specific fixed architecture but generates a dynamic path in a mesh of cells on the fly, without searching, by weighting the paths during the training procedure.
To the best of our knowledge, there are no studies in the field of signal strength map reconstruction applying NAS to field measurements.
In our study, we look at how well neural networks can extract complex features and their relationships to signal strength in the local area or under similar conditions, as well as their ability to take into account additional environmental information without having access to more complex physical details. This is performed through a search for a model with an optimized architecture adapted to the task. For this we consider the genetic algorithm of architecture search as it has been shown in <cit.> that it outperforms the dynamic routing search.
§.§ Semi-Supervised Learning
The constitution of coherent and consistent labeled collections is often done manually. This necessitates tremendous effort, which is generally time consuming and, in some situations, unrealistic. The learning community has been looking at the concept of semi-supervised learning for discrimination and modeling tasks since the end of the 1990s, based on the observation that labeled data is expensive while unlabeled data is plentiful and contains information on the problem we are trying to solve.
*Framework and definitions In this case, the labeled examples are generally assumed to be too few to obtain a good estimate of the association sought between the input space and the output space and the aim is to use unlabeled examples in order to obtain a better estimate.
For this, we will assume available a set of labeled training examples S = {(x_i, y_i) | i = 1, …, m}∈ (𝒳×𝒴)^m supposed to be generated i.i.d. from an underlying distribution 𝒟; and a set of unlabeled examples X_u = {x_i | i = m+1, …, m+u} that are drawn i.i.d. from the marginal distribution ℙ(x).
If X_u is empty, we fall back on the problem of supervised learning. If S is empty, we deal with an unsupervised learning problem. During learning, semi-supervised algorithms estimate labels for unlabeled examples. We note ỹ the pseudo-label of an unlabeled example x∈ X_u estimated by these algorithms. The interest of semi-supervised learning arises when u = |X_u| ≫ m = |S| and the goal is that the knowledge one gains about the marginal distribution, ℙ(x), through the unlabeled examples can provide information useful in inferring ℙ(y| x). If this goal is not achieved, semi-supervised learning will be less efficient than supervised learning and it may even happen that the use of unlabeled data degrades the performance of the learned prediction function <cit.>. It is then necessary to formulate working hypotheses for taking unlabeled data into account in the supervised learning of a prediction function.
*Inductive vs Transductive Learning Before presenting these hypotheses, we note that semi-supervised learning can be formulated in two different possible settings, namely transductive and inductive learning. The aim in inductive case is to minimize the generalization risk with respect to the distribution 𝒟, by training a model over a finite number of training samples <cit.>. This setting is also the most common in semi-supervised learning.
Despite this, accurate predictions for only the unlabeled cases in X_u are more crucial in some applications than finding a more general rule for all existing examples drawn i.i.d. with respect to 𝒟.
It was shown in <cit.> that in this scenario, it would be preferable to ignore the more general problem and concentrate on an intermediary problem known as transductive learning <cit.>. As a result, rather than looking for a general rule first (as in inductive learning), the goal of learning in this situation would be to predict the class labels of unlabeled cases by having the smallest average error (called the transductive error). These two settings are depicted in Figure <ref>.
*Central assumptions
The three major hypotheses in SSL that are: smoothness assumption, cluster assumption and manifold assumption.
The basic assumption in semi-supervised learning, called the smoothness assumption states that:
ℋ_1: If two examples x_1 and x_2 are close in a high density region, then their class labels y_1 and y_2 should be similar.
This assumption implies that if two points belong to the same group, then their output label is likely to be the same. If, on the other hand, they were separated by a low density region, then their outputs would be different.
Now suppose that the examples of the same class form a partition. The unlabeled data could then help find the boundary of each partition more efficiently than if only the labeled examples were used. So one way to use the unlabeled data would be to find the partitions with a mixture pattern and then assign class labels to the partitions using the labeled data they contain. The underlying hypothesis of the latter, called the cluster assumption, can be formulated by:
ℋ_2: If two examples x_1 and x_2 are in the same group, then they are likely to belong to the same class y.
This hypothesis could be understood as follows: if there is a group formed by a dense set of examples, then it is unlikely that they can belong to different classes. This is not equivalent to saying that a class is formed by a single group of examples, but that it is unlikely to find two examples belonging to different classes in the same group. According to the previous continuity hypothesis, if we consider the partitions of examples as regions of high density, another formulation of the partition hypothesis is that the decision boundary passes through regions of low density. This assumption is the basis of generative and discriminative methods for semi-supervised learning.
For high-dimensional problems, these two hypotheses may not be accurate since the search for densities is often based on a notion of distance which loses its meaning in these cases. A third hypothesis, called the manifold assumption, on which some semi-supervised models are based, then stipulates that:
ℋ_3: For high-dimensional problems, the examples are on locally Euclidean topological spaces (or geometric manifolds) of low dimension.
In the following, we will present some classic models of the three families of semi-supervised methods resulting from the previous hypotheses.
There are three main families of SSL approaches that have been developed according to the above assumptions.
*Generative Methods
Semi-supervised learning with generative models involves estimating the conditional density ℙ(x | y, Θ) using a maximum likelihood technique to estimate the parameters Θ of the model. In this case, the hidden variables associated with the labeled examples are known in advance and correspond to the class of these examples. The basic hypothesis of these models is thus the cluster assumption (Hypothesis ℋ_2) since, in this case, each partition of unlabeled examples corresponds to a class <cit.>. We can thus interpret semi-supervised learning with generative models (a) as a supervised classification where we have additional information on the probability density ℙ(x) of the data, or (b) as a partition with additional information on the class labels of a subset of examples <cit.>. If the hypothesis generating the data is known, generative models can become very powerful <cit.>.
*Discriminant Methods
The disadvantage of generative models is that, when the distributional assumptions no longer hold, their use tends to degrade performance compared to learning a model from the labeled examples alone <cit.>. This finding has motivated many works to overcome this situation. The first works were based on the so-called decision-directed technique (or self-training), proposed in the context of adaptive signal processing, which consists in using the current predictions of the model on unlabeled examples in order to assign them pseudo-labels and include the pseudo-labeled examples in the training process. This process of pseudo-labeling and learning is repeated until no more unlabeled examples can be pseudo-labeled. In the case where class pseudo-labels are assigned to unlabeled examples by thresholding the outputs of the classifier on these examples, it can be shown that the self-training algorithm operates according to the cluster assumption <cit.>.
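To make the self-training procedure concrete, here is a minimal sketch (our own illustration, not the exact algorithm of the cited works): unlabeled examples whose predicted class probability exceeds a threshold are pseudo-labeled and added to the training set. The classifier interface, threshold and stopping rule are illustrative assumptions.

```python
import numpy as np

def self_train(clf, X_l, y_l, X_u, threshold=0.9, max_iter=20):
    """Iteratively pseudo-label the most confident unlabeled examples."""
    X_train, y_train, X_rest = X_l.copy(), y_l.copy(), X_u.copy()
    for _ in range(max_iter):
        if len(X_rest) == 0:
            break
        clf.fit(X_train, y_train)
        proba = clf.predict_proba(X_rest)
        confident = proba.max(axis=1) >= threshold   # threshold the classifier outputs
        if not confident.any():                      # no confident example left: stop
            break
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]
        X_train = np.vstack([X_train, X_rest[confident]])
        y_train = np.concatenate([y_train, pseudo])
        X_rest = X_rest[~confident]
    return clf.fit(X_train, y_train)
```

Any scikit-learn-style estimator exposing `fit`/`predict_proba` can be plugged in as `clf`.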
*Graph-based Methods
Generative and discriminant methods proposed in semi-supervised learning exploit the geometry of the data through density estimation techniques or through the predictions of a learned model. The last family of semi-supervised methods uses an empirical graph G=(V,E) built on the labeled and unlabeled examples to express their geometry. The nodes V={1,…,m+u} of this graph represent the training examples and the edges E encode the similarities between the examples. These similarities are usually given by a positive symmetric matrix W=[W_ij]_i,j∈ℝ^(m+u)×(m+u), where the weight W_ij is non-zero if and only if the examples with indices i and j are connected, or equivalently, if (i,j) is an edge of the graph G.
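As an illustration of this construction, the sketch below builds a sparse k-nearest-neighbour similarity matrix W with Gaussian weights and runs a simple iterative label propagation over it; the number of neighbours, kernel width and propagation rule are illustrative choices, not those of a specific method cited here.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_similarity(X, k=10, sigma=1.0):
    """Sparse symmetric similarity matrix W with Gaussian weights on a k-NN graph."""
    d = kneighbors_graph(X, n_neighbors=k, mode="distance", include_self=False)
    d = d.maximum(d.T)                                   # symmetrize the graph
    W = d.copy()
    W.data = np.exp(-(d.data ** 2) / (2.0 * sigma ** 2))
    return W

def propagate_labels(W, y, n_iter=50):
    """Iterative label propagation; y = -1 marks unlabeled nodes."""
    labeled = y >= 0
    classes = np.unique(y[labeled])
    onehot = (y[labeled][:, None] == classes[None, :]).astype(float)
    F = np.zeros((len(y), len(classes)))
    F[labeled] = onehot
    D_inv = 1.0 / np.maximum(np.asarray(W.sum(axis=1)).ravel(), 1e-12)
    for _ in range(n_iter):
        F = D_inv[:, None] * (W @ F)                     # average the neighbours' scores
        F[labeled] = onehot                              # clamp the known labels
    return classes[F.argmax(axis=1)]
```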
§ APPLICATION TO THE STATED MAP RECONSTRUCTION PROBLEM
Additional information can be represented in different manners and included in the algorithm in a variety of ways, such as independent channels, parallel channel inputs, directly in the learning objective, or in the ranking metric used during model selection.
We adapted the proposed algorithm presented in <cit.> for multi-channel input by combining additional context information with the data in the model's input; and we assessed the model's performance on unseen base stations that were not utilized in the learning process.
Here, we suppose we have a small set of n available base stations (X^j)_1⩽ j⩽ n. For each given matrix of base station X^j; j∈{1,…,n}, let Y^j∈ℝ^H× W be its corresponding 2D matrix of signal strength measurements, where H× W is the size (in number of grid elements) of the zone of interest. In practice, we have access only to some ground-truth measurements Y^j_m, meaning that Y^j_m = Y^j ⊙ M^j, with M^j ∈{0, 1}^H × W a binary mask of available measurements, and ⊙ the Hadamard product. Here we assume sparsity, meaning that the number of non-null elements in Y^j_m is much lower than the overall size H× W. For each base station X^j we estimate the unknown measurements Ỹ_u^j in Y^j with an initial interpolation given (X^j_m, Y^j_m), so that we have a new subset (X^j_u, Ỹ^j_u), where X^j_m=X^j ⊙ M^j contains the 2D node locations of Y^j_m in X^j, and the values in Ỹ^j_u are initially given by the interpolation predictions on X^j_u, the 2D node locations (or equivalently, the cell/pixel coordinates) with respect to the base station X^j which do not have measurements. In our semi-supervised setting, the values of the unknown measurements in Ỹ^j_u will evolve by using the predictions of the current model during the learning process.
We further decompose the measurements set Y^j_m into two parts: Y^j_ℓ (for training), Y^j_v (for validation), such that Y^j_ℓ⊕ Y^j_v = Y^j_m, where ⊕ is the matrix addition operation. Let X^j_ℓ, X^j_v be the associated 2D node locations pf Y^j_ℓ and Y^j_v in X^j.
In our experiments the number of base stations n is small, so in order to increase the size of the labeled and pseudo-labeled training samples, we cut the initial measurement maps (Y_m^j⊕Ỹ^j_u)_1⩽ j⩽ n into smaller matrices, which results in the sets (S^j,i)_1⩽ j⩽ n, 1⩽ i⩽ m_j, where the sets S^j,i⊆ Y_m^j⊕Ỹ^j_u; ∀ i∈{1,…,m_j} are shifted with overlapping of the points. Each submatrix S^j,i is hence divided into a labeled part, S^j,i_ℓ∪ S^j,i_v, and a pseudo-labeled part S^j,i_u (points first filled by interpolation and then by the predictions of the current model). To each submatrix S^j,i corresponds a 2D location X^j,i⊂ X. Figure <ref> gives a pictorial representation of the notations.
§ ARCHITECTURE SEARCH WITH A GENETIC ALGORITHM FOR MAP RECONSTRUCTION USING SIDE INFORMATION
The model of <cit.> is one of the most widely used neural network architectures that can handle multiple channels and hence take side information as well as the measurement map as input. As additional context (or side) information, we have considered:
* information about buildings presence, which was taken from the open-source OpenStreetMap dataset <cit.> – matrix of binary 0-1 values, denoted as “buildings map” further (Figure <ref> left);
* the amount of buildings crossed by the signal from the base station to each point of the map, by analogy with the data representation used in indoor localisation and map reconstruction with the number of walls crossed by the signal – a matrix of non-negative integer values, denoted as the “buildings count map” further;
* information about the distance from the base station. By the log-normal path loss model (<cit.>: the signal strength is proportional to -10nlog_10(d) up to an additive term, where n is a path loss exponent and d is the distance to the base station), we can take the -log_10(distance) transformation to emphasize the zones closest to the base station – a matrix of continuous values, denoted as the “distance map” further;
* information about the relief represented by DSM (digital surface model): terrain elevation summed with artificial features of the environment (buildings, vegetation..), see Figure <ref>. This information was taken from the open-source dataset[<https://doi.org/10.5069/G94M92HB>] provided by Japan Aerospace Exploration Agency with 30m accuracy – matrix of integer values, denoted as “elevation map” further.
Our objective is to find the optimal architecture for using these side-information and study the generalization ability of obtained models for map reconstruction.
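To make the multi-channel representation concrete, here is a minimal sketch (our own, with hypothetical array names) that builds the -log10 distance map and stacks it with the measurement map, its mask, and the other side-information matrices as input channels for a convolutional model.

```python
import numpy as np

def distance_map(shape, bs_rc, cell_size_m=10.0):
    """-log10(distance) to the base station located at grid index bs_rc."""
    rows, cols = np.indices(shape)
    d = np.hypot(rows - bs_rc[0], cols - bs_rc[1]) * cell_size_m
    return -np.log10(np.maximum(d, cell_size_m))        # clip to avoid log(0) at the BS cell

def stack_channels(rss_map, mask, buildings, buildings_count, elevation, bs_rc):
    """Concatenate the measurement map and the side-information matrices."""
    dist = distance_map(rss_map.shape, bs_rc)
    chans = [rss_map * mask, mask.astype(float), dist,
             buildings, buildings_count, elevation]
    return np.stack(chans, axis=0)                       # (C, H, W) input tensor
```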
The architecture search is performed with a genetic algorithm similar to the one presented in <cit.>, as it gave better performance in terms of obtained accuracy.
From the sets (S^j,i)_1⩽ j⩽ n, 1⩽ i⩽ m_j, we use an evolutionary algorithm similar to <cit.> to search for the most efficient architecture, represented as a Directed Acyclic Graph. Here, the validation sets (S^j,i_v)_1⩽ j⩽ n, 1⩽ i⩽ m_j are put aside for hyperparameter tuning.
The edges of this DAG represent data flow with only one input for each node, which is a single operation chosen among a set of candidate operations. We consider usual operations from the image processing field, that is, a mixture of convolutional and pooling layers. We consider three variants of 2D convolutional layers with kernels of size 3, 5 and 7, and two types of pooling layers that compute either the average or the maximum on a filter of size 4. Candidate architectures are built from randomly selected operations and the corresponding models are trained over the set (S^j,i_ℓ)_1⩽ j⩽ n, 1⩽ i⩽ m_j and its (possible) combinations with side information. The resulting architectures are then ranked according to the pixel-wise error between the interpolated result of the outputs over (S^j,i_v)_1⩽ j⩽ n, 1⩽ i⩽ m_j and the interpolated measurements, filtering out the buildings. As error functions, we have considered the Mean Absolute Error (MAE) or its Normalized version (NMAE), where we additionally weight the pixel error according to the distance matrix value. The best ranked model is then selected for mutation and placed in the trained population. The oldest and worst in the rank are then removed to keep the population size equal to 20 models.
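The following simplified sketch illustrates the kind of evolutionary loop described above. The DAG encoding, the `train_and_score` routine and the replacement rule (here a plain aging-evolution variant that only drops the oldest candidate) are placeholders and do not reproduce the exact procedure of the cited algorithm.

```python
import random

CANDIDATE_OPS = ["conv3x3", "conv5x5", "conv7x7", "avgpool4", "maxpool4"]

def random_architecture(n_nodes=8):
    # Each DAG node takes the output of a single earlier node (-1 = network input)
    # and applies one operation drawn from the candidate set.
    return [(random.randrange(i) if i > 0 else -1, random.choice(CANDIDATE_OPS))
            for i in range(n_nodes)]

def mutate(arch):
    child = list(arch)
    i = random.randrange(len(child))
    parent, _ = child[i]
    child[i] = (parent, random.choice(CANDIDATE_OPS))    # swap the operation of one node
    return child

def evolve(train_and_score, population_size=20, n_cycles=100):
    """train_and_score(arch) -> validation error (e.g. MAE on the S_v sets); lower is better."""
    population = [(a, train_and_score(a))
                  for a in (random_architecture() for _ in range(population_size))]
    for _ in range(n_cycles):
        best_arch, _ = min(population, key=lambda p: p[1])
        child = mutate(best_arch)
        population.append((child, train_and_score(child)))
        population.pop(0)                                # drop the oldest to keep the size fixed
    return min(population, key=lambda p: p[1])[0]
```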
Once the model with the optimized architecture, f_θ, is found by the search, we consider the following two scenarios for learning its corresponding parameters θ by minimizing
ℒ(f_θ,S_ℓ∪ S_u) = 1/n∑_j=1^n 1/m_j∑_i=1^m_j[ 1/|S^j,i_ℓ|∑_(x,y)∈ S^j,i_ℓℓ(y,f_θ(x)) + 1/|S^j,i_u|∑_(x,ỹ)∈ S^j,i_uℓ(ỹ,f_θ(x)) ]
These two scenarios relate to obtaining model parameters on labeled and pseudo-labeled measurements using just interpolated data (scenario 1) or predictions from a first model learnt on these data (scenario 2). The overall learning process is depicted in Algorithm <ref>.
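A minimal PyTorch-style sketch of one optimisation step minimising the loss above on a batch of patches is given below. The L1 pixel loss (matching the MAE metric), the mask bookkeeping and the batch layout are our own assumptions; `model` stands for any convolutional network, e.g. the one returned by the architecture search.

```python
import torch

def semi_supervised_step(model, optimizer, batch):
    """One optimisation step on a batch of patches.

    batch: x (input channels), y (target map), mask_l (real measurements),
    mask_u (pseudo-labelled pixels: initial interpolation in scenario 1,
    predictions of a previously trained model in scenario 2).
    """
    x, y, mask_l, mask_u = batch
    pred = model(x)
    loss_l = ((pred - y).abs() * mask_l).sum() / mask_l.sum().clamp(min=1)
    loss_u = ((pred - y).abs() * mask_u).sum() / mask_u.sum().clamp(min=1)
    loss = loss_l + loss_u
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```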
§ EVALUATION SETUP
We have considered three case studies from Paris, Antwerp (Belgium) and Grenoble.
In the area of data-based research and in the field of machine learning singularly, it is usually hard to find large open-source datasets made of real data. In some works however, alternatively (or as a complement) to using real data, synthetic data can be generated, for instance through deterministic simulations.
In our study, we make use of three distinct databases of outdoor measurements with respect to multiple base stations. The first one was generated through a ray-tracing tool in the city of Paris, France. The second database, which is publicly available (see <cit.>), consists of real GPS-tagged LoRaWAN measurements that were collected in the city of Antwerp (Belgium). Finally, a third database, which is also made of real GPS-tagged LoRaWAN measurements, was specifically generated in the city of Grenoble (France), in the context of this study.
§.§ Paris dataset
This first dataset is made of synthetic outdoor measurements, which were simulated in an urban Long Term Evolution (LTE) cellular context with a ray-tracing propagation tool named VOLCANO (commercialized by SIRADEL). These simulations were calibrated by means of side field measurements <cit.>. This kind of deterministic tool makes use of both the deployment information (typically, the relative positions of mobile nodes and base stations) and the description of the physical environment (i.e., a city layout with a faceted description of the buildings, along with their constituting materials) to predict explicitly the electromagnetic interactions of the multipath radio signal between a transmitter and a receiver. Beyond the main limitations already mentioned in Section <ref>, regarding mostly computational complexity and prior information, we acknowledge a certain number of discrepancies or mismatches in comparison with the two other datasets based on real measurements. For example, in the simulated scenario, the dynamic range of the observed signal strength is continuous in the interval [-190, -60] dBm, while for the real measurement data a receiver sensitivity floor of -120 dBm is imposed. Moreover, the available simulation data was already pre-aggregated into cells, thus somehow imposing the finest granularity. The overall scene is 1000m× 1000m, each pixel being 2m × 2m, thus forming a matrix of size 500 × 500. The area considered in these simulations is located in Paris between Champ de Mars (South-West), Faubourg Saint Germain (South), Invalides (East), and Quai Branly / d'Orsay (North), as shown in Figure <ref>. For each pixel, the signal strength value was simulated with respect to 6 different base stations. An example is given for one of these base stations in Figure <ref>. Further details regarding the considered simulation settings can be found in <cit.>.
§.§ Antwerp dataset
Measurement campaign and experimental settings
The LoRaWAN dataset was collected in the urban area of the city centre of Antwerp from 17 November 2017 until 5 February 2018 <cit.>, <cit.>.
The dataset consists of 123,529 LoRaWAN messages with GPS coordinates on the map and the corresponding signal strength measurements for each location. It was collected over a network operated by Proximus (a nation-wide network) by twenty postal service cars equipped with The City of Things hardware. The latitude, longitude and Horizontal Dilution of Precision information were obtained by the Firefly X1 GPS receiver and then sent in a LoRaWAN message by the IM880B-L radio module in the 868 MHz band. The interval between adjacent messages ranged from 30 s to 5 min, depending on the Spreading Factor used.
The information was collected for 68 detected base stations in the initial database. We filtered out the stations which had overall fewer than 10,000 messages and/or which were located far from the collection zone and thus exhibited a flat signal. Finally, we considered 9 base stations – from BS'_1 to BS'_9 (see Figure <ref>).
The initial dataset contains, for each message, an identifier of the receiving BS or gateway, the receiving time of the message (RX time), the Spreading Factor, the Horizontal Dilution of Precision, and the latitude and longitude.
Dataset preprocessing and analysis.
As an example, in this part we explain the way the dataset was processed for the subsequent application. We aggregated the received power into cells of size 10 meters × 10 meters (10 m × 10 m), then averaged this power and translated it back into signal strength. To perform this aggregation, we measured the distance from the base station location based on local East, North, Up (ENU) coordinates.
To compute the measurements density after data aggregation into cells of size 10m × 10m, we considered the close zone around the base station location of the size 3680m × 3680m. As an example, in the following Table <ref> there is an information about the first three considered base stations from this dataset.
We consider the zone of the full city, of size 7000m × 7000m, which covers the base station positions and most of the collected measurements in the city area. We performed the same 10m × 10m aggregation.
In the initial dataset, if no signal was captured at a visited point on the map, this point was marked as -200 dBm for the corresponding base station, so in Figure <ref> the informative range of the signal values lies in [-120; -60] dBm, where the left boundary corresponds to the sensitivity of the device.
§.§ Grenoble dataset
Measurement campaign and experimental settings.
An experimental campaign of LoRa measurement collection was conducted (and is still being extended) in the real urban environment of the city of Grenoble. This data is valuable for future applications, especially because of the limited amount of data available for research purposes.
The dataset is still being enlarged (with the installation of new base stations, the extension of the amount of collected measurements, etc.).
The data is collected for several base stations installed in Grenoble and consists of several parameters such as the latitude, the longitude, and the corresponding signal strength value for each recognized base station. The collected data is similar to the one described for Antwerp, but has a slightly different structure, see Table <ref>.
The data was collected by different types of movements: walking of pedestrians, riding a bike or driving a car.
Total amount of stored lines in the database collected from 13-01-2021 to 11-01-2022 is 1574588. The data was collected in the frequency band of 868 MHz in the LoRaWAN network by several users with personal tags.
One example of stored data is shown in Table <ref>.
To collect the data, COTS telecom-grade iBTS gateways from the manufacturer Kerlink (see Figure <ref>), based on the LoRaWAN technology, have been used. These gateways have fine time-stamping capability and are synchronized with GPS time through a Pulse-per-Second signal generated by a GPS receiver included in the gateway, with an accuracy of a few nanoseconds. In LoRaWAN technology, each of the tag uplink packets can be received by more than one base station, depending on the local structure (which causes interference) and mainly on path loss. To store the information received from the gateways, a LoRaWAN Network Server was used, as shown in Table <ref> (the amount of stored metrics is larger – e.g., Signal-to-Noise Ratio, Time of Arrival, uplink network parameters such as frequency and DataRate – but here we focus on the data used in our study).
During the data collection, the deployment characteristics and the number of base stations differed. We consider the second version of the dataset (Grenoble-2), consisting of four base stations located in Grenoble and one which was far from the city region and thus was discarded because of its flat signal in the zone with the largest amount of measurements (the first version can be found in <cit.>). This version will be used in the validation of the results in the experiments of Section <ref>. The positions of the four considered base stations are shown in Figure <ref>. This deployment is also interesting as one of the base stations (BS_3) is installed on a mountain peak, higher than all the other base stations, and is thus quite specific to this database.
Dataset preprocessing and analysis.
First, we needed to filter out of the dataset the unreliable data resulting from errors during the data collection or from measurement artefacts. For example, the recorded position could be significantly different from the real position (due to a lack of visibility to GPS satellites) or, due to a specificity in the tag design, data transmission could still occur while charging indoors, thus giving both a wrong signal strength value and/or a wrong position. Regarding the latter issue, which is quite obvious to detect, we simply rejected all the measurements exhibiting too high values and/or being static for a long time, which were most likely collected during the charging of the device. More precisely, we first detected the base stations for which the received signal strength was higher than -55 dBm for the entire acquisition time sequence and then removed the corresponding measurement points collected at the same time for this device with respect to all the other base stations.
However, values saturating at short distances or reaching receiver sensitivity at large distances were preserved in the database, for being somehow indirectly indicative of the tag distance to the BS.
Finally, we filtered out all the measurements for which the latitude was not valid (out of tolerated range).
Data aggregation per cell.
Just like for the Antwerp dataset, after removing outliers/artefacts, we then aggregated the signal into cells. We converted the received power into milliwatts (as [dBm] = 10log_10[mW]), computed its average per cell for cells of size 10 m × 10 m, and converted the result back into signal strength values.
To perform this aggregation, we measured the distance from the base station BS_1 location considered to be (0,0) 2D Cartesian coordinate based on local East, North, Up coordinates.
Finally, we considered an overall area of interest of 3680m × 3680m (also for the radio mapping application), which covers the entire city, while containing most of the deployed base stations, as shown in the Figure <ref>.
To compare the std values of the measurements per cell under different conditions, we considered two aggregation cell sizes: 50 m × 50 m and 10 m × 10 m, as shown in Figure <ref>. In the case of 50 m × 50 m aggregation cells, the amount of informative pixels (i.e., visited pixels with sufficient measurements) obviously decreases but, at the same time, the aggregated value is more stable as a function of space (from pixel to pixel), while with 10 m × 10 m aggregation cells we observe significantly larger fluctuations of the average received power as a function of space, but a larger amount of informative points becomes available for mapping.
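The aggregation step can be sketched as follows, with hypothetical column names `east`, `north`, `rssi_dbm` and coordinates assumed to be already shifted into the area of interest; powers are averaged in milliwatts and converted back to dBm.

```python
import numpy as np
import pandas as pd

def aggregate_rssi(df, cell_m=10.0, area_m=3680.0):
    """Average received power per cell_m x cell_m cell and return a 2D map (dBm)."""
    n = int(area_m // cell_m)
    ix = (df["east"] // cell_m).astype(int)       # assumes coordinates in [0, area_m)
    iy = (df["north"] // cell_m).astype(int)
    ok = ix.between(0, n - 1) & iy.between(0, n - 1)
    d = df[ok].copy()
    d["mw"] = 10.0 ** (d["rssi_dbm"] / 10.0)      # dBm -> mW before averaging
    g = d.groupby([iy[ok], ix[ok]])["mw"].mean()
    grid = np.full((n, n), np.nan)
    rows, cols = zip(*g.index)
    grid[list(rows), list(cols)] = 10.0 * np.log10(g.values)   # back to dBm
    return grid
```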
Signal strength dispersion per cell
The empirical standard deviation std of the collected measurements per cell after data aggregation has also been calculated for the two previous cell sizes, so as to study its distribution over cells having at least 3 measurements. First, the dependence of std on the cell size is analyzed on both Figure <ref> and Table <ref>. While comparing the two settings, the distribution mean looks similar, while other characteristics differ only marginally. For the 50 m × 50 m cell size, it turns out that the points with high std are mostly located closer to the base station, while for the 10 m × 10 m granularity, the std values are distributed more uniformly over the entire considered region of 368 by 368 cells.
It comes from the fact that, within typical 50 m × 50 m cells close to the BS, the average signal dynamic is such that the dispersion around the cell average value (i.e., the average of all the measurements collected in this cell) between the minimum and the maximum measurement values (i.e., even besides fast fading fluctuations) is naturally much larger than in the 10 m × 10 m case. In other words, the fine-grain average deterministic range-dependent power decay is interpreted as extra random fluctuations in 50 m × 50 m cells, due to a loose spatial grid.
Thereby, to preserve more information about the variability of the signal while solving the problem of map reconstruction, we will consider a 10 m × 10 m cell granularity in the following.
Then, keeping a 10 m × 10 m cell size, we further investigate the influence of the minimum amount of available measurement points per cell (spanning from 3 up to 30 measurements), with or without removing 10% of the points having the largest variance of measurements divided by the number of samples after in-cell data aggregation
(See Figure <ref>). This indicator indeed gives a hint on the capability to reduce fast fading dispersion through the coherent integration of in-cell instantaneous measurements (i.e., variance of residual dispersion after in-cell averaging).
After filtering out the data (Figure <ref>), the overall empirical distribution shape looks rather similar, even if its standard deviation is clearly decreased, as expected. This typically contributes to limiting the number of cell occurrences hosting a std larger than 10 dB, which are expected to be very harmful to the fingerprinting process (typically, by preventing the suppression of fast fading through averaging).
Beyond, as a relatively limited amount of input points could be visited physically during the collection campaign within this experimental dataset, when the minimum number of points to keep the cell is too demanding, the number of exploitable cells decreases drastically while the distribution characteristics over the cells do not vary much.
Accordingly, in terms of data preprocessing strategy, in the following (for further model parameters extraction or before applying our map interpolation algorithms), we will systematically reject 10% of the cells with the highest variance divided by number of measurements, while keeping 10 m × 10 m cells with at least 3 measurements.
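A small sketch of this filtering strategy, assuming the per-cell statistics have already been computed into a DataFrame with hypothetical columns `mean_dbm`, `var_dbm` and `count`:

```python
import pandas as pd

def filter_cells(cells, min_count=3, reject_frac=0.10):
    """Keep cells with >= min_count samples, then drop the reject_frac of cells
    whose (variance / number of samples) is largest."""
    kept = cells[cells["count"] >= min_count].copy()
    kept["score"] = kept["var_dbm"] / kept["count"]
    cutoff = kept["score"].quantile(1.0 - reject_frac)
    return kept[kept["score"] <= cutoff].drop(columns="score")
```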
Extraction of path loss model parameters
For any tag position ν=(x,y), we consider a set of independent average received power measurements {P_i^dBm}, i=1,…,N (in dBm) with respect to N base stations, which are assumed to be affected by zero-mean, normally distributed noise with respective variances {σ^2_i}, according to the classical log-normal path loss model of Eq. (<ref>):
P_i^dBm(ν) = P_0,i^dBm - 10 n log_10(d_i(ν)/d_0,i) + w_i,
where d_i(ν) = √((x-x_i)^2+(y-y_i)^2) is the distance from a base station i of 2D Cartesian coordinates (x_i, y_i) to a tag of Cartesian coordinates ν=(x,y), P_0,i^dBm is the free-space average received power at the reference distance d_0,i, i =1...N, w_i ∼𝒩(0,σ_i^2). For simplicity in the following, we note P_i^dBm(ν) = P_i, d_0,i=1 m and P_0,i^dBm=P_0, ∀ i=1..N and d_i(ν) = d_i.
Prior to addressing more explicitly the radio map reconstruction problem (and its implications in terms of theoretical positioning performance), our goal here is first to determine empirically, from real measurement data, the key parameters of the path loss model introduced in Eq. <ref>, namely n, P^dBm_0 and σ, conditioned on the propagation conditions. The objective is two-fold. First, we want to observe in practice the global trends of the received signal strength in our concrete experimental context and set-up, as a function of both the transmission range and the operating conditions (e.g., the dispersion over line-of-sight (LoS) and non-line-of-sight (NLoS) conditions <cit.> or over serving base stations, the practical ranges for reaching receiver sensitivity or saturation, etc.). This will be helpful to qualitatively interpret the dominating factors impacting the received signal dynamics (as a function of space). Another point is to feed an analysis based on the evaluation of theoretical positioning performance bounds, while relying on synthetic models with representative radio parameters (see next section).
For this purpose, we perform Least Squares (LS) data fitting out of real field measurements from the Grenoble pre-processed datasets.
Let us denote by m^BS = (m_1^BS, m_2^BS, ..., m_K^BS)^⊺ the vector of K measured signal strength values for the corresponding base station BS at distances d_k^BS, and by 𝐏(η^BS)=𝐀^BS η^BS the vector of noise-free signal strength values calculated for the same distances d_k^BS based on Equation <ref>, where the vector η^BS gathers the path loss parameters of the base station and 𝐀^BS is the associated design matrix. For each base station BS, it hence comes:
m^BS = P(η^BS) + ϵ^BS,
where
ϵ_k^BS∼ N(0, (σ^BS)^2) are assumed to be independent and identically distributed residual noise terms (i.i.d.),
d_0^BS is the reference distance, d_k^BS= √((x^BS - x_k)^2+(y^BS - y_k)^2) is the distance from the base station of 2D Cartesian coordinates (x^BS,y^BS) to some tag position (x_k,y_k).
To compute the required model parameters, we thus need to solve the set of equations for all the measured points by minimizing the sum of squared errors, as follows:
(m^BS - A^BSη^BS)^⊺ (m^BS - A^BSη^BS) →min_η^BS
So the set of optimal parameters is calculated as follows :
η̂^BS = ((A^BS)^⊺A^BS)^-1(A^BS)^⊺m^BS
Retrospectively and in first approximation, residuals can be interpreted as noise in Equation <ref>, so that the standard deviation parameter σ^BS is simply determined with the optimal model parameters and Equation <ref>:
σ̂^BS = std(𝐀^BSη̂^BS - 𝐦^BS)
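As a concrete illustration, the estimator above reduces to an ordinary least-squares fit of two parameters per base station; a minimal NumPy sketch (function and variable names are ours):

```python
import numpy as np

def fit_path_loss(d, p_dbm, d0=1.0):
    """Least-squares fit of  P = P0 - 10 n log10(d / d0) + w.

    d, p_dbm: 1-D arrays of distances (m) and measured signal strengths (dBm).
    Returns (P0_hat, n_hat, sigma_hat).
    """
    A = np.column_stack([np.ones_like(d), -10.0 * np.log10(d / d0)])  # design matrix
    eta, *_ = np.linalg.lstsq(A, p_dbm, rcond=None)                   # eta = (P0, n)
    residuals = p_dbm - A @ eta
    return eta[0], eta[1], residuals.std()
```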
The results of this LS data fitting process for a few representative Base Stations of the Grenoble dataset
are reported in Tables <ref> and <ref>.
As it was mentioned above, we compared two settings with or without removing 10% of the cells with the highest variances of measured values per cell. It was done because of the possible errors or artefacts during the experimental data collection phase (e.g., erroneous position assignment due to satellite visibility conditions) or for a very small amount of collected data that were spotted to have non consistent values (e.g., due to tag failures, etc.). As we can see, removing even such a modest amount of those incriminated pathological cells can significantly impact the computation of the path loss model parameters ruling the deterministic dependency of average as a function of transmission range, while the standard deviation accounting for the dispersion of measurements around this average model remains unchanged.
By the way, for all the considered Base Stations, the standard deviation value std is observed to be high, which could be imputed to the fact that the underlying average path loss model is too inaccurate and can hardly account for so complex propagation phenomena.
Moreover, the amount of points is not so high (maximum around 10% of all the zone of interest), so that data fitting could be degraded by the sparseness and/or non-uniform distribution of the measurement points (with respect to the distance to the base station), which could lead to overweight the influence of some measurements in particular transmission range domains.
In Table <ref>, there are two sets of parameters for one of the base stations (namely "Bastille", BS_3), as we have identified two distinct zones in terms of topology (and hence two propagation regimes): (i) on the hill hosting the base station, with a larger angular distribution of the incoming radio signals from the tags, and (ii) in the rest of the city, where the direction of arrival of the signal at the base station does not vary much (and accordingly the receive antenna gain).
Moreover, the position of a base station may not cover all the area around it: similarly to BS_2, which is located on the Western border of the city, such a base station naturally serves tags whose transmitted signals arrive systematically from the same side of the city (hence with a reduced span of possible angles of arrival, which could somehow bias our extracted statistics).
Another remark is that the path loss exponent n can differ significantly from one base station to another and is usually larger in more complex environments (i.e., when denser buildings are present in the surroundings of the BS), inducing more probable NLoS propagation conditions and hence stronger shadowing effects even at short distances, which lead to lower received signal values on average as a function of the distance. But the dispersion around the fitted path loss model, as accounted for here by the standard deviation σ, is also very high on its own, primarily due to the relative inaccuracy of the single-slope path loss model, but also to the remaining effect of instantaneous received power fluctuations even after in-cell measurement averaging (e.g., caused by multipath under mobility, fast changing tag orientation during measurement collection, etc.).
In Figure <ref> below, we also show the overall distribution over 3 base stations in town (all except the BS “Bastille”, which again experiences a specific propagation regime due to the terrain elevation) as a function of the Tx-Rx distance, along with the superposed fitted function according to the path loss model. As expected, the dispersion accounted by the standard deviation is thus even worse here than that of previous BS-wise parameters extractions, while the other path loss parameters are close to their average values over the three considered base stations.
Illustration of the signal distribution along a street
As an illustration, we consider the longest street in LoS conditions served by the Bastille base station (Figure <ref>), where it is theoretically easier to model the behaviour of the signal distribution as a function of the transmission range, due to the absence of any big obstacles in the way of the signal. Considering again the path-loss model from Eq. <ref>, the parameters have been determined through data fitting and the resulting model is confronted with the measurements, as shown in Figure <ref>. As we can see, the classical log-distance path-loss model fits the collected data fairly well in terms of average trend, but still with a very large dispersion around the expected/predicted value, indicating also that conventional parametric model-based positioning (i.e., not based on fingerprinting) would be challenging despite the LoS conditions.
Since in this case we can model the signal behaviour, it is possible to compare the gradients given by this model with those of the possible interpolation methods and then use this comparison as one of the quality metrics in the map reconstruction problem; the results are shown in Section <ref>.
Datasets.
The sizes of the maps are 500 × 500 pixels for the generated Paris dataset, 700 × 700 pixels for the dataset collected in Antwerp and 368 × 368 for Grenoble. As a reminder, for Grenoble and Antwerp we aggregated and averaged the power of the collected measurements in cells/pixels of size 10 meters × 10 meters, using the distance from the base station location based on local ENU coordinates, while for the Paris dataset the cell size is 2 meters × 2 meters. As we also consider the generalization task, the algorithm should learn from the data of all the available base stations simultaneously.
In our settings, we only have access to several base stations, lacking several orders of magnitude in size compared to the aforementioned datasets. To artificially overcome this drawback, we created submatrices of the original images by cutting them into smaller ones (we used patches of 96 by 96 pixels because of memory issues for storing the model weights during the learning of the neural network). We also added flipped and mirrored images, and we used a shift of 20 pixels, meaning that in our dataset there was overlap between the images. Moreover, if the amount of pixels with measurements in the initial cropped image was high enough (more than 3% of the pixels), then we masked out a randomly sampled rectangle of present measurements, similarly to cutout regularization (<cit.>). By doing this we force the algorithm to do the reconstruction in zones without measurements (not only locally) and to be more robust to the amount of input data.
Matrices of the side information were used in the models as additional channels concatenated with measurements map.
Before feeding the data into the algorithm, all the values have been normalized between 0 and 1 in each channel separately before cutting them into smaller sizes to feed into the models.
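The patch construction can be sketched as follows; the patch size (96), shift (20 pixels), 3% density threshold and flip set follow the description above, while the size range of the hidden rectangle is an illustrative assumption.

```python
import numpy as np

def make_patches(rss_map, mask, patch=96, stride=20, dense_frac=0.03, rng=None):
    """Cut a (map, mask) pair into overlapping patches, add flips, and hide a
    random rectangle of known measurements in densely measured patches."""
    rng = np.random.default_rng(rng)
    out = []
    H, W = rss_map.shape
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            x = rss_map[r:r + patch, c:c + patch].copy()
            m = mask[r:r + patch, c:c + patch].copy()
            if m.mean() > dense_frac:            # enough measurements: mask out a block
                h, w = rng.integers(8, 33, size=2)
                rr, cc = rng.integers(0, patch - h), rng.integers(0, patch - w)
                m[rr:rr + h, cc:cc + w] = 0
            for a, b in ((x, m), (np.fliplr(x), np.fliplr(m)), (np.flipud(x), np.flipud(m))):
                out.append((a.copy(), b.copy()))
    return out
```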
Evaluation of the results over held out base stations
To evaluate the results, we left one base station of each city out of the initial set in order to compare the models' performance with the baselines, namely BS'_9 as the test base station for Antwerp and BS_4 for Grenoble. To do this, all the points were divided into two parts, namely train and test points, with 90% and 10% respectively.
This split will be used throughout all the following sections. Moreover, to highlight the importance of the zones close to the base station (as mentioned in the Introduction), we compare the performance of the algorithms over different circles around the base station location, with radii of 200 meters, 400 meters and 800 meters (see Figure <ref>).
Information about base stations for each city, amount of all points, points, that were used in the validation/test process, and training set are given in Table <ref>.
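The per-radius comparison described above amounts to a masked MAE over held-out cells; a short sketch with hypothetical argument names:

```python
import numpy as np

def mae_within_radius(pred, truth, test_mask, bs_rc, radius_m, cell_m=10.0):
    """MAE (dB) over held-out measured cells lying within radius_m of the BS."""
    rows, cols = np.indices(truth.shape)
    dist = np.hypot(rows - bs_rc[0], cols - bs_rc[1]) * cell_m
    sel = (test_mask > 0) & (dist <= radius_m)
    return float(np.abs(pred[sel] - truth[sel]).mean())
```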
We considered state-of-the-art interpolation approaches, namely: Total Variation (TV) in-painting, obtained by solving the corresponding optimization problem; Radial Basis Functions (RBF) <cit.> with a linear kernel, which was found to be the most efficient; and the k Nearest Neighbours (kNN) regression algorithm. The evolutionary algorithm in the model search phase was implemented using the package of <cit.>[<https://github.com/Pol22/NAS_DIP>]. All experiments were run on an NVIDIA GTX 1080 Ti 11GB GPU.
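For completeness, the RBF and kNN baselines can be sketched as follows with SciPy/scikit-learn implementations; the TV in-painting baseline, which requires a convex solver, is omitted, and the exact kernel and neighbour settings used in our experiments may differ.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.neighbors import KNeighborsRegressor

def interpolate_map(rss_map, mask, method="rbf", k=5):
    """Fill unmeasured cells of a 2D signal-strength map from measured ones."""
    known = np.argwhere(mask > 0)            # (row, col) coordinates of measured cells
    values = rss_map[mask > 0]
    unknown = np.argwhere(mask == 0)
    if method == "rbf":
        est = RBFInterpolator(known, values, kernel="linear")(unknown)
    else:
        est = KNeighborsRegressor(n_neighbors=k).fit(known, values).predict(unknown)
    filled = rss_map.copy()
    filled[mask == 0] = est
    return filled
```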
§ EXPERIMENTAL RESULTS
In our experiments, we are primarily interested in addressing the following two questions: (a) does the use of side contextual information aid in a more accurate reconstruction of the maps? and (b) to what extent is the search for an optimal design effective in the two scenarios considered (Section <ref>)?
Regarding the first point, we consider the following learning settings:
* given only the measurements (no side information),
* given both measurements and distance maps,
* given measurements, distances and elevation maps,
* given measurements, distances maps and map of amount of buildings on the way from base station to corresponding point in the map (or, in other words, buildings count).
From the standpoint of the application, accurate interpolation in the regions where the signal varies the most is critical. We will compare the cumulative errors across held-out pixels for each of the zones that are close enough to the test base station for the LoRa signal (by considering a fixed radius of 1 km).
In the following, we will present our findings using the model utilizing side information as a multi-channel input.
§.§ Generalization ability of the baseline model
We first study the learnability of the model of <cit.> for map reconstruction without the use of unlabeled data. In order to see whether there is an effect of using side information, we considered only distance maps as additional context information, and we used the model with the hand-crafted classical architecture shown in Figure <ref>, used for in-painting.
The goal of this early experiment is to validate the usage of this model for the task and to investigate the effects of the side information and of the labeled measurements. For this we consider the simplest case, the Paris dataset, where we keep only the points on the roads (as the points would be collected along the streets by vehicle drivers or pedestrians). The difference with the Grenoble and Antwerp datasets lies in the sampling procedure: in reality it is very hard to obtain data sampled uniformly in all regions, while in the Paris dataset this is the case. All the measurements also exist in the Paris dataset; this allows us to assess the importance of labeled information in the predictions by varying the percentage of labeled measurements in the training set.
Figure <ref> depicts the evolution of the error on measurements with respect to the distance to the test base station BS''_6 (Table <ref>) for the baseline interpolation, the model using only measurements (only msm) and the model using measurements and distance maps (msm+dist); the error decreases compared to the interpolation and the overall error becomes smaller. From these results, it comes that the baseline interpolation outperforms the model using only measurements in a circular zone of less than 150 m radius around the base station. However, when distance maps are added to the model's second channel, the situation is reversed: with the inclusion of side information, the model performs around 1 point better in MAE than the interpolation. This situation is illustrated by the map reconstruction ability of both models around the test base station BS''_6 in Figure <ref>.
As a result, these findings show that the model can effectively account for side-information. We examined the and models for the influence of labeled measurements on the predictions by altering the percentage of labeled data utilized by for discovering the interpolation and by the model for learning the parameters. With regard to this proportion, Figure <ref> displays the average 200 meters (left) and 400 meters (right) away from the test base station. The test error of the model on unseen test data remains constant as the quantity of labeled training data increases, but the test error of the model decreases as this number increases.
§.§ The use of unlabeled data taking into account side information with architecture search
We now expand our research to the real-world datasets from Grenoble and Antwerp, taking into account more side information and investigating the impact of neural architecture search on the creation of a better model. As the labeled training measurements are scarce in this case, we examine the usage of unlabeled data in addition to the labeled measurements, as described in the previous section.
We begin by envisaging scenario 1 of Algorithm <ref> (Section <ref>) and investigating the impact of side information on the performance of the optimized model discovered by the architecture search.
Figure <ref> shows the evolution of the MAE inside a circular zone of varying radius for f_θ^⋆_1 trained just on measurements and on measurements with distance maps (left), and for f_θ^⋆_1 trained on measurements, distance maps, and building counts or elevation (right), on the test base station of Antwerp, BS'_9. The use of distance maps to supplement measurements improves predictions, which is consistent with our earlier findings. When a third side information is included, either elevation or building counts, we find that the elevation yields better signal estimates than the latter. This is understandable because signal transmission can be severely attenuated by buildings, whose heights the elevation captures.
As a best model obtained by Algorithm <ref>, scenario 1 we consider the case with three input channels: measurements, distances and elevations and present comparative results with other baselines in Table <ref>. The lowest errors are shown in boldface. The symbol ^↓ denotes that the error is significantly greater than the best result using the Wilcoxon rank sum test with a p-value threshold of 0.01. According to these findings, f_θ^⋆_1 outperforms other state-of-the-art models as well as the model with a handcrafted architecture. These results suggest that the search of an optimal model with side-information has strong generalization ability for map reconstruction.
Figure <ref> depicts the average MAE in dB of all the models, as well as of the model f_θ^⋆_2 corresponding to scenario 2 of Algorithm <ref>, with respect to the distance to the test base station BS'_9 for the city of Antwerp. For distances between 200 and 400 meters, f_θ^⋆_2 consistently outperforms f_θ^⋆_1 in terms of MAE. As in <cit.>, these findings imply that self-training constitutes a promising future direction for map reconstruction.
Figure <ref> presents the average MAE in dB of all the models with respect to the distance to the test base station BS_4 for the city of Grenoble. These results are consistent with those obtained for the city of Antwerp.
The general conclusion that we can draw is that knowing about local patterns (even if from different locations/distributions/base stations) allows us to use this information in signal strength map reconstruction for application to unseen measurements from different base stations, demonstrating the ability to generalize output in the same area.
In order to get a finer-grained look at the estimations of the suggested technique, f_θ^⋆_2, Figure <ref> depicts the error heatmaps on circular zones of radius 200m and 400m surrounding the test base stations for Antwerp and Grenoble. Each point reflects the difference between the real and predicted signal values. For both cities, we notice that there is
* an overestimation of the signal (higher predicted values than the true ones) within the zone of radius less than 200 meters, where the values of the true signal are high. In absolute value, the average errors in dB are respectively 3.6 for Grenoble and 6.3 for Antwerp;
* an underestimation of the signal (lower predicted values than the true ones) within the zone of radius between 200 and 400 meters, where the values of the true signal are low. In absolute value, the average errors in dB are respectively 4.9 for Grenoble and 6.2 for Antwerp.
To better understand the aforementioned results, we provide the empirical cumulative distribution function of the absolute errors of the different techniques in a 200-meter zone around the test base stations in Grenoble (Figure <ref>, left) and Antwerp (Figure <ref>, right). From these results, it comes that the probability of having a smaller absolute error in dB is higher for both f_θ^⋆_1 and f_θ^⋆_2 than for the other approaches.
The primary takeaway from these findings is that searching for a neural network model with generalization capabilities can be useful for map reconstruction. To further investigate in this direction, we considered scenario 1 of Algorithm <ref> in which the training points of both cities are combined, with the goal of evaluating the model's ability to produce predictions for one of the cities. The average MAE in dB with respect to the distance to the base stations for the different approaches is shown in Figure <ref>.
According to these findings, the inclusion of signal data from another city disrupts the search for an efficient model and the learning of its parameters. This is most likely owing to the fact that the data distributions in these cities differ, and it would be interesting to study alignment strategies, such as those proposed for domain adaptation <cit.>, in order to narrow the gap between these distributions in future work.
§ CONCLUSION
In this paper we studied the importance of the use of additional side information in the search for an optimized architecture for map reconstruction over three different datasets. We have shown that adding the distance and the elevation of buildings to the measurements allows to significantly reduce the mean absolute error in dB of the model obtained with an optimized, searched architecture. Our proposed approach tends to outperform the learning-agnostic interpolation techniques, especially in the close zone near the test base stations. We have also shown that our search-based approach has good generalization ability. However, in situations where there exists a distribution shift between two maps, the prediction confidence given by the trained model may be highly biased and thus not reliable.
In reality, a significant difference between two different maps could lead to a complete degradation of the model's performance due to large errors in the pseudo-labels. In practice, approaches like confidence regularization <cit.> may reduce the number of wrong pseudo-labels, but theoretically, studying semi-supervised learning under a distribution shift is an important direction for future work.
|
http://arxiv.org/abs/2306.01659v1
|
20230602162825
|
Existence of a global attractor for the compressible Euler equation in a bounded interval
|
[
"Yun-guang Lu",
"Okihiro Sawada",
"Naoki Tsuge"
] |
math.AP
|
[
"math.AP"
] |
Global attractor of the compressible Euler equations
Existence of a global attractor for the compressible Euler equation in a bounded interval
School of Mathematics, Hangzhou Normal University, Hangzhou 311121, China.
[email protected]
Y.-G. Lu Lu's research is partially supported by the NSFC grant No.12071106 of China.
Faculty of
Engineering, Kitami Institute of Technology, 165 Koen-cho Kitami, Japan.
[email protected]
Department of Mathematics Education, Faculty of Education, Gifu University, 1-1 Yanagido, Gifu 501-1193, Japan.
[email protected]
N. Tsuge's research is partially supported by Grant-in-Aid for Scientific
Research (C) 17K05315, Japan.
Primary 35B41, 35L03, 35L65, 35Q31, 76N10, 76N15; Secondary 35A01, 35B35, 35B50, 35L60, 76M20.
Naoki Tsuge
In this paper, we are concerned with the one-dimensional initial
boundary value problem for isentropic gas dynamics.
Through the contributions of great researchers such as Lax, P. D., Glimm, J., DiPerna, R. J. and Liu, T. P., the decay theory of solutions was established. They treated the Cauchy problem, where the corresponding initial data have small total variation. On the other hand, the decay for initial data with large oscillation has been open for half a century. In addition, due to the reflection of shock waves at the boundaries, little is known about the decay for the boundary value problem on a bounded interval.
Our goal is to prove the existence of a global attractor, which
yields a decay of solutions for large data.
To construct approximate solutions, we introduce a modified Godunov scheme.
§ INTRODUCTION
The present paper is concerned with isentropic gas
dynamics in a bounded interval I=[0,1].
ρ_t+m_x=0,
m_t+(m^2/ρ+p(ρ))_x
=0,x∈(0,1), t∈(0,∞),
where ρ, m and p are the density, the momentum and the
pressure of the gas, respectively. If ρ>0,
v=m/ρ represents the velocity of the gas. For a barotropic gas,
p(ρ)=ρ^γ/γ, where γ∈(1,5/3] is the
adiabatic exponent for usual gases.
We consider the initial boundary value problem (<ref>)
with the initial and boundary data
(ρ,m)|_t=0=(ρ_0(x),m_0(x)) m|_x=0=m|_x=1=0.
The above problem (<ref>)–(<ref>) can be written in the following form
u_t+f(u)_x=0, x∈(0,1), t∈(0,∞),
u|_t=0=u_0(x),
m|_x=0=m|_x=1=0
by using u=^t(ρ,m), f(u)=^t(m, m^2/ρ+p(ρ)).
We survey the known results related to the above problem. The existence of solutions to conservation laws including (<ref>) was first established by Glimm <cit.>. Glimm studied the Cauchy problem
with initial data having small total variation. DiPerna <cit.>
proved the global existence of solutions to
(<ref>) by the vanishing viscosity method and a compensated compactness argument. We notice that this result can treat with the arbitrary L^∞ data.
The theory of decay for genuinely nonlinear 2×2 systems of conservation laws was established by Glimm-Lax <cit.>. Glimm-Lax showed that if initial data are constant outside a finite interval and have locally bounded total variation and small oscillation, then the total variation of the solution of <cit.> decays to zero.
The Glimm-Lax theory had been further developed by DiPerna and Liu:
general conservation laws with a convex entropy function <cit.>,
general conservation laws with small initial data in total variation <cit.>. However, the decay for initial data with large oscillation has been open for half a century. Recently, Tsuge <cit.>
obtained a decay estimate for large L^∞ data.
On the other hand, little is known about the decay for boundary value problems such as (<ref>), because it is difficult to treat the reflection of shock waves at both boundaries.
Our goal in the present paper is to investigate the decay structure of the boundary value problem (<ref>). We first introduce a modified Godunov scheme developed in <cit.>–<cit.> to construct approximate solutions. To deduce their convergence, we employ the compensated compactness framework developed by DiPerna. We next prove the existence of a global attractor, which yields decay estimates of solutions for initial data with large oscillation.
To state our main theorem, we define the Riemann invariants w,z, which play important roles
in this paper, as
w:=m/ρ+ρ^θ/θ=v+ρ^θ/θ, z:=m/ρ-ρ^θ/θ=v-ρ^θ/θ (θ=(γ-1)/2).
These Riemann invariants satisfy the following:
|w|≥|z| and w≥0 when v≥0; |w|≤|z| and z≤0 when v≤0;
v=(w+z)/2, ρ=(θ(w-z)/2)^1/θ, m=ρ v.
From the above, the lower bound of z and the upper bound of w yield the bound of ρ and |v|.
Moreover, we define the entropy weak solution.
A measurable function u(x,t) is called an entropy weak solution of the initial boundary value problem (<ref>) if
∫^1_0∫^∞_0ρ(φ_1)_t+m(φ_1)_xdxdt+∫^1_0ρ_0(x)φ_1(x,0)dx=0,
∫^1_0∫^∞_0m(φ_2)_t+(m^2/ρ+p(ρ))(φ_2)_x dxdt+∫^1_0
m_0(x)φ_2(x,0)dx=0
hold for any test function φ_1,φ_2∈ C^1_0([0,1]×[0,∞))
satisfying φ_2(0,t)=φ_2(1,t)=0 and
∫^1_0∫^∞_0η(u)ψ_t+q(u)ψ_xdxdt≥0
holds for any non-negative test function ψ∈ C^1_0((0,1)×(0,∞)), where
(η,q) is a pair of convex entropy–entropy flux of (<ref>).
We set
ρ̅=∫^1_0ρ_0(x)dx. Since ρ̅=0 implies that the initial data becomes vacuum, we assume ρ̅>0. In addition, we define the mechanical energy as
η_∗(u)=m^2/(2ρ)+ρ^γ/(γ(γ-1)).
We choose a positive constant μ small enough. We then set
η̅=∫^1_0η_∗(u_0(x))dx+μ, ν=((3γ-1)/(γ+1))(η̅/ρ̅),
K=ρ̅ν-η̅,
M_∞=(4/(3γ-1))(2γ^2(γ-1)/(3γ-1))^((γ+1)/(2(γ-1)))ν^((3γ-1)/(2(γ-1)))/(ρ̅ν-η̅)+2(νρ̅+η̅+K)/(γ-1).
We notice that K>0, if necessary, by choosing μ small enough.
Moreover, for any fixed positive constant ε', we define
z̃(x,t)=z(x,t)-ε'∫^x_0ζ(u(y,t))dy,
w̃(x,t)=w(x,t)-ε'∫^x_0ζ(u(y,t))dy,
where
ζ(u)=η_∗(u)-νρ+K.
From the conservation of mass and the energy inequality, we find that
∫^x_0ζ(u(y,t))dy≤ ∫^x_0 {η_∗(u(y,t))+νρ(y,t)+K}dy
≤ ∫^1_0 {η_∗(u(x,t))+νρ(x,t)+K}dy
≤ ∫^1_0 {η_∗(u_0(x))+νρ_0(x)+K}dy.
Our main theorem is as follows.
We assume that
ρ_0(x)≥0 a.e. x∈I, ρ_0∈ L^∞(I), m_0ρ_0∈ L^∞(I).
Then, there exists a global entropy weak solution of the initial boundary value problem (<ref>). Moreover, for any positive constant ε,
there exists positive constant t_0 such that the solution satisfies
-M_∞-ε≤z̃(x,t), w̃(x,t)≤ M_∞+ε, ρ(x,t)≥0,
a.e. (x,t)∈I×[t_0,∞),
where t_0 depends only on ε, ε' and the bound of initial data.
For simplicity, we set ε'=1 hereafter.
In this remark, we state the important conditions necessary to construct an invariant region at the boundary. This condition
will be used in Section 3 (<ref>).
We choose a positive constant M_0 such that
-M_0+∫^x_0ζ(u_0(y))dy≤ z(u_0(x)),
w(u_0(x))≤ M_0+∫^x_0ζ(u_0(y))dy.
Then, in the proof of Theorem <ref>, we will observe that there exists a continuous function
ℳ(t) such that ℳ(0)=M_0,
ℳ(t_0)=M_∞ and
-ℳ(t)+∫^x_0ζ(u(y,t))dy≤ z(u(x,t)),
w(u(x,t))≤ℳ(t)+∫^x_0ζ(u(y,t))dy.
Let the lower and upper bounds in (<ref>) be
L(x,t;u)=-ℳ(t)+∫^x_0ζ(u(y,t))dy,
U(x,t;u)=ℳ(t)+∫^x_0ζ(u(y,t))dy,
respectively. Then we notice that
-L(0,t;u)≤ U(0,t;u), -L(1,t;u)≥ U(1,t;u).
In fact, the former is clear. The latter is from (<ref>) and the energy inequality deduced as follows.
L(1,t;u)+U(1,t;u)= 2∫^1_0{η_∗(u(x,t))-νρ(x,t)+K}dx
= 2∫^1_0{η_∗(u(x,t))-νρ(x,t)+νρ̅-η̅}dx
= 2∫^1_0{(η_∗(u(x,t))-η_∗(u_0(x)))
-ν(ρ(x,t)-ρ̅)-μ}dx
≤ -2μ.
(<ref>) is a necessary condition that
(<ref>) holds for boundary data m=0.
§.§ Outline of the proof (formal argument)
The proof of the main theorem is a little complicated. Therefore,
before proceeding to the subject, let us grasp the point of the main estimate by a formal argument.
We assume that a solution is smooth and the density is nonnegative in this section.
We consider the physical region ρ≥0 (i.e., w≥ z.). Recalling Remark <ref>, it suffices to
derive the lower bound of z(u) and the upper bound of w(u) to obtain the bound of u. To do this, we diagonalize (<ref>).
If solutions are smooth, we deduce from (<ref>)
z_t+λ_1z_x=0,
w_t+λ_2w_x=0,
where λ_1 and λ_2 are the characteristic speeds defined as follows
λ_1=v-ρ^θ, λ_2=v+ρ^θ.
We introduce z̃,w̃,ρ̃,ṽ,λ̃_1,λ̃_2 as follows.
z=z̃+∫^x_0{η_∗(u)-νρ+K}dy, w=w̃+∫^x_0{η_∗(u)-νρ+K}dy,
ρ̃=(θ(w̃-z̃)/2)^1/θ, ṽ=(w̃+z̃)/2, λ̃_1=ṽ-ρ̃^θ, λ̃_2=ṽ+ρ̃^θ.
We denote the flux of η_∗(u) by
q_∗(u)=m(m^2/(2ρ^2)+ρ^(γ-1)/(γ-1)).
Then, from (<ref>), it holds that
(η_∗(u))_t+(q_∗(u))_x=0.
For δ=θ Kε/2, we define
ẑ=z̃-δ t, ŵ=w̃+δ t.
We then deduce from (<ref>)_1 and (<ref>) that
ẑ_t+λ_1 ẑ_x=g_1(x,t,u), ŵ_t+λ_2 ŵ_x=g_2(x,t,u),
where
g_1(x,t,u)= -Kλ_1+(1/(γ(γ-1)))ρ^(γ+θ)+(1/γ)ρ^γ v+(1/2)ρ^(θ+1)v^2-νρ^(θ+1)-δ,
g_2(x,t,u)= -Kλ_2-(1/(γ(γ-1)))ρ^(γ+θ)+(1/γ)ρ^γ v-(1/2)ρ^(θ+1)v^2+νρ^(θ+1)+δ.
On the other hand, we notice that
-M_0≤ẑ_0(x), ŵ_0(x)≤ M_0.
Our goal is to prove that
Ŝ_inv={(ẑ,ŵ)∈ R^2;-M_0≤ẑ, ŵ≤ M_0}
is an invariant region for 0≤ t≤ t_0, where t_0=
max{(M_0-M_∞-ε)/δ,0}.
We consider the case where M_0>M_∞+ε. To achieve this, assuming that
-M_0< ẑ_0(x), ŵ_0(x)< M_0
and there exist x_∗∈(0,1), 0<t_∗≤ t_0 such that
(<ref>) or (<ref>) holds, we shall
deduce a contradiction, where
-M_0<ẑ(x,t), ŵ(x,t)< M_0, x∈(0,1), 0≤ t<t_∗,
and ẑ(x_∗,t_∗)=-M_0, ŵ(x_∗,t_∗)≤ M_0,
-M_0<ẑ(x,t), ŵ(x,t)< M_0, x∈(0,1), 0≤ t<t_∗,
and -M_0≤ẑ(x_∗,t_∗), ŵ(x_∗,t_∗)=M_0.
To do this, we prove
g_1(x_∗,t_∗,u)>0, when (<ref>) holds,
g_2(x_∗,t_∗,u)<0, when (<ref>) holds.
From (<ref>), we notice ρ̃=ρ. We thus obtain
(ρ(x,t))^θ/θ= (ρ̃(x,t))^θ/θ
=(w̃(x,t)-z̃(x,t))/2
=(ŵ(x,t)-ẑ(x,t)-2δ t)/2
≤ M_0-δ t
and observe
λ_1=z+((3-γ)/(γ-1))ρ^θ
=z̃+∫^x_0ζ(u)dx+((3-γ)/(γ-1))ρ^θ,
λ_2=w-((3-γ)/(γ-1))ρ^θ
=w̃+∫^x_0ζ(u)dx-((3-γ)/(γ-1))ρ^θ.
For (x,t)=(x_∗,t_∗), since
M_0-δ t_∗≥ M_0-δ t_0=M_∞
+ε, recalling δ=θ Kε/2 and (<ref>), we deduce from (<ref>) and (<ref>) that
g_1(x,t,u) = -K(z̃+∫^x_0ζ(u)dx+((3-γ)/(γ-1))ρ^θ)
+(1/2)ρ^(θ+1)(v+ρ^θ/γ)^2
+((γ+1)/(2γ^2(γ-1)))ρ^(γ+θ)
-νρ^(θ+1)-δ
≥ K(M_0-δ t_∗)-K(νρ̅+η̅+K)
-((3-γ)/(γ-1))θ K(M_0-δ t_∗)
+min_ρ{((γ+1)/(2γ^2(γ-1)))ρ^(γ+θ)-νρ^(θ+1)}-δ
≥ θ K(M_∞+ε)-K(νρ̅+η̅+K)
-(2(γ-1)/(3γ-1))(2γ^2(γ-1)/(3γ-1))^((γ+1)/(2(γ-1)))ν^((3γ-1)/(2(γ-1)))-δ
= δ
> 0.
On the other hand, since ẑ attains its minimum at (x,t)=(x_∗,t_∗), we find that
ẑ_t(x_∗,t_∗)≤0, ẑ_x(x_∗,t_∗)=0. Then, from (<ref>)_1,
we have g_1(x,t,u)≤0 at (x,t)=(x_∗,t_∗), which contradicts (<ref>).
For (x,t)=(x_∗,t_∗), we can similarly obtain
g_2(x,t,u) = -K(w̃+∫^x_0ζ(u)dx-3-γγ-1ρ^θ)-ρ^θ+12(v-ρ^θ/γ)^2
-γ+12γ^2(γ-1)ρ^γ+θ
+νρ^θ+1+δ
≤ -K(M_0-δ t_∗)+K(νρ̅+η̅+K)
+3-γγ-1θ K(M_0-δ t_∗)
+max_ρ{-γ+12γ^2(γ-1)ρ^γ+θ+νρ^θ+1}+δ
≤ -θ K(M_∞
+ε)+K(νρ̅+η̅+K)
+2(γ-1)3γ-1(2γ^2(γ-1)3γ-1)^γ+1/2(γ-1)ν^3γ-1/2(γ-1)+δ
= -δ
< 0.
Since ŵ attains its maximum at (x,t)=(x_∗,t_∗), we find that
ŵ_t(x_∗,t_∗)≥0, ŵ_x(x_∗,t_∗)=0. Then, from (<ref>)_2,
we have g_2(x,t,u)≥0 at (x,t)=(x_∗,t_∗), which contradicts (<ref>).
(<ref>) implies that (z̃(x,t_0),
w̃(x,t_0)) is contained in
S̃_inv={(z̃,w̃)∈ R^2;-M_∞-ε≤z̃, w̃≤ M_∞+ε}.
In addition, we find that S̃_inv is an invariant region in a similar manner to (<ref>). Therefore, we
can prove (<ref>).
The present paper is organized as follows.
In Section 2, we construct approximate solutions by
the Godunov scheme mentioned above. In Section 3, we derive the bound and decay estimates of our approximate solutions.
§ CONSTRUCTION OF APPROXIMATE SOLUTIONS
In this section, we construct approximate solutions. In the strip
0≤t≤ [[T]]+1 for any fixed positive constant T, we denote these
approximate solutions by u^(x,t)
=(ρ^(x,t),m^(x,t)), where [[T]] is the greatest integer
not greater than T.
For N_x∈ N, we define the space mesh
length by Δx=1/(2N_x). We take the time mesh length Δt such that
Δx/Δt=2[
[max{M_0,M_∞+ε}+η̅+νρ̅+K]]+1,
where [[x]] is the greatest integer
not greater than x. Then we define N_t=([[T]]+1)/(2Δt)∈ N.
In addition,
we set
(j,n)∈ N_x× N_t,
where N_x={1,3,5,…,2N_x-1} and N_t={0,1,2,…,2N_t}. For simplicity, we use the following terminology:
x_j=jΔx, t_n=nΔt, t_n.5=(n+1/2)Δt,
t_n-=nΔt-0, t_n+=nΔt+0.
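For orientation, the mesh construction just described can be sketched as follows (a minimal Python sketch; the function name and the evaluation of the greatest-integer bracket [[·]] via math.floor are our illustrative choices):

import math

def build_mesh(N_x, T, M_0, M_inf, eps, eta_bar, nu, rho_bar, K):
    # Space mesh length dx = 1/(2 N_x); time mesh length dt chosen so that
    # dx/dt = 2[[ max{M_0, M_inf+eps} + eta_bar + nu*rho_bar + K ]] + 1.
    dx = 1.0 / (2 * N_x)
    ratio = 2 * math.floor(max(M_0, M_inf + eps) + eta_bar + nu * rho_bar + K) + 1
    dt = dx / ratio
    N_t = int((math.floor(T) + 1) / (2 * dt))
    x_cells = [j * dx for j in range(1, 2 * N_x, 2)]    # cell centers x_j, j odd
    t_levels = [n * dt for n in range(2 * N_t + 1)]     # time levels t_n, n = 0,...,2N_t
    return dx, dt, x_cells, t_levels
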
First we define u^(x,-0) by u^(x,-0)=u_0(x). Then, for j∈ N_x, we denote E_j^0(u) by
E_j^0(u)=1/2x∫^x_j+1_x_j-1
u^(x,-0)dx.
Next, assume that u^(x,t) is defined for t<t_n.
Then, for j∈ N_x, we denote E^n_j(u) by
E^n_j(u)=1/2x∫_x_j-1^x_j+1u^(x,t_n-)dx.
Let E^n(x;u) be a piecewise constant function defined by
E^n(x;u)=
E^n_j(u), x∈ [x_j-1,x_j+1) (j∈ N_x).
To define u_j^n=(ρ_j^n,m_j^n) for j∈ N_x, we first introduce the symbols I^n_j and L_n. Let the approximation of ζ(u) be
I^n_j:=∫^x_j-1_0ζ(E^n(x;u))dx
+1/2∫_x_j-1^x_j+1ζ(E^n(x;u))dx
=∫^x_j_0ζ(E^n(x;u))dx,
where ζ is defined in (<ref>).
Let 𝒟=(x(t),t) denote a discontinuity in
u^(x,t), [η_∗] and [q_∗]
denote the jump of η_∗(u^(x,t)) and q_∗(u^(x,t)) across 𝒟 from
left to right, respectively,
[η_∗]=η_∗(u^(x(t)+0,t))-η_∗(u^(x(t)-0,t)),
[q_∗]=q_∗(u^(x(t)+0,t))-q_∗(u^(x(t)-0,t)),
where q_∗(u) is defined in (<ref>).
To measure the error in the entropy condition and the gap of the
energy at t_n±, we introduce the following functional.
L_n= ∫^t_n_0∑_0≤ x≤ 1(σ[η_∗]-[q_∗])dt
+∑^n_k=0∫^1_0{η_∗(u^(x,t_k-0))-η_∗(E^k(x;u))}dx
+∑^n_k=0∑_j∈ J_k1/2x∫^x_j+1_x_j-1∫^x_x_j-1
R^k_j(y)dydx,
where
R^n_j(x)= ∫^1_0(1-τ)·^t(u^(x,t_n-)-E^n(x;u))
×∇^2η_∗(E^n(x;u)+τ{u^(x,t_n-)-E^n(x;u)})(u^(x,t_n-)-E^n(x;u))dτ
and the summation in ∑_0≤ x≤1
is taken over all discontinuities in u^(x,t) at a fixed time t over
x∈[0,1],
and σ is the propagation speed of the discontinuities.
From the entropy condition, σ[η_∗]-[q_∗]≥0. From the Jensen inequality,
∫^1_0{η_∗(u^(x,t_n-0))-η_∗(E^n(x;u))}dx≥0. Therefore, we find that L_n≥0.
Using I^n_j and L_n, we define u_j^n as follows. First, we define a sequence {M_n}_n∈ N_t with the initial term M_0 as follows.
M_n+1=
M_n-δt, when M_n+L_n≥ M_∞+ε,
M_n, when M_n+L_n<M_∞+ε.
We notice that M_n+L_n≥
M_∞+ε-δt.
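In the notation of the scheme, this update of M_n amounts to the following (a minimal Python sketch, with dt standing for the time mesh length and delta = θKε/2):

def update_M(M_n, L_n, M_inf, eps, delta, dt):
    # M_{n+1} = M_n - delta*dt  if M_n + L_n >= M_inf + eps,
    # M_{n+1} = M_n             otherwise.
    if M_n + L_n >= M_inf + eps:
        return M_n - delta * dt
    return M_n
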
Next, we choose β such that 1<β<1/(2θ). If
E^n_j(ρ):=
1/2x∫_x_j-1^x_j+1ρ^(x,t_n-)dx<(x)^β,
we define u_j^n by u_j^n=(0,0);
otherwise, setting
z_j^n:=
max{z(E_j^n(u)), -M_n-L_n+I^n_j},
w_j^n:=min{w(E_j^n(u)), M_n+L_n+I^n_j}
,
we define u_j^n by
u_j^n:=(ρ_j^n,m_j^n):=(ρ_j^n,ρ_j^nv^n_j)
:=({θ(w_j^n-z_j^n)/2}
^1/θ,
{θ(w_j^n-z_j^n)/2}^1/θw_j^n+z_j^n/2).
We find
-M_n-L_n+I^n_j≤ z(u_j^n), w(u_j^n)≤ M_n+L_n+I^n_j.
This implies that we cut off the parts where
z(E_j^n(u))<-M_n-L_n+I^n_j
and w(E_j^n(u))>M_n+L_n+I^n_j
in defining z(u_j^n) and
w(u_j^n).
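The cut-off step in the definition of u_j^n can be made concrete by the following Python sketch (the cell average E_j^n(u)=(E_rho, E_m), the quantities M_n, L_n, I_j^n, the mesh length dx, and the exponent beta of the vacuum threshold are assumed to be given; θ=(γ-1)/2 as before, and the function name is ours):

def godunov_cell_state(E_rho, E_m, M_n, L_n, I_nj, dx, beta, gamma):
    theta = 0.5 * (gamma - 1.0)
    # Vacuum cut-off: if the averaged density is below (dx)^beta, set u_j^n = (0, 0).
    if E_rho < dx**beta:
        return 0.0, 0.0
    v = E_m / E_rho
    z_avg = v - E_rho**theta / theta
    w_avg = v + E_rho**theta / theta
    # Cut off the parts violating the generalized invariant-region bounds.
    z_j = max(z_avg, -M_n - L_n + I_nj)
    w_j = min(w_avg,  M_n + L_n + I_nj)
    rho_j = (theta * (w_j - z_j) / 2.0) ** (1.0 / theta)
    m_j = rho_j * (w_j + z_j) / 2.0
    return rho_j, m_j
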
We must construct our approximate solutions u^(x,t) near the boundary and in the interior domain. Here we recall (<ref>). This condition is necessary so that the inequality (<ref>) holds even if a shock wave appears at the boundary x=1. To see this, we focus on the construction near the boundary x=1.
For the construction in the interior domain, refer to <cit.>.
We then assume that
approximate solutions u^(x,t) are defined in domains D_1:
t<t_n (
n∈ N_t) and D_2:
x<x_2N_x-1, t_n≤t<t_n+1.
By using u_j^n defined above and u^(x,t) defined in D_2, we
construct the approximate solutions in the cell D: t_n≤t<t_n+1
(n∈ N_t), x_2N_x-1≤x<x_2N_x.
We denote u^n_2N_x-1 by u_-=(ρ_-,m_-)=(ρ_-,ρ_-v_-) and solve the Riemann initial boundary value problem (<ref>) and
u|_t=t_n=u_-, m|_x=1=0
in D. We draw a diagram by using
the wave curve of the first
family and the vacuum as follows (see <cit.> and Figure <ref>):
Case 1 If ρ_->0 and v_-≥0, there exists u_+=(ρ_+,m_+)=(ρ_+,ρ_+v_+) with v_+=0 which is connected to u_- by a 1-shock curve.
Case 2 If v_-≤0 and w(u_-)≥0, then there exists u_+ with v_+=0 which is connected to u_- by a 1-rarefaction curve.
Case 3 If v_-≤0 and w(u_-)≤0, then there exists u_* with ρ_*=0
which is connected to u_- by a 1-rarefaction wave, and u_* and u_+ with
ρ_+=v_+=0 are connected
by the vacuum.
Case 4 If v_-≥0 and ρ_-=0, then u_+ with
ρ_+=v_+=0 is connected to u_- by the vacuum.
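The four cases above amount to a simple classification of the boundary data; the following Python sketch only decides which wave pattern is used at x=1 and does not compute the intermediate states themselves (the case labels match the list above, and w(u_-)=v_-+ρ_-^θ/θ with θ=(γ-1)/2 is assumed):

def boundary_riemann_case(rho_minus, v_minus, gamma):
    theta = 0.5 * (gamma - 1.0)
    if rho_minus > 0.0 and v_minus >= 0.0:
        return "Case 1: 1-shock connecting u_- to a state with v_+ = 0"
    w_minus = v_minus + (rho_minus**theta / theta if rho_minus > 0.0 else 0.0)
    if v_minus <= 0.0 and w_minus >= 0.0:
        return "Case 2: 1-rarefaction connecting u_- to a state with v_+ = 0"
    if v_minus <= 0.0 and w_minus <= 0.0:
        return "Case 3: 1-rarefaction down to the vacuum, then the vacuum"
    return "Case 4: the vacuum (rho_- = 0, v_- >= 0)"
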
§.§ Case 1: the case where a shock wave arises
In this case,
the solution u_ R(x,t) of (<ref>) and (<ref>) is as follows.
u_ R(x,t)=
u_-, D∩{x-x_2N_x≤σ_s(t-t_n)},
u_+, D∩{x-x_2N_x>σ_s(t-t_n)},
where σ_s is the speed of 1-shock wave.
We next replace the above constant states u_-, u_+ with functions of x and t as follows:
In view of (<ref>), we construct u^_-(x,t). We first determine the approximation of z̃,w̃ in (<ref>)
as follows.
z̃^_-
= z_- -∫^x_2N_x-1_0ζ(u^_n,0(x))dx, w̃^_-=w_- -∫^x_2N_x-1_0ζ(u^_n,0(x))dx,
where u^_n,0(x)
is a piecewise constant function defined by
u^_n,0(x)=
u^n_j, x∈ [x_j-1,x_j+1) (j∈ N_x).
We set
ž^_-(x,t)= z̃^_-
+∫^x_2N_x-1_0ζ(u^_n,0(x))dx
+∫^x_x_2N_x-1ζ(u_-)dy
+{g_1(x,t;u_-)+V(u_-)}(t-t_n)
,
w̌^_-(x,t)= w̃^_-
+ ∫^x_2N_x-1_0ζ(u^_n,0(x))dx+∫^x_x_2N_x-1ζ(u_-)dy
+{g_2(x,t;u_-)+V(u_-)}
(t-t_n),
where g_1 and g_2 are defined in (<ref>),
V(u)=q_∗(u)-ν m.
Using ǔ^_-(x,t), we next define u^_-(x,t) as follows.
z^_-(x,t)= z̃^_-
+∫^x_2N_x-1_0ζ(u^_n,0(x))dx+∫^x_x_2N_x-1ζ(ǔ^_-(y,t))dy
+
{g_1(x,t;ǔ^_-)+V(u_-)}(t-t_n)
,
w^_-(x,t)= w̃^_-
+∫^x_2N_x-1_0ζ(u^_n,0(x))dx+∫^x_x_2N_x-1ζ(ǔ^_-(y,t))dy
+{g_2(x,t;ǔ^_-)+V(u_-)}(t-t_n).
* We notice that approximate solutions z^_-,w^_-
and z̃^_-,w̃^_- correspond to
z,w and z̃,w̃ in (<ref>), respectively.
* For t_n<t<t_n+1, our approximate solutions will satisfy
∫^x_2N_x-1_0 ζ(u^(x,t_n+1-))dx+∫^t_n+1_t_n∑_0≤ x≤ x_2N_x-1(σ[η_∗]-[q_∗])dt
=∫^x_2N_x-1_0ζ(u^_n,0(x))dx+V(u_-)t
+o(x).
* Our construction of approximate solutions uses the iteration method twice (see (<ref>) and (<ref>))
to deduce (<ref>).
We first set
z̃^_+
=z_+- ∫^x_2N_x_0ζ(u^_n,0(x))dx, w̃^_+
=w_+- ∫^x_2N_x_0ζ(u^_n,0(x))dx,
where z_+=-(ρ_+)^θ/θ, w_+=(ρ_+)^θ/θ.
We next construct ǔ^_+
ž^_+(x,t) = z̃^_+
+ ∫^x_2N_x_0ζ(u^_n,0(x))dx
+∫^x_x_2N_xζ(u_+)dy
+g_1(x,t;u_+)(t-t_n)
,
w̌^_+(x,t) = w̃^_+
+∫^x_2N_x_0ζ(u^_n,0(x))dx+∫^x_x_2N_xζ(u_+)dy
+g_2(x,t;u_+)(t-t_n).
Using ǔ^_+(x,t), we define u^_+(x,t) as follows.
z^_+(x,t) = z̃^_+
+ ∫^x_2N_x_0ζ(u^_n,0(x))dx+∫^x_x_2N_xζ(ǔ_+(y,t))dy
+g_1(x,t;ǔ_+)(t-t_n)
,
w^_+(x,t) = w̃^_+
+ ∫^x_2N_x_0ζ(u^_n,0(x))dx+∫^x_x_2N_xζ(ǔ_+(y,t))dy
+g_2(x,t;ǔ_+)(t-t_n).
Then, we define approximate solution u^(x,t)
in D as follows (see Figure <ref>).
u^(x,t)=
u^_-(x,t), D∩{x-x_2N_x≤σ_s(t-t_n)},
u^_+(x,t), D∩{x-x_2N_x>σ_s(t-t_n)}.
§.§ Case 2: the case where a rarefaction wave arises
Let α be a constant satisfying 1/2<α<1. Then we can choose
a positive value β small enough such that β<α, 1/2+β/2<α<
1-2β, β<2/(γ+5) and (9-3γ)β/2<α.
Step 1.
In order to approximate a 1-rarefaction wave by a piecewise
constant rarefaction fan, we introduce the integer
p:=max{[[(z_+-z_-)/(x)^α]
]+1,2},
where z_-=z(u_-),z_+=z(u_+) and [[x]] is the greatest integer
not greater than x. Notice that
p=O((x)^-α).
Define
z_1^*:=z_-, z_p^*:=z_+, w_i^*:=w_- (i=1,…,p),
and
z_i^*:=z_2N_x-1+(i-1)(x)^α (i=1,…,p-1).
We next introduce the rays x=1+λ_1(z_i^*,z_i+1^*,w_-)
(t-nt) separating finite constant states
(z_i^*,w_i^*) (i=1,…,p),
where
λ_1(z_i^*,z_i+1^*,w_-):=v(z_i^*,w_-)
-S(ρ(z_i+1^*,w_-),ρ(z_i^*,w_-)),
ρ_i^*:=ρ(z_i^*,w_-):=(θ(w_- -z_i^*)/2)^1/θ ,
v_i^*:=v(z_i^*,w_-):=(w_-+z_i^*)/2
and
S(ρ,ρ_0):={[ √(ρ(p(ρ)-p(ρ_0))/ρ_0(ρ-ρ_0))
, ρ≠ρ_0,; √(p'(ρ_0)), ρ=ρ_0. ].
We call this approximated 1-rarefaction wave a 1-rarefaction fan.
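Step 1 can be sketched numerically as follows (a minimal Python sketch producing the intermediate invariants z_i^* and the separating speeds λ_1(z_i^*,z_i+1^*,w_-); the pressure p(ρ)=ρ^γ/γ is assumed, so that √(p'(ρ))=ρ^θ, consistent with the characteristic speeds used above):

import math

def S(rho, rho0, gamma):
    # S(rho, rho_0) as defined above, with p(rho) = rho^gamma / gamma
    if abs(rho - rho0) < 1e-14:
        return math.sqrt(rho0**(gamma - 1.0))
    p = lambda r: r**gamma / gamma
    return math.sqrt(rho * (p(rho) - p(rho0)) / (rho0 * (rho - rho0)))

def rarefaction_fan(z_minus, z_plus, w_minus, dx, alpha, gamma):
    theta = 0.5 * (gamma - 1.0)
    rho_of = lambda z: (theta * (w_minus - z) / 2.0) ** (1.0 / theta)
    v_of = lambda z: (w_minus + z) / 2.0
    # p = max([[(z_+ - z_-)/(dx)^alpha]] + 1, 2) intermediate states
    p = max(int((z_plus - z_minus) / dx**alpha) + 1, 2)
    z_star = [z_minus + (i - 1) * dx**alpha for i in range(1, p)] + [z_plus]
    speeds = [v_of(z_star[i]) - S(rho_of(z_star[i + 1]), rho_of(z_star[i]), gamma)
              for i in range(p - 1)]
    return z_star, speeds
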
Step 2.
In this step, we replace the above constant states with functions of x and t as follows:
In view of (<ref>), we construct u^_1(x,t). We first determine the approximation of z̃,w̃ in (<ref>)
as follows.
z̃^_1
= z_- -∫^x_2N_x-1_0ζ(u^_n,0(x))dx, w̃^_1=w_- -∫^x_2N_x-1_0ζ(u^_n,0(x))dx.
We set
ž^_1(x,t)= z̃^_1
+∫^x_2N_x-1_0ζ(u^_n,0(x))dx
+∫^x_x_2N_x-1ζ(u_-)dy
+{g_1(x,t;u_-)+V(u_-)}(t-t_n)
,
w̌^_1(x,t)= w̃^_1
+ ∫^x_2N_x-1_0ζ(u^_n,0(x))dx+∫^x_x_2N_x-1ζ(u_-)dy
+{g_2(x,t;u_-)+V(u_-)}
(t-t_n).
Using ǔ^_1(x,t), we next define u^_1(x,t) as follows.
z^_1(x,t)= z̃^_1
+∫^x_2N_x-1_0ζ(u^_n,0(x))dx+∫^x_x_2N_x-1ζ(ǔ^_1(y,t))dy
+
{g_1(x,t;ǔ^_1)+V(u_-)}(t-t_n),
w^_1(x,t)= w̃^_1
+∫^x_2N_x-1_0ζ(u^_n,0(x))dx+∫^x_x_2N_x-1ζ(ǔ^_1(y,t))dy
+{g_2(x,t;ǔ^_1)+V(u_-)}(t-t_n).
First, by the implicit function theorem, we determine a propagation speed σ_2 and u_2=(ρ_2,m_2) such that
(1.a) z_2:=z(u_2)=z^*_2
(1.b) the speed σ_2, the left state u^_1(x^_2(t_n.5),t_n.5) and the right state u_2 satisfy the Rankine–Hugoniot conditions, i.e.,
f(u_2)-f(u^_1(x^_2(t_n.5),t_n.5))=σ_2(u_2-u^_1(x^_2(t_n.5),t_n.5)),
where x^_2(t)
=1+
σ_2(t-t_n). Then we fill up by u^_1(x,t) the sector where t_n≤t<t_n+1,x_2N_x-1≤x<x^_2(t) (see Figure <ref>)
.
Assume that u_k, u^_k(x,t), a propagation speed σ_k and x^_k(t) are defined. Then we similarly determine
σ_k+1 and u_k+1=(ρ_k+1,m_k+1) such that
(k.a) z_k+1:=z(u_k+1)=z^*_k+1,
(k.b) σ_k<σ_k+1,
(k.c) the speed
σ_k+1,
the left state u^_k(x^_k+1(t_n.5),t_n.5) and the right state u_k+1 satisfy
the Rankine–Hugoniot conditions,
where x^_k+1(t)=1+σ_k+1(t-t_n). Then we fill up by u^_k(x,t) the sector where t_n≤t<t_n+1,x^_k(t)≤x<x^_k+1(t)
.
We construct u^_k+1(x,t) as follows.
We first determine
z̃^_k+1= z_k+1-∫^x_2N_x-1_0ζ(u^_n,0(x))dx-V(u_-)t/2
-∑^k_l=1∫^x^_l+1(t_n.5)_x^_l(t_n.5)ζ(u^_l(x,t_n.5))dx
,
w̃^_k+1= w_k+1-∫^x_2N_x-1_0ζ(u^_n,0(x))dx-V(u_-)t/2
-∑^k_l=1∫^x^_l+1(t_n.5)_x^_l(t_n.5)ζ(u^_l(x,t_n.5))dx
,
where x^_1(t)=x_2N_x-1, x^_l(t)=1+σ_l(t-t_n)
(l=2,3,…,k+1) and t_n.5 is defined in (<ref>).
We next define ǔ^_k+1 as follows.
ž^_k+1(x,t)= z̃^_k+1+∫^x_2N_x-1_0ζ(u^_n,0(x))dx+V(u_-)(t-t_n)
+∑^k_l=1∫^x^_l+1(t)_x^_l(t)ζ(u^_l(x,t))dx+∫^x_x^_k+1(t)ζ(u_k+1)dy
+g_1(x,t;u_k+1)(t-t_n.5)
+∫^t_t_n.5∑_x_2N_x-1≤ y ≤ x(σ[η_∗]-[q_∗])ds
,
w̌^_k+1(x,t)= w̃^_k+1
+ ∫^x_2N_x-1_0ζ(u^_n,0(x))dx+V(u_-)(t-t_n)
+∑^k_l=1∫^x^_l+1(t)_x^_l(t)ζ(u^_l(x,t))dx
+∫^x_x^_k+1(t)ζ(u_k+1)dy
+g_2(x,t;u_k+1)(t-t_n.5)
+∫^t_t_n.5∑_x_2N_x-1≤ y ≤ x(σ[η_∗]-[q_∗])ds.
Finally, using ǔ^_k+1(x,t), we define u^_k+1(x,t) as follows.
z^_k+1(x,t) = z̃^_k+1+ ∫^x_2N_x-1_0ζ(u^_n,0(x))dx+V(u_-)(t-t_n)
+∑^k_l=1∫^x^_l+1(t)_x^_l(t)ζ(u^_l(x,t))dx
+∫^x_x^_k+1(t)ζ(ǔ^_k+1(y,t))dy
+g_1(x,t;ǔ^_k+1)(t-t_n.5)
+∫^t_t_n.5∑_x_2N_x-1≤ y ≤ x(σ[η_∗]-[q_∗])ds
,
w^_k+1(x,t) = w̃^_k+1
+ ∫^x_2N_x-1_0ζ(u^_n,0(x))dx+V(u_-)(t-t_n)
+∑^k_l=1∫^x^_l+1(t)_x^_l(t)ζ(u^_l(x,t))dx+∫^x_x^_k+1(t)ζ(
ǔ^_k+1(y,t))dy
+g_2(x,t;ǔ^_k+1)(t-t_n.5)
+∫^t_t_n.5∑_x_2N_x-1≤ y ≤ x(σ[η_∗]-[q_∗])ds.
By induction, we define u_i, u^_i(x,t) and σ_i (i=1,…,p-1).
Finally, we determine a propagation speed σ_p and u_p=(ρ_p,m_p) such that
(p.a) z_p:=z(u_p)=z^*_p,
(p.b) the speed σ_p,
and the left state u^_p-1(x^_p(t_n.5),t_n.5) and the right state u_p satisfy the Rankine–Hugoniot conditions,
where x^_p(t)=1+σ_p(t-t_n).
We then fill up by u^_p-1(x,t) and u_p the sector where
t_n≤t<t_n+1,x^_p-1(t)
≤x<x^_p(t)
and the line t_n≤t<t_n+1,x=x^_p(t), respectively.
Given u_- and z_+ with z_-≤z_+, we denote
this 1-rarefaction wave, consisting of piecewise functions of x and t, by
R_1^(u_-,z_+,x,t).
Finally, we construct u^_p(x,t) with
u^_p(1,t_n)=u_+
in a similar manner to u^_+(x,t) in
Case 1 and fill up by u^_p(x,t) the sector where
t_n≤t<t_n+1,x^_p(t)
≤x≤1.
§.§ Case 3: the case where a rarefaction wave and the vacuum arise
In this case, we consider the case where ρ_+≤(x)^β,
which means that u_+ is near the vacuum. Here, we cannot construct
approximate solutions in a similar fashion to Cases 1–2. Therefore,
we must
define u^(x,t) in a different way.
Case 3.1
ρ_->(x)^β
Let u^(1)_- be a state satisfying w(u_-^(1))=w(u_-) and
ρ^(1)_-=(x)^β.
(i) z(u_+)-z(u^(1)_-)≤(x)^α
Notice that w(u_+)=w(u_-)=w(u^(1)_-).
Then there exists C>0 such that ρ^(1)_- -ρ_+≤C(x)^α. Since α>β, we then have
ρ_+≥3(x)^β/4.
This case is reduced to the case 2.
(ii) z(u_+)-z(u^(1)_-)>(x)^α
Set
z̅:=-M_n+1-L_n+ ∫^x_2N_x-1_0ζ(u^_n,0(x))dx+V(u_-)t
+∫^x_2N_x_x_2N_x-1{η_∗(u_-)+K}dx.
Let u^(2)_- be a state connected to u_- on the right by
R_1^(max{z^(1)_-,z̅})(u_-). Connecting the left
and right states u^(2)_-, u_+ with ρ_+=v_+=0 by a rarefaction curve and the vacuum, we construct a Riemann solution (u^(2)_-,u_+).
Then, in the region where u^(x,t) is R_1^(max{z^(1)_-,z̅})(u_-),
the definition of
u^(x,t) is similar to Case 2. In the other region, we
define u^(x,t) by the Riemann solution (u^(2)_-,u_+) itself.
Case 3.2 ρ_-≤(x)^β
(i) z(u_-)≥z̅
In this case, we define u^(x,t) as a Riemann solution
(u_-,u_+).
(ii) z(u_-)<z̅
Set
w̅:=M_n+1+ L_n+∫^x_2N_x-1_0ζ(u^_n,0(x))dx+V(u_-)t
-∫^x_2N_x_x_2N_x-1νρ_-dx.
Let λ_1(u_-) be the
1-characteristic speed of u_-.
In the region where
t_n≤t<t_n+1 and x_2N_x-1≤x≤ 1+λ_1(u_-)(t-t_n),
we define u̅^(x,t) in a similar manner to
u̅^_1(x,t) in Case 2.
We next take u^(3)_- such that z(u^(3)_-)=max{z_-,z̅} and
w(u^(3)_-)=min{w_-,w̅}. We then solve a Riemann problem
(u^(3)_-,u_+). In the region where
t_n≤t<t_n+1 and 1+λ_1(u_-)(t-t_n)<x≤ x_2N_x,
we define u̅^(x,t) as this Riemann solution.
The approximate solution u^(x,t) is piecewise smooth in each of the
divided parts of the cell. Then, in each divided part, u^(x,t) satisfies
(u^)_t+f(u^)_x=o(1).
§ THE L^∞ ESTIMATE OF THE APPROXIMATE SOLUTIONS
We deduce from (<ref>) the following
theorem:
For x_2N_x-1≤ x≤ 1,
2 z^(x,t_n+1-) ≥ -M_n+1-L_n
+∫^x_0ζ(u^(y,t_n+1-))dy- o(x),
w^(x,t_n+1-)
≤ M_n+1+L_n+∫^x_0ζ(u^(y,t_n+1-))dy+∫^t_n+1_t_n∑_0≤ x≤1(σ[η_∗]-[q_∗])dt
+ o(x),
where
M_n+1 is defined in (<ref>),
t_n+1-=(n+1)t-0 and o(x) depends only on the bound of solutions.
Now, in the previous section, we have constructed
u^(x,t) near the boundary x=1. Here, we concentrate on Case 1 in particular. For Cases 2 and 3, we refer to <cit.> and <cit.>.
§.§ Estimates of z^(x,t) for the case
where a shock arises near the boundary
We first consider z̃^_-. We recall that
z̃^_-=z_- -∫^x_2N_x-1_0ζ(u^_n,0(x))dx.
From (<ref>), we have z̃^_-≥ -M_n-L_n.
Since
ǔ^_-(x,t)=u^_-(x,t)+
O((x)^2),
recalling (<ref>), we have
z^_-(x,t) = z̃^_-
+ ∫^x_2N_x-1_0ζ(u^_n,0(x))dx+V(u_-)(t-t_n)+∫^x_x_2N_x-1ζ(ǔ^_-(y,t))dy
+g_1(x,t;ǔ^_-)(t-t_n)
≥ -M_n-L_n+ ∫^x_2N_x-1_0ζ(u^_n,0(x))dx+V(u_-)(t-t_n)
+∫^x_x_2N_x-1ζ(u^_-(y,t))dy
+g_1(x,t;u^_-)(t-t_n)
-o(x).
If z^_-(x,t_n+1-0)>-M_n-L_n+I^n_2N_x-1-√(x),
from (<ref>) and M_n+1=M_n+O(x), we obtain (<ref>)_2. Otherwise,
from the argument (<ref>),
regarding M_0-δ t in (<ref>) as M_n+L_n,
we have g_1(x,t;u^_-)>δ.
From (<ref>), we conclude (<ref>)_1.
Next, we consider z^_+.
We introduce the following lemma.
There exists a unique piecewise smooth entropy solution
(ρ(x,t), m(x,t))
containing the vacuum state (ρ=0) on D for the
problem (<ref>) and (<ref>)
satisfying
z(u(x,t))≥min(-w(u_-),z(u_-)),
w(u(x,t))≤max(w(u_-),0), ρ(x,t)≥0.
In this case, it follows from (<ref>) and
the above lemma that
z(u_+)≥min{-M_n-L_n-I^n_2N_x-1,-M_n-L_n+I^n_2N_x-1}.
On the other hand, we have
I^n_2N_x-1=I^n_2N_x+O(x),
where I^n_2N_x=∫^x_2N_x_0ζ(u^_n,0(x))dx.
Moreover, our approximate solutions satisfy the conservation of mass:
∫^1_0ρ^(x,t_n-)dx
=∫^1_0ρ_0(x)dx+o(1)
and the energy inequality:
∫^1_0η_∗(u^(x,t_n-))dx≤∫^1_0η_∗(u_0(x))dx+o(1).
From (<ref>), (<ref>)–(<ref>), we obtain
I^n_2N_x<-μ+O(x).
It follows from (<ref>) that
z(u_+)≥ -M_n-L_n+I^n_2N_x
by choosing x small enough.
Then, we have
z̃^_+
=z_+- ∫^x_2N_x_0ζ(u^_n,0(x))dx≥ -M_n-L_n.
Therefore, since ǔ^_+(x,t)=u^_+(x,t)+O((x)^2), we conclude that
z^_+(x,t) = z̃^_+
+ ∫^x_2N_x_0ζ(u^_n,0(x))dx+∫^x_x_2N_xζ(ǔ^_+(y,t))dy
+g_1(x,t;ǔ^_+)(t-t_n)
≥ z̃^_+
+ ∫^x_2N_x_0ζ(u^_n,0(x))dx+∫^x_x_2N_xζ(u^_+(y,t))dy
+g_1(x,t;u^_+)(t-t_n) .
In this case, we can obtain (<ref>)_1 in a similar manner to (<ref>). We can similarly obtain (<ref>)_2.
Our approximate solutions satisfy the following propositions (the proofs are similar to <cit.>–<cit.>, <cit.>, <cit.>).
The measure sequence
η_∗(u^)_t+q(u^)_x
lies in a compact subset of H_ loc^-1(Ω) for all weak entropy
pair (η_∗,q), where Ω⊂[0,1]×[0,1] is any bounded
and open set.
Assume that the approximate solutions u^ are bounded and satisfy Proposition <ref>. Then there is a convergent subsequence u^_n(x,t)
in the approximate solutions u^(x,t) such that
u^_n(x,t)→ u(x,t)
a.e., as n→∞.
The function u(x,t) is a global entropy solution
of the Cauchy problem (<ref>).
99
C1Chen, G.-Q.: Convergence of the Lax–Friedrichs scheme for
isentropic gas dynamics (III). Acta Mathematica Scientia 6, 75–120
(1986)
C2Chen, G.-Q.: The compensated compactness method and the system of isentropic gas dynamics. MSRI preprint 00527-91, Berkeley, 1990
CCSChueh, K. N., Conley, C. C. and Smoller, J. A.: Positively invariant regions for systems of nonlinear diffusion equations. Indiana Univ. Math. J.
26, 373–392 (1977)
D1DiPerna, R. J., Decay of solutions of hyperbolic systems of conservation laws with a convex extensions, Arch. Ration. Mech. Anal. 64, 1–46 (1977)
D2DiPerna, R.J.: Convergence of the viscosity method for isentropic gas dynamics. Commun. Math. Phys. 91, 1–30 (1983)
DC1Ding, X., Chen, G.-Q., Luo, P.: Convergence of the
Lax–Friedrichs scheme for isentropic gas dynamics (I)–(II). Acta Mathematica Scientia
5, 415–432, 433–472 (1985)
DC2Ding, X., Chen, G.-Q., Luo, P.: Convergence of the
fractional step Lax–Friedrichs scheme and Godunov scheme for the isentropic
system of gas dynamics. Commun. Math. Phys. 121, 63—84 (1989)
GGlimm, J., Solutions in the large for nonlinear hyperbolic systems of equations,
Comm. Pure Appl. Math. 18, 697–715 (1965)
GLGlimm, J., Lax, P. D., Decay of solutions of systems of nonlinear hyperbolic conservation laws,
Amer. Math. Soc. 101 (1970)
LLiu, T. P., Lax, P. D., Large time behavior of initial and initial-boundary-value problems of general systems
of hyperbolic conservation laws,
Comm. Math. Phys. 55, 163–177 (1977)
T1Tsuge, N.: Global L^∞ solutions of the compressible Euler equations with spherical symmetry.
J. Math. Kyoto Univ. 46, 457–524 (2006)
T2Tsuge, N.: Existence of global solutions for unsteady isentropic gas flow in a Laval nozzle. Arch. Ration. Mech. Anal. 205, 151–193 (2012)
T3Tsuge, N.: Isentropic gas flow for the compressible Euler equation in a nozzle. Arch. Ration. Mech. Anal. 209, 365–400 (2013)
T4Tsuge, N.: Existence and stability of solutions to the compressible Euler equations with an outer force. Nonlinear Anal. Real World Appl. 27, 203–220 (2016)
T5Tsuge, N.: Global entropy solutions to the compressible Euler equations in the isentropic nozzle flow for large data: Application of the generalized invariant regions and the modified Godunov scheme. Nonlinear Anal. Real World Appl. 37, 217–238 (2017)
T6Tsuge, N.: Global entropy solutions to the compressible Euler equations in the isentropic nozzle flow. In: Bressan, A., Lewicka, M., Wang, D., Zheng, Y. (eds.) Hyperbolic Problems: Theory, Numerics, Applications, AIMS on Applied Mathematics 10, 666–673 (2020)
T7Tsuge, N.: Existence of a time periodic solution for the compressible Euler equation with a time periodic outer force. Nonlinear Anal. Real World Appl. 53, 103080 (2020)
T8Tsuge, N.: Remarks on the energy inequality of a global L^∞ solution to the compressible Euler equations for the isentropic nozzle flow. Commun. Math. Sci., to appear
T9Tsuge, N.: Existence of a time periodic solution for the compressible Euler equation with a time periodic outer force in a bounded interval. Arch. Ration. Mech. Anal. 247, Paper No. 41 (2023)
T10Tsuge, N.: Decay of solutions of isentropic gas dynamics for large data. arXiv:2301.01022
|
http://arxiv.org/abs/2307.00218v1
|
20230701043159
|
Optical N-plasmon: Topological hydrodynamic excitations in Graphene from repulsive Hall viscosity
|
[
"Wenbo Sun",
"Todd Van Mechelen",
"Sathwik Bharadwaj",
"Ashwin K. Boddeti",
"Zubin Jacob"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall",
"cond-mat.str-el",
"physics.flu-dyn"
] |
equal contribution
Elmore Family School of Electrical and Computer Engineering, Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
equal contribution
Elmore Family School of Electrical and Computer Engineering, Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
Elmore Family School of Electrical and Computer Engineering, Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
Elmore Family School of Electrical and Computer Engineering, Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
[email protected]
Elmore Family School of Electrical and Computer Engineering, Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907, USA
Edge states occurring in Chern and quantum spin-Hall phases are signatures of the topological electronic band structure in two-dimensional (2D) materials. Recently, a new topological electromagnetic phase of graphene characterized by the optical N-invariant has been proposed. Optical N-invariant arises from repulsive Hall viscosity in hydrodynamic many-body electron systems, fundamentally different from the Chern and Z_2 invariants. In this paper, we introduce the topologically protected edge excitation – optical N-plasmon of interacting many-body electron systems in the topological optical N-phase. These optical N-plasmons are signatures of the topological plasmonic band structure in 2D materials. We demonstrate that optical N-plasmons exhibit fundamentally different dispersion relations, stability, and edge profiles from the topologically trivial edge magneto plasmons. Based on the optical N-plasmon, we design an ultra sub-wavelength broadband topological hydrodynamic circulator, which is a chiral quantum radio-frequency circuit component crucial for information routing and interfacing quantum-classical computing systems. Furthermore, we reveal that optical N-plasmons can be effectively tuned by the neighboring dielectric environment without breaking the topological properties. Our work provides a smoking gun signature of repulsive Hall viscosity and opens practical applications of topological electromagnetic phases of two-dimensional materials.
Optical N-plasmon: Topological hydrodynamic excitations in Graphene from repulsive Hall viscosity
Zubin Jacob
=================================================================================================
§ INTRODUCTION
Over the past few decades, the discoveries of topological phases and protected edge excitations of two-dimensional materials have gained a prominent role in condensed matter physics and photonics <cit.>. In graphene, the Chern invariant (C ∈ Z) originating from complex electron next-nearest-neighbor (NNN) hopping was first proposed to achieve a topological electronic phase without external magnetic fields <cit.>. The study of the corresponding chiral edge charge transport inspired discoveries beyond condensed matter physics <cit.>, in photonics <cit.>, cold atoms <cit.>, and acoustics <cit.>. On the other hand, the Z_2 invariant (ν∈ Z_2) emerges in graphene in the presence of spin-orbit coupling and characterizes the quantum spin Hall phase <cit.>. Insights into the associated chiral edge spin transport have driven potential applications in spintronics <cit.> and topological light sources <cit.>.
Recently, a new topological electromagnetic phase of graphene characterized by the optical N-invariant (N ∈ Z) was proposed <cit.>. This new topological phase arises only in the hydrodynamic regime of the interacting many-body electron system. The optical N-invariant characterizes the topology of bulk plasmonic as opposed to the electronic band structure and arises from the Hall viscosity of electron fluids. It is fundamentally different from the Chern and Z_2 invariant characterizing the topology of bulk electronic bands in graphene <cit.>. Inspired by this development, in this article, we introduce the topologically protected edge state – optical N-plasmon of this topological optical N-insulator and explore potential applications as well as control techniques.
Recent interest has focused on the hydrodynamic regime of graphene in the electronic context, such as the violation of the Wiedemann-Franz law <cit.> and negative local resistance <cit.>. However, we note that the unique topological plasmonic behavior in the hydrodynamic regime is relatively unexplored. Our article here combines electrodynamics and hydrodynamics of graphene to uncover the topological properties. A related quantity, Hall viscosity in the static regime was measured for the first time recently <cit.> even though the theoretical prediction was made two decades ago <cit.>. It was shown that ν_H is connected to non-local Hall conductivity, non-local gyrotropy, and topological acoustic waves <cit.>.
In this paper, we introduce the optical N-plasmon, a unique topological edge excitation that only occurs in the many-body interacting hydrodynamic regime of graphene. We demonstrate that optical N-plasmons are fundamentally different from topologically-trivial edge magneto plasmons (EMPs), including the conventional EMP <cit.>, in the characteristic dispersion relations, stability with respect to edge disorders, and edge profiles. We show that the dispersion of optical N-plasmons exhibits a nontrivial topological nature and closes the bulk plasmonic bandgap (Fig. <ref>(a)). In stark contrast, dispersions of topologically-trivial EMPs fail to do so in general (Fig. <ref>(b)). We further reveal that, since the optical N-plasmon is topologically protected, it is not sensitive to either sharp boundary defects or edge disorders that can change the nature of electron-boundary scattering from diffusive to specular (see Fig. <ref>(c)). In contrast, EMPs not protected by topology can suffer from back-scattering and are generally unstable when certain edge disorders are present (see Fig. <ref>(d)). Finally, we discuss how optical N-plasmons provide an experimental smoking-gun signature of the optical N-invariant in the 2D electron fluid.
Our study provides a rigorous comparison of different regimes for the emergence of optical N-plasmons and other plasmonic excitations in graphene. Graphene provides an important platform for studying plasmonic excitations in the 2D interacting many-body electron system in different regimes <cit.>. In the non-interacting 2D electron gas (2DEG) regime, conventional gapless graphene plasmons and gapped graphene magneto plasmons were studied by identifying the zeros of dielectric functions <cit.>. We notice that non-local effects on conventional graphene (magneto) plasmons are considered within the random phase approximation <cit.>. Conventional EMPs also emerge in the non/weakly interacting regime where the 2D electron system can be described by the Euler equation without any viscous term <cit.>. Meanwhile, optical N-plasmons proposed in this article emerge only in the strongly-interacting hydrodynamic flow regime with repulsive Hall viscosity. We obtain dispersion relations of optical N-plasmons by finding the propagating solutions of the underlying hydrodynamic equations. Therefore, non-local effects originating from the viscous hydrodynamic model are naturally included. We develop an electromagnetic-hydrodynamic simulation based on the multiphysics model combining the linearized Navier-Stokes equations and electromagnetic equations. We employ experimentally-relevant parameters for simulating the optical N-plasmons in the 2D graphene interacting many-body electron system.
Based on the optical N-plasmons, we propose the design of an ultra sub-wavelength broadband topological hydrodynamic circulator. Circulators are non-reciprocal circuit components important for microwave communications and quantum-classical information routing <cit.>.
Many conventional ferrite or plasmonic circulator designs <cit.> are based on chiral EMPs not protected by topology <cit.>. The topological hydrodynamic circulator inherits robustness from optical N-plasmons, and the circulation behavior will not be perturbed by boundary defects or edge disorders. We simulate the performance of the proposed topological circulator with realistic graphene parameters. We show that the simulated frequency, momentum, and edge profile of the optical N-plasmon match well with the topological theory.
We reveal that the optical N-plasmon can be effectively tuned by the neighboring dielectric environment without breaking its topological properties.
Engineering plasmon properties is crucial for manipulating light in nano-devices <cit.>. We study the properties of optical N-plasmons in both transparent and opaque neighboring dielectric environments. We show that without introducing electrical contacts or structure deformations <cit.>, group velocities of optical N-plasmons can be tuned in a contact-free manner by controlling the fringing fields in neighboring dielectric materials. The controllability and the aforementioned compact and topological nature indicate potential applications of the optical N-plasmons in graphene plasmonics <cit.>.
The paper is organized as follows. In Sec. <ref>, we discuss the hydrodynamic electron flow model and optical N-invariant. In Sec. <ref>, we study the dispersions and profiles of optical N-plasmons and other bulk and edge excitations in hydrodynamic electron fluids. We demonstrate the fundamental differences between the optical N-plasmon and other topologically trivial EMPs. In Sec. <ref>, we present the circulation of optical N-plasmons in the hydrodynamic topological circulator based on graphene electron fluids. In Sec. <ref>, we study the properties of optical N-plasmons in different neighboring dielectric environments. Section <ref> summarizes the paper and indicates further applications of optical N-plasmons for future research.
§ OPTICAL N-INVARIANT
For completeness, we first summarize some key aspects of the topological optical N-invariant in graphene's viscous Hall fluid. Interacting many-body electron systems in various two-dimensional (2D) materials can be described by the hydrodynamic electron flow model when the momentum-conserving electron-electron scattering is dominant <cit.>. The optical N-invariant classifies the electromagnetic topology in the presence of electron-electron interactions through the bulk atomistic susceptibility tensor. It was shown that the optical N-invariant is the winding number of the atomistic susceptibility tensor <cit.>. This response function tensor is a many-body Green's function of the system, which has both spatial and temporal dispersion (i.e., momentum and frequency dependence). Here, due to the f-sum rule <cit.> and Hall viscosity ν_H, the susceptibility tensor is properly regularized. As a result, the originally unbounded 2+1D momentum-frequency space of this continuum model can be compactified and is topologically equivalent to S^2× S^1 <cit.>. Through the Green's function formalism <cit.>, a quantized integer topological invariant – optical N-invariant can be defined for this interacting many-body system <cit.>:
N=sgn(ω_c)+sgn(ν_H),
where ω_c is the cyclotron frequency and ν_H is the Hall viscosity. The topological phase is characterized by N=± 2 in the presence of a repulsive Hall viscosity ω_c ν_H >0, and the topologically trivial phase is characterized by N=0 with ω_c ν_H < 0. The optical N-invariant represents the topological property of the bulk plasmonic band structure and is fundamentally different from the Chern invariant and Z_2 invariant that are related to the bulk electronic band structure <cit.>.
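In code, this classification is just a sign check (a minimal Python sketch; the function name is ours):

def optical_N_invariant(omega_c, nu_H):
    # N = sgn(omega_c) + sgn(nu_H):
    # N = +/-2 (topological phase) when omega_c * nu_H > 0 (repulsive Hall viscosity),
    # N = 0 (trivial phase) when omega_c * nu_H < 0 (attractive Hall viscosity).
    sgn = lambda x: (x > 0) - (x < 0)
    return sgn(omega_c) + sgn(nu_H)
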
We emphasize that the topological hydrodynamic excitations in this paper have important implications beyond the linearized continuum model and can be generalized to include the lattice symmetry and local-field effects <cit.>. The topological protection is robust beyond the linear regime. The proof is related to the recently developed viscous Maxwell-Chern-Simmons theory, which connects the optical N-invariant with spin-1 eigenvalues at high-symmetry points <cit.> in momentum space. The U(1) gauge field of the 2D interacting fluid has a twist captured by the flip of spin-1 eigenvalues at high symmetry points. Thus any impurity or perturbation which does not cause spin-flipping for ultra-subwavelength (high momentum) plasmonic waves will not open the bandgap (between edge and bulk plasmonic states) in a topological optical-N insulator.
The optical N-plasmon introduced in this paper occurs on the edge and is a smoking gun signature of repulsive Hall viscosity. For the bulk magneto-plasmons in hydrodynamic graphene, there exists a spin-1 skyrmionic behavior in momentum space <cit.>. The experimental probe of this momentum space skyrmion was predicted to be evanescent magneto-optic Kerr effect (e-MOKE) spectroscopy <cit.>. The sign change of the e-MOKE angle can shed light on this unique optical N-invariant of matter. We note that the Chern invariant and Z2 invariant of graphene do not capture these effects arising only in the many-body interacting hydrodynamic regime.
Finally, we note that these unique edge states occur as super-symmetric partners between spin-1 excitations in Maxwell's equations and spin-1/2 fermions in the Dirac equation <cit.>. Gyrotropy in Maxwell's equations is analogous to an effective photon mass when compared to mass in the 2D Dirac equation <cit.>. The topological edge excitations in Maxwell's equations only occur from dispersive photon mass (non-local gyrotropy) of a specific sign: i.e., repulsive Hall viscosity. On the other hand, attractive Hall viscosity leads to a topologically trivial phase. Thus the signature of an optical N-phase in an ideal model can be considered to be massive spin-1 excitations in the bulk and massless linearly dispersing spin-1 excitations on the edge. However, no candidate material was known to exhibit these effects. One of our aims is to prove that graphene can exhibit these unique effects for experimental exploration.
§ OPTICAL N-PLASMONS
§.§ Hydrodynamic electron flow model
In this part, we present the hydrodynamic electron flow model considered in this work.
In the hydrodynamic regime, electron transport is governed by the linearized Navier-Stokes equation with a viscous term <cit.>. The anti-symmetric part of the viscous tensor, Hall viscosity ν_H, can emerge in the 2D electron fluid when both time-reversal and parity symmetries are broken by an external magnetic field <cit.>. This non-dissipative Hall viscosity ν_H was first measured in ultra-clean graphene <cit.>. For an interacting many-body electron system, when the momentum-conserving electron-electron scattering is dominant, the linearized Navier-Stokes equations describing the hydrodynamics of electrons in 2D is <cit.>:
∂𝐉/∂ t=-v_s^2∇ρ - (γ-ν∇^2) 𝐉
- (ω_c+ν_H ∇^2) 𝐉×ẑ +e^2n_0/m𝐄,
∂_tρ+∇·𝐉=0.
Here, 𝐉=(J_x,J_y) is the 2D current density, v_s^2=v_F^2/2 represents the compressional wave velocity in the 2D electron fluid, v_F is the Fermi velocity, ρ is the charge density, γ is the damping rate, ν is the ordinary shear viscosity, ω_c=eB/(mc) is the cyclotron frequency, ν_H is the Hall viscosity, n_0 is the electron density, and m is the effective electron mass. Within the quasi-static approximation, electric field 𝐄=-∇ϕ. In the absence of external free charges out of the electron fluid plane, 𝐄 arises from the fringing fields in the surrounding medium. The second term on the RHS of Eq. (<ref>) describes dissipation in the electron fluid. Meanwhile, the third term (ω_c+ν_H ∇^2) 𝐉×ẑ is dissipation-less and will only emerge in the 2D electron fluid when both time reversal symmetry 𝒯 and parity symmetry 𝒫 are broken at the same time by an external magnetic field B. Continuity Eq. (<ref>) describes the charge conservation law for electrons.
§.§ Bulk magneto plasmons
We first discuss the bulk magneto plasmons in the 2D electron fluid with Hall viscosity. We consider the dielectric material surrounding the 2D electron fluid is isotropic with an effective permittivity tensor ε=εI. From Eq. (<ref>,<ref>), in the low-loss limit (γ,ν→ 0), we can solve the dispersion of bulk magneto plasmons by considering propagating bulk modes of the form e^i(𝐪·𝐫 - ω t):
ω^2=2π e^2 n_0 |q|/m ε+v_s^2 q^2 +(ω_c - ν_H q^2)^2.
Here, the last term in Eq. (<ref>) originates from the external magnetic field and opens the bandgap between bulk bands at q=0. As q→ + ∞, the bulk magneto plasmon dispersion ω(q) is dominated by the Hall viscosity term and shows the asymptotic behavior ω=𝒪(q^2). In contrast, in the absence of ν_H, the conventional bulk magneto plasmon dispersion is dominated by the v_s^2 q^2 term with ω=𝒪(q) when q→ + ∞. We focus on the transparent surrounding material with ε>0. Hence, the bandgap between bulk bands will be opened for all momentum q. The shape of the bulk magneto plasmon dispersion largely depends on the Hall viscosity ν_H and dielectric permittivity of the surrounding medium ε. We can define a unitless value ℬ to classify two different shapes of bulk bands:
ℬ=(2 ν_H ω_c - v_s^2)^3 m^2 ε^2 /27 π^2 e^4 ν_H^2 n_0^2.
For ℬ⩽ 1, bulk bands will monotonically increase with momentum |q|. For ℬ>1, bulk bands will have a Mexican hat shape. In Fig. <ref>, we show these two classes of bulk bands by magenta curves (Fig. <ref>(a-c,g-i) for ℬ⩽ 1, Fig. <ref>(d-f) for ℬ > 1). In the figures, q̃=q v_s/ω_c and ω̃=ω/ω_c are the unitless momentum and frequency normalized by characteristic parameters of the system. It is worth noting that for ℬ⩽ 1, the bandgap of bulk bands Δ=2ω_c is only determined by the external magnetic field B. For ℬ>1, the bandgap of bulk bands Δ<2ω_c is also controlled by material properties and permittivity of the surrounding dielectric material.
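As a minimal numerical illustration of these two formulas (a Python sketch; Gaussian-type units are implied by the expressions above, and all parameter values passed in are placeholders to be replaced by the experimentally relevant ones):

import numpy as np

def bulk_dispersion(q, n0, m, eps, v_s, omega_c, nu_H, e):
    # omega^2 = 2*pi*e^2*n0*|q|/(m*eps) + v_s^2*q^2 + (omega_c - nu_H*q^2)^2
    return np.sqrt(2.0 * np.pi * e**2 * n0 * np.abs(q) / (m * eps)
                   + v_s**2 * q**2 + (omega_c - nu_H * q**2)**2)

def band_shape_parameter(n0, m, eps, v_s, omega_c, nu_H, e):
    # B = (2*nu_H*omega_c - v_s^2)^3 * m^2 * eps^2 / (27*pi^2*e^4*nu_H^2*n0^2);
    # B <= 1 gives monotonic bulk bands, B > 1 gives Mexican-hat-shaped bulk bands.
    return ((2.0 * nu_H * omega_c - v_s**2)**3 * m**2 * eps**2
            / (27.0 * np.pi**2 * e**4 * nu_H**2 * n0**2))
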
§.§ Optical N-plasmons
In this part, we demonstrate the nontrivial topological properties of optical N-plasmons. We compare the dispersions of optical N-plasmons and other topologically trivial edge states in 2D electron fluid. Conventional edge states are usually believed to depend on boundary conditions sensitively <cit.>. In contrast, we show that the topologically protected optical N-plasmons are not sensitive to boundary conditions at the 2D electron fluid boundaries.
For the 2D electron fluid, fringing fields in the surrounding media and electron fluid boundary conditions complicate the edge problems significantly <cit.>. The fringing fields mediate the interactions between quasi-static charges in the 2D plane and contribute to an effectively non-local potential. We solve the edge problem with the non-local potential fully by numerical simulations in section <ref>. In this section, we adopt the Fetter approximation, which can provide accurate dispersions of edge states except in the long-wavelength limit (q → 0) <cit.>. Boundary conditions of the 2D electron fluid are microscopically determined by degrees of edge disorders and the mechanism of electron-boundary scattering, and can be characterized by the slip length l_s <cit.>. Many different factors, including charge density n_0, temperature, and smoothness of material boundaries, can influence l_s. In the low-loss limit, electron fluid boundary conditions can be written in terms of l_s <cit.>:
[t̂·ς·n̂+t̂·𝐉/l_s]=0,
ς=[ -∂_x J_y - ∂_y J_x ∂_x J_x -∂_y J_y; ∂_x J_x -∂_y J_y ∂_x J_y + ∂_y J_x ],
where t̂, n̂ are the unit vectors in the tangential and normal directions. The two extreme cases of electron fluid boundary conditions, no-slip and no-stress, correspond to slip length l_s = 0 and l_s = ∞, respectively. These two regimes could happen for the viscous electron fluid when the electron-boundary scattering is diffusive (no-slip) or specular (no-stress). In the intermediate regime where 0<l_s<∞, the finite-slip boundary condition is appropriate, where part of electron momentum is lost in the electron-boundary scattering process.
In Fig. <ref>, we show the dispersions of edge excitations when the viscous Hall electron fluid is in different topological phases and under various boundary conditions. The derivations of the edge state dispersion and the material parameters are given in Appendix <ref>. The bulk-boundary correspondence guarantees that for the topological phase characterized by N=2, optical N-plasmons always exist in the bandgap of bulk bands. Dispersion of optical N-plasmons is marked by cyan curves in Fig. <ref>(a - c) for bulk bands with ℬ⩽ 1 and in Fig. <ref>(d - f) for bulk bands with ℬ > 1. Optical N-plasmons can connect the bulk bands in both cases. Conventional edge states are usually considered to be sensitive to boundary conditions <cit.>. In contrast, as is shown in Fig. <ref>(a - f), the dispersion of the optical N-plasmon is independent of the fluid boundary conditions since it is protected by topology. This reveals the advantages of optical N-plasmons for practical applications in information technology, where stability is highly required. As a result, we simulate the performance of a circulator in section <ref> based on the optical N-plasmons. Apart from the optical N-plasmon, some other types of edge magneto plasmons (EMPs) can also exist in the bandgap under extreme boundary conditions (l_s=0 or ∞). These EMPs may also be chiral (CEMPs) and are marked by yellow solid and dashed curves in Fig. <ref>. It is worth noting that although optical N-plasmons are always chiral, CEMPs are not necessarily protected by topology. CEMPs exist in the bandgap due to the anomalous bulk-boundary correspondence under specific boundary conditions. It is related to the scattering of bulk modes at the boundary and “ghost edge modes” at infinite frequency <cit.>. Dispersions of CEMPs are very sensitive to boundary conditions. As is shown in Fig. <ref>(a, c, d, f), by continuously deforming the shape of bulk bands without closing the bandgap at any q̃ point, the group velocity of CEMP can be reversed, which is in contrast to the optical N-plasmon. The frequency windows where only optical N-plasmons can be excited are marked by the yellow-cyan regions.
In Fig. <ref>(g - i), we present the dispersions of edge states for the topologically trivial N=0 phase. Here, optical N-plasmons do not exist. Despite some unidirectional frequency windows existing under extreme cases of boundary conditions (marked by yellow region), the dispersions of these CEMPs are not stable under varying boundary conditions. Furthermore, the bandgap of bulk bands can not be connected under all boundary conditions. This is because the bulk material is in a topologically trivial phase, and no edge state is protected by topology. Figure. <ref>(j) shows the dispersion of the conventional Fetter edge magneto plasmons <cit.> (FEMP) for ν_H=0. In this case, since the unbounded momentum space can not be compactified due to the absence of Hall viscosity ν_H, no topological interpretation exists for the bulk. As a result, FEMP is unidirectional but not topological and can not connect bulk bands. Hence, FEMP is not guaranteed to be immune to back-scattering at the boundary defects.
As is shown in Fig. <ref>(k - n), optical N-plasmons (solid cyan curves) have distinct normal profiles compared with CEMP and FEMP (solid and dashed yellow curves). Here, δρ represents the charge density variation of different types of normalized edge states ψ̃. x̃=x ω_c/v_s is the normalized unitless distance from the fluid boundary. For optical N-plasmons, despite the confinement being related to the Hall diffusion length D_H=√(ν_H/ω_c) and may vary with Hall viscosity, δρ(x̃=0)=0 is always valid. For CEMP and FEMP, δρ(x̃=0)≠ 0. This difference is because the dispersion of optical N-plasmons is independent of while dispersions of CEMP and FEMP are sensitive to boundary conditions. With some algebra, we can prove that in the low-loss limit, δρ(x̃=0)=0 is a necessary and sufficient condition for ψ̃ to be independent of varying boundary conditions (see Appendix A).
In the next section, we employ optical N-plasmons to design an ultra sub-wavelength broadband topological hydrodynamic circulator.
§ TOPOLOGICAL HYDRODYNAMIC CIRCULATOR
§.§ Fringing fields and non-local in-plane potential
For the 2D electron fluid confined in the z=0 plane, the fringing fields out of the plane introduce a non-local effect in the coupling between the charge density ρ and in-plane potential ϕ. The non-local coupling between ϕ and ρ confined in the 2D domain Ω and free charges ρ_f out of the plane is:
ϕ(t,𝐫)=4π/ε∫_Ω d𝐫'G(𝐫,𝐫')ρ(t,𝐫')+ϕ_f(t,𝐫),
where ϕ_f(t,𝐫)=4π∫ d𝐑_0 G(𝐫,𝐑_0)ρ_f(t,𝐑_0)/ε is the electric potential generated from the free free charges ρ_f, ε is the effective permittivity of the surrounding medium, 𝐫, 𝐫' denote the 2D coordinates, 𝐑_0 denotes the 3D coordinates, G is the scalar Green's function,
G(𝐫,𝐫')=1/4π |𝐫-𝐫'|.
Here, in contrast to the 3D case, for the 2D electron fluid, no simple differential operator with respect to the 2D coordinates 𝐫 can relate ϕ and ρ locally <cit.>. In this section, we develop an electromagnetic-hydrodynamic simulation to solve the coupled Eqs. (<ref>) and (<ref>).
§.§ Graphene-based topological hydrodynamic circulator
Optical N-plasmons at the edge of the viscous Hall electron fluid in the N=2 phase are fundamentally protected by the topology and are robust against fluctuations. As a result, it is well suited for applications in information processing. In this section, we propose the design of an ultra sub-wavelength broadband topological hydrodynamic circulator based on the optical N-plasmons in graphene.
The schematic of the 3-port circulator design is demonstrated in Fig. <ref>(a). Graphene with the Y-shape circulator geometry (gray region) is on top of the isotropic dielectric material with permittivity ε_b (blue bulk). In this case, effective permittivity ε=ε_b/2. Graphene is required to be ultra-clean so that the interacting electrons can be described by the hydrodynamic flow model. A static external magnetic field is applied in the graphene region, and Hall viscosity can emerge in the system since the time-reversal symmetry and parity symmetry are broken. For repulsive Hall viscosity (ν_H ω_c >0), viscous Hall electron fluid in graphene will be in the topological N=2 phase. Three oscillating electric dipoles with oscillation frequency ω_s are placed on top of each port. These dipoles are used to excite optical N-plasmons in the circulator. Hence, ω_s is considered to be in the bandgap of bulk bands. The possible boundary defects of the circulator are captured by a sharp corner in port 2. Since optical N-plasmons are unidirectional and immune to back-scattering, the topological circulation behavior from port 1 → 2 → 3 will not be interfered with by the boundary defect. Reversing the direction of the magnetic field realizes the topological phase transition into N=-2, and the circulator will have an opposite circulation direction port 3 → 2 → 1 accordingly.
We employ the finite element method to simulate the topological hydrodynamic circulator in the time domain and demonstrate the topological circulation behavior of optical N-plasmons in Fig. <ref>(b). We also provide a supplementary video generated from the electromagnetic-hydrodynamic simulations. The graphene region is described by Eqs. (<ref>,<ref>) and a finite slip boundary condition 0<l_s<∞ is applied at the boundary of graphene. In the simulations, we employ experimental graphene parameters (Appendix. <ref>) in the low-loss limit with ℬ<1 and consider a high index substrate with ε=50 under graphene. An external magnetic field B=2 T is applied in the graphene region with a port width of 329 nm. The three dipoles on top of each port with oscillation frequencies ω_s=ω_c/2 contribute to ϕ_f in Eq. (<ref>). Their projections in the graphene plane are marked by red stars. Inside the graphene region, normalized charge density variations δρ are represented by the colorbar. From the simulations, it is clear that the excited optical N-plasmons at red stars will flow unidirectionally from port 1 → 2 → 3. Optical N-plasmons cross the sharp corner in port 2 smoothly without back-scattering.
In Fig. <ref>(c), we show the dispersion relation (cyan curve) of optical N-plasmons with the graphene parameters considered in our simulations. We mark the optical N-plasmons excited by the oscillating dipoles in our simulations with the red star in the frequency and momentum space. In Fig. <ref>(d), we show the normal profile of the normalized edge excitation at t̃≈23 and compare the charge density variations along cut line segment ℓ in port 1 (ℓ is marked by the magenta line in Fig. <ref>(b)). x̃ represents the distance from the fluid boundary. t̃=t ω_c is the unitless time normalized by the characteristic timescale of the system. The simulation results in each mesh along ℓ match the theory predictions (cyan curve) well. The small deviations at larger x̃ are related to coarser meshes in that region. δρ(x̃=0)=0 shows that the excited edge states in Fig. <ref>(b) are optical N-plasmons protected by topology instead of other types of chiral edge states. In Fig. <ref>(e), we study δρ(t) at points A and B and show the corresponding charge density variations (points A and B are marked by the magenta dots in Fig. <ref>(b)). The theory curves are plotted by fitting the simulation results at point A using a sine curve with period 2π/ω and translationally shifting it by d_AC/v_p to get theory results at point B. We can see that the frequency ω and propagation velocity v_p of the simulated optical N-plasmons match with their theoretical counterparts.
The performance of a circulator can be evaluated based on many aspects, including the form factor, isolation, and bandwidth. Form factor ℱ indicates the relative size of a circulator with respect to its working frequency range. In our design, ℱ is determined by the confinement of optical N-plasmons D_H=√(ν_H/ω_c) and can be defined as the ratio between port width d and vacuum wavelength corresponding to the frequency ω_s. For the simulated circulator performance in Fig. <ref>(b), ℱ≈ 2.5× 10^-3, revealing that this topological hydrodynamic circulator design is ultra-compact. Since no back-scattering is allowed by the topology and unidirectional optical N-plasmons are the only allowed state in the bandgap, this topological circulator should possess much larger isolation compared with other designs based on topologically-trivial edge states. The bandwidth of this topological circulator is determined by the bandgap of bulk bands. With an external magnetic field B=2 T, bandwidth BW=Δ=2 ω_c ≈ 4.5 THz is ultrawide. It is worth noting that the performance of this design, including form factor, bandwidth, and response speed, can be effectively tuned and optimized by changing the external magnetic field or surrounding media (see section <ref>). All the discussions above show that the proposed design is ultra sub-wavelength, broadband, tunable, and can operate in the THz range. This reveals that the topological hydrodynamic circulator can play an important role in next-generation information routing and interfacing quantum-classical computing systems.
§ CONTACT-FREE OPTICAL N-PLASMON CONTROL WITH NEIGHBORING DIELECTRIC ENVIRONMENT
Although fringing fields in the surrounding medium can cause intrinsic non-locality in Eq. (<ref>) and complicate the problem greatly, they can offer new flexibility to tune optical N-plasmons. The existence of optical N-plasmons is guaranteed by topology regardless of the neighboring dielectric materials, but it is possible to exploit the surrounding medium to tune and optimize the optical N-plasmon properties in nano-devices without introducing electrical contacts in the viscous electron fluid. In this section, we study the influence of the surrounding dielectric environment on optical N-plasmons and the topological hydrodynamic circulator.
We consider that the 2D graphene viscous electron fluid in the N=2 phase is on top of the isotropic transparent material with positive dielectric constant ε. The bandgap of bulk bands is always connected by optical N-plasmons at the edge. In Fig. <ref>(a), we show that the confinement of optical N-plasmons is not sensitive to ε. We demonstrate that the charge density variations corresponding to the normal profiles of the normalized optical N-plasmon states δρ(x̃) at ε=10, 30, 100 are similar. This is because the confinement of δρ(x̃) is determined by the Hall diffusion length D_H=√(ν_H/ω_c) independent of ε. In Fig. <ref>(b), we present that the group velocity v_g=v_s d ω̃/d q̃ of optical N-plasmons can be effectively tuned by ε. By changing ε from 5 to 100, group velocity v_g of the optical N-plasmon with ω̃=0.5 can be modulated by a factor of 10. It is worth noting that v_g > v_s=v_F/√(2) and can approach v_s asymptotically in the large ε limit. For the topological circulator design, the stable confinement of optical N-plasmons reveals that the circulator can always be ultra-compact, and the controllable v_g indicates a tunable response speed of the topological circulator.
§ CONCLUSION
To summarize, we introduce the optical N-plasmon, which is the topologically protected edge excitation of the two-dimensional hydrodynamic electron flow with repulsive Hall viscosity. Optical N-plasmons are fundamentally different from conventional chiral/Fetter EMPs in three aspects: dispersion relations, stability with respect to edge disorders, and edge profiles. We propose an ultra sub-wavelength broadband topological hydrodynamic circulator based on optical N-plasmons, which is a chiral quantum radio-frequency circuit component crucial for information routing and interfacing quantum-classical computing systems. The topological circulator has a robust performance when boundary defects and edge disorders are present. The simulated optical N-plasmons circulating in the circulator ports show a good match with the theory. We demonstrate that group velocities of optical N-plasmons can be tuned in a contact-free manner by controlling the fringing fields in neighboring dielectric materials. Our work provides an experimental signature of repulsive Hall viscosity and opens practical applications of the new topological electromagnetic phase of two-dimensional materials. Moreover, the compact, tunable, topologically protected optical N-plasmons can have further applications in various fields, including graphene plasmonics <cit.>, plasmonic metamaterials <cit.>, nonreciprocal quantum devices <cit.>.
§ ACKNOWLEDGEMENTS
This work was supported by the Defense Advanced Research Projects Agency (DARPA) under Nascent Light-Matter Interactions (NLM) program and U.S. Department of Energy (DOE), Office of Basic Sciences under DE-SC0017717.
§ EDGE EXCITATIONS OF THE HYDRODYNAMIC ELECTRON FLUID
In this appendix, we provide solutions to the edge excitations of the hydrodynamic electron flow model based on the Fetter approximation <cit.>.
Assuming the hydrodynamic electron fluid has the half plane geometry in the x>0 region. The propagating edge modes have the form f(x,y,t)=f(x)e^i(qy-ω t), where f can be the charge density variation δρ or the 2D current density 𝐉. As we have shown in Eq. (<ref>), the fringing fields out of the electron fluid plane introduce intrinsic non-locality in the electromagnetic potential ϕ. Instead of solving this complex non-local problem for edge state dispersion, we consider an approximate integral kernel that makes the Poission's equation effectively local <cit.>. The non-local integral form of Poission's equation can thus be replaced by the differential equation:
∂^2 ϕ(x)/∂ x^2 - 2q^2 ϕ(x) = 4π |q| ρ(x)/ε.
This Fetter approximation provides accurate dispersion relations for our study except in the long-wavelength limit (q → 0), where the asymptotic behavior of the exact solution can not be recovered <cit.>. Combining Eq. (<ref>) and Eq. (<ref>), the dispersions ω=ω(q) of edge modes of the form e^i(qy-ω t) are given by the following coupled equations <cit.>:
-α^4 q̃'^6 + (2α^2-1-α^4 q̃^2) q̃'^4 + (ω̃^2-ω̃_b^2-Ω̃_p^2+α^4 q̃^4) q̃'^2 + q̃^2 (ω̃^2-1) = 0,
det [F_ij]_3 × 3 = 0,
where [F_ij] is a 3×3 matrix corresponding to boundary conditions:
F_1j=√(2)|q̃| + η_j,
F_2j=[ω̃η_j - (1-α^2q̃'_j^2)q̃] (q̃'_j^2+q̃^2) / q̃'_j^2,
F_3j^0=[ω̃q̃ - (1-α^2q̃'_j^2)η_j] (q̃'_j^2+q̃^2) / q̃'_j^2,
F_3j^+∞=( 2q̃[ω̃q̃ - (1-α^2q̃'_j^2)η_j] - ω̃q̃'_j^2 ) (q̃'_j^2+q̃^2) / q̃'_j^2,
where α=√(ω_cν_H)/v_s is a unitless constant determined by the electron fluid. ω̃=ω/ω_c and q̃=v_s q/ω_c are the normalized frequency and momentum. Equation (<ref>) is a cubic equation with respect to q̃'^2, and q̃'_i^2 is the ith root of Eq. (<ref>). η_i=√(q̃^2-q̃'_i^2) with Re η_i ⩾ 0. Here, the plasma frequency Ω̃_p=β√(|q̃|), the bulk plasmonic band dispersion ω̃_b^2=(1-α^2 q̃^2)^2+q̃^2+β^2|q̃|, and β=√(2π e^2 n_0/(m εω_c v_s)). When the electron fluid boundary condition is no-slip with l_s=0 or no-stress with l_s=+∞, F_3j=F_3j^0 or F_3j=F_3j^+∞, respectively. For the finite slip boundary condition, F_3j=F_3j^+∞ + κ F_3j^0, where κ is a constant determined by l_s.
In Fig. <ref>(a-c), we consider α=0.6817, β = 1.5263, corresponding to the monolayer graphene experimental parameters in Table <ref>. In Fig. <ref>(d-f), we consider α=3.9358, β = 1.5263 for the ℬ>1 case. In Fig. <ref>(g-i), we consider α=0.6817i, β = 1.5263 for the topologically trivial N=0 phase. In Fig. <ref>(j), we consider ν_H=0 and the other parameters are the same as the monolayer graphene parameters. In Fig. <ref>, we consider the monolayer graphene parameters, and β is mainly determined by the neighboring medium permittivity ε.
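For concreteness, the coupled conditions above can be solved numerically: at fixed q̃ one finds the three roots q̃'_j^2 of the cubic, forms the 3×3 matrix F, and scans ω̃ for a zero of its determinant. The following is a rough Python sketch of that procedure (a crude grid search rather than a proper root finder, using the no-slip row F_3j = F_3j^0 and the monolayer-graphene values α = 0.6817, β = 1.5263 quoted above; the grid ranges are arbitrary choices of this sketch):

```python
import numpy as np

ALPHA, BETA = 0.6817, 1.5263  # monolayer-graphene values quoted above

def det_F(w, q, alpha=ALPHA, beta=BETA):
    """Boundary-condition determinant at normalized (w, q), no-slip case."""
    Wp2 = beta**2 * abs(q)                                   # plasma term
    wb2 = (1 - alpha**2 * q**2)**2 + q**2 + Wp2              # bulk band
    # cubic in X = q'^2 from the dispersion condition above
    coeffs = [-alpha**4,
              2 * alpha**2 - 1 - alpha**4 * q**2,
              w**2 - wb2 - Wp2 + alpha**4 * q**4,
              q**2 * (w**2 - 1)]
    X = np.roots(coeffs)                                     # the three roots q'_j^2
    eta = np.sqrt(q**2 - X + 0j)
    eta = np.where(eta.real < 0, -eta, eta)                  # enforce Re(eta_j) >= 0
    F = np.empty((3, 3), dtype=complex)
    F[0] = np.sqrt(2) * abs(q) + eta
    F[1] = (w * eta - (1 - alpha**2 * X) * q) * (X + q**2) / X
    F[2] = (w * q - (1 - alpha**2 * X) * eta) * (X + q**2) / X   # F_3j^0 (no slip)
    return np.linalg.det(F)

# crude edge-mode search at fixed momentum: pick the frequency minimizing |det F|
q = 0.5
ws = np.linspace(0.05, 1.5, 2000)
w_edge = ws[np.argmin([abs(det_F(w, q)) for w in ws])]
print(f"approximate edge-mode frequency at q = {q}: {w_edge:.3f}")
```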
The solutions to δρ (x) of edge modes are:
δρ (x) ∝∑_i (q̃'_i^2+q̃^2) ϕ_i e^-η_i x̃,
where ϕ_i satisfies Σ_j F_ijϕ_j =0.
From Eq. (<ref>), δρ (x̃=0) ∝∑_i (q̃'_i^2+q̃^2) ϕ_i. In the low-loss limit, we find that if there exists an edge mode satisfying all three kinds of electron fluid boundary conditions, then δρ (x̃=0) = 0, since ∑_i (q̃'_i^2+q̃^2) ϕ_i is a linear combination of ∑_i F_3i^0 ϕ_i and ∑_i F_3i^+∞ ϕ_i. Conversely, if a propagating edge mode satisfies δρ (x̃=0) = 0, then this mode can exist under all three kinds of electron fluid boundary conditions.
This can also be understood from the continuity equation alone. Combining the Fourier transform of Eq. (<ref>) with respect to x multiplied by k (the momentum corresponding to x), the relation δρ (x̃=0) ∝lim_|k|→∞ k f(k) <cit.>, and the electron fluid boundary conditions, we arrive at the same conclusion as in the previous paragraph.
§ SIMULATION DETAILS
In this appendix, we present the graphene parameters employed in the topological hydrodynamic circulator simulations. We also provide a supplementary video for the time-domain simulations of optical N-plasmons in the supplementary materials. Fig. <ref>(b) corresponds to the simulation results at t̃≈ 34.
Here, we also demonstrate that the topological circulator has a robust performance in the presence of large dissipation. We consider the normal viscosity ν=9.9× 10^-4 m^2/s and damping rate γ̃=γ / ω_c = 0.0176. In this case, we simulate the performance of the topological hydrodynamic circulator and show the results in Fig. <ref>. Boundaries of the circulator are taken to have finite slip length. Here, we can find that although the optical N-plasmons experience dissipation in the propagation process, they are still unidirectional and immune to back-scattering at the boundary defect in port 2. A video for the simulations of optical N-plasmons with large dissipation is also provided in the supplementary materials.
§ OPTICAL N-PLASMONS IN THE OPAQUE SURROUNDING MEDIUM
In this appendix, we consider the dielectric materials surrounding the electron fluid in N=2 phase to be opaque with negative dielectric constant ε. Distinct from the transparent medium case where the bulk bandgap is always open for all q̃, the bulk bandgap can be closed at some q̃ points for negative ε with small |ε| values and reopened at all q̃ when |ε| is large. We distinguish these two regimes by ε<0 and ε≪ 0. In Fig. <ref>(a,b), we show the dispersions of bulk magneto plasmons (magenta curve) and optical N-plasmons (cyan curve) and the profile of δρ(x̃) in the ε<0 case. It is worth noting that optical N-plasmons will persist in the bandgap close to q̃=0 points with a reversed direction. It is hard to excite the optical N-plasmons alone in this case since its frequency range is embedded in bulk bands. The reversal of the optical N-plasmon propagation direction is fundamentally different from the reversal of CEMP directions discussed in section <ref>, where bandgap closure never happens at any q̃ point. For the ε≪ 0 case, as is shown in Fig. <ref>(c,d), the bulk bandgap is reopened, and optical N-plasmons will have the same direction as in the ε > 0 case.
|
http://arxiv.org/abs/2306.06101v1
|
20230609175935
|
Prodigy: An Expeditiously Adaptive Parameter-Free Learner
|
[
"Konstantin Mishchenko",
"Aaron Defazio"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"math.OC",
"stat.ML"
] |
Prodigy: An Expeditiously Adaptive Parameter-Free Learner
Konstantin Mishchenko  Aaron Defazio
=========================================================
We consider the problem of estimating the learning rate in adaptive methods, such as Adagrad and Adam. We describe two techniques, Prodigy and Resetting, to provably estimate the distance to the solution D, which is needed to set the learning rate optimally. Our techniques are modifications of the D-Adaptation method for learning-rate-free learning. Our methods improve upon the convergence rate of D-Adaptation by a factor of 𝒪(√(log(D/d_0))), where d_0 is the initial estimate of D. We test our methods on 12 common logistic-regression benchmark datasets, VGG11 and ResNet-50 training on CIFAR10, ViT training on Imagenet, LSTM training on IWSLT14, DLRM training on Criteo dataset, VarNet on Knee MRI dataset, as well as RoBERTa and GPT transformer training on BookWiki. Our experimental results show that our approaches consistently outperform D-Adaptation and reach test accuracy values close to that of hand-tuned Adam.
§ INTRODUCTION
Optimization is an essential tool in modern machine learning, enabling efficient solutions to large-scale problems that arise in various domains, such as computer vision, natural language processing, and reinforcement learning. One of the key challenges is the selection of appropriate learning rates, which can significantly impact the convergence speed and the quality of the final solution. Learning-rate tuning has been particularly challenging in applications where there are multiple agents that use their own optimizer. For instance, when training Generative Adversarial Networks (GANs) <cit.>, there are two neural networks with different architectures. In federated learning, tuning is even more challenging <cit.>, since there might be billions of devices <cit.>, each optimizing their objective locally. Another example is Neural Architecture Search (NAS) <cit.>, where the goal is to find the best neural network architecture automatically by training a lot of networks and evaluating them on a validation set. In cases like that, it becomes very expensive to manually tune the learning rate.
In recent years, "parameter-free" adaptive learning rate methods <cit.> have gained considerable attention due to their ability to automatically adjust learning rates based on the problem structure and data characteristics. Among these, the D-Adaptation method, introduced by <cit.>, has emerged as a promising practical approach for learning-rate-free optimization.
D-Adaptation works by maintaining a lower bound on the initial distance to solution D=‖ x_0-x_*‖, for any x_* in the solution set of min_x∈ℝ^p f(x). In practice this lower bound increases rapidly during the course of optimization, plateauing to a value close to the true D.
This D quantity is the key unknown constant needed to set the learning rate for non-smooth optimization methods, forming the numerator of the step size:
γ_k+1=D/√(∑_i=0^k‖ g_i‖ ^2), where D=‖ x_0-x_*‖,
and the denominator is based on the Adagrad step size <cit.>. The Gradient Descent form of D-Adaptation simply plugs in the current lower bound at each step in place of D. This simple approach can be applied to estimate the step size in Adam <cit.>, which yields state-of-the-art performance across a wide-range of deep learning problems. <cit.> also show that asymptotically, D-Adaptation is as fast as specifying the step size using the true D (up to small constant factors).
In this paper, we present two novel modifications to the D-Adaptation method that enhance its worst-case non-asymptotic convergence rate. By refining the algorithm's adaptive learning rate scheme, we achieve improved performance in terms of convergence speed and solution quality. To validate our proposed modifications, we establish a lower bound for any method that adapts to the distance-to-solution constant D. We show that our improved methods are worst-case optimal up to constant factors among methods with exponentially bounded iterate growth.
We then conduct extensive experiments that consistently demonstrate that the improved D-Adaptation methods adapt the learning rate much faster than the standard D-Adaptation, leading to enhanced convergence rates and better optimization outcomes.
§ PRODIGY APPROACH
To understand how we can improve upon D-Adaptation, let us take a closer look at some nuggets in its analysis. In D-adapted Dual Averaging, the gradient at iteration k is scaled with weight λ_k. This leads to the error term
D-adaptation error=∑_k=0^n λ_k^2γ_kg_k^2.
The theory then proceeds to upper bound this sum using the largest of all the λ_k's, i.e., the upper bound λ_k≤λ_n. This, however, is quite pessimistic: since λ_k is set to d_k, λ_n can be as large as D while λ_k can be as small as d_0. Therefore, replacing λ_k^2 with λ_n^2 can introduce a multiplicative error of D^2/d_0^2 in this term.
We take a different approach and instead handle the error term using modified Adagrad-like step sizes. In the Adagrad theory, the error term does not have any λ_k^2 factors, which is exactly why Adagrad places √(∑_i=0^kg_i^2) in the step-size denominator. The required modification is then obvious: since the error terms are now d_i^2g_i^2 instead of g_i^2, the new adaptive step size should be
γ_k+1 = 1/√(∑_i=0^k d_i^2g_i^2)
for the Dual Averaging algorithm and
η_k = d_k^2/√(∑_i=0^k d_i^2g_i^2)
for the Gradient Descent algorithm. This way, we can still control the error term of D-Adaptation but the obtained step size is provably larger since d_k is non-decreasing. For instance, for Gradient Descent, we have
d_k^2/√(∑_i=0^k d_i^2g_i^2)≥d_k/√(∑_i=0^k g_i^2).
Having larger step sizes while preserving the main error term is the key reason why the new Algorithms converge, as we show below, with a faster rate.
Notice, however, that the methods might still be slow because the denominator in the step size might grow too large over time. To remedy this, we introduce a modification for the Gradient Descent step size by placing an extra weight λ_k next to the gradients:
η_k
=d_k^2 λ_k/√(d_k^2 G^2 + ∑_i=0^kd_i^2λ_i^2‖ g_i‖ ^2).
In fact, the modified step size might even increase between iterations, whereas the Adagrad step size always decreases. We will show that as long as λ_k does not grow too quickly, the worst-case convergence rate is almost the same.
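To make the step-size rules above concrete, the following minimal Python sketch implements the weighted Gradient Descent variant on a subgradient oracle. The η_k formula is taken from the text; the d̂ estimate, d̂_n+1 = ∑_k≤ n η_k⟨g_k, x_0 - x_k⟩/‖x_n+1 - x_0‖, and the running-max update d_k+1 = max(d_k, d̂_k+1) are the forms used in the analysis in the appendix, so treat this as a sketch of the method rather than a transcription of the algorithm box.

```python
import numpy as np

def prodigy_gd(grad, x0, n_steps, G, d0=1e-6, weight=lambda k: 1.0):
    """Sketch of the weighted GD variant: grad is a subgradient oracle,
    G a Lipschitz bound, weight(k) the lambda_k sequence (1.0 = unweighted)."""
    x0 = np.asarray(x0, dtype=float)
    x, d = x0.copy(), d0
    denom = 0.0      # sum_i d_i^2 lambda_i^2 ||g_i||^2
    numer = 0.0      # sum_k eta_k <g_k, x_0 - x_k>, numerator of the dhat estimate
    iterates, etas = [], []
    for k in range(n_steps):
        g = grad(x)
        lam = weight(k)
        denom += d**2 * lam**2 * np.dot(g, g)
        eta = d**2 * lam / np.sqrt(d**2 * G**2 + denom)   # step size from the text
        numer += eta * np.dot(g, x0 - x)
        x = x - eta * g
        dist = np.linalg.norm(x - x0)
        if dist > 0:
            d = max(d, numer / dist)                      # d never decreases
        iterates.append(x.copy())
        etas.append(eta)
    # eta_k-weighted average iterate, as used in the convergence analysis
    return np.average(iterates, axis=0, weights=etas), d
```

On a toy non-smooth problem such as f(x)=|x-3| with grad = lambda x: np.sign(x - 3.0), x0 = np.zeros(1) and G = 1, the returned d estimate grows from d_0 toward the true distance to the solution without any manual tuning.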
Algorithm <ref> and Algorithm <ref> give Gradient Descent and the Dual Averaging variants of our new method. In contrast to Adagrad, they estimate the product of D and G in the denominator, so we call the proposed technique Prodigy. We give the following convergence result for Algorithm <ref>:
Assume f is convex and G-Lipschitz. Given any weights 1≤λ_0≤…≤λ_n, the functional gap of the average iterate of Algorithm <ref> converges as
f(x̂_n) - f_*
≤√(2λ_n)DG·(2d_n+1 + d_n+1log(1+∑_k=0^n λ_k^2))/√(∑_k=0^n λ_k d_k^2).
Notice that we have the freedom to choose any non-decreasing sequence λ_k as long as the right-hand side is decreasing. This allows us to put much more weight on the recent gradients and get more reasonable step sizes. For instance, we can choose λ_k = k^p, where p>0 and since ∑_k=0^k k^2p=𝒪(k^2p+2), it would result in an extra factor of 1+p in the numerator due to the log term. The denominator, on the other hand, would increase as well, giving us a trade-off that depends on the values of d_k's. We note that weights λ_k=k have appeared previously in Accelegrad <cit.>, which combines Adagrad step sizes with momentum.
To understand why this improves the convergence rate, consider the following lemma, which we prove in the appendix. The lemma presents an upper bound on the terms related to the d_k sequence in the right-hand side of equation <ref>.
Let d_0≤ d_1≤…≤ d_N be positive
numbers and assume N≥2log_2+(d_N/d_0), where log_2+(·) = 1 + log_2(·). Then,
min_t<Nd_t+1/√(∑_k=0^td_k^2)≤4√(log_2+(d_N/d_0))/√(N).
In contrast to the bound in <cit.>, we bound d_t+1/√(∑_k=0^td_k^2) instead of d_t+1/∑_k=0^td_k. This is the reason why the overall guarantee improves by a factor of √(log_2(D/d_0)). For instance, if we set λ_k=1 for all k and substitute the bound from Lemma <ref>, we get the convergence rate
f(x̂_t) - f_*
= 𝒪( GDlog(n+1)√(log_2+(D/d_0))/√(n)).
where t≤ n is chosen as the argmin from Lemma <ref>. Furthermore, for arbitrary increasing positive weights, we get the following guarantee by applying Lemma <ref> directly to the bound in Theorem <ref>:
f(x̂_t) - f_*
=𝒪( GDlog(n+1)√(log_2+(λ_n+1D/d_0))/√(n)log(∑_k=0^nλ_k^2)).
Even though our theory does not guarantee that it is beneficial to use increasing weights λ_k, this result is, to the best of our knowledge, new for Adagrad-like methods. It allows for a wide range of choices in λ_k. For example, if we set λ_k = β_2^-k^p with β_2<1 and p<1/3, then the method is still guaranteed to converge at the rate of 𝒪(1/n^(1-3p)/2). This is of particular interest when we study Adam-like methods, see Section <ref> for a discussion.
The logarithmic term log(n+1) is, however, not necessary and only arises due to the use of the Gradient Descent update. The Dual Averaging update provides a tighter guarantee, as given in the next theorem.
Let f be a convex and G-Lipschitz function. For Algorithm <ref>, it holds that:
f(x_t) - f_*
≤4GD√(log_2+(D/d_0))/√(n),
where t=argmin_k≤ n d_k+1/√(∑_i=0^k d_i^2) and log_2+(·) = 1 + log_2(·).
Comparing this with the previous rate, the only difference is the removed log(n+1) factor. This improvement, however, is mostly theoretical as Gradient Descent typically performs better in practice than Dual Averaging. We also note that we do not have a convergence result for Algorithm <ref> with weights other than λ_k=d_k^2. This is due to the fact that the DA analysis requires the step size to be monotonically decreasing, so we cannot place an extra weighting factor in the numerator of γ_k+1.
§ RESETTING APPROACH
A somewhat unsettling observation about Prodigy is that the convergence rate for Gradient Descent variant is worse than for Dual Averaging. Practically, Dual Averaging typically works worse, most likely due to using the initial iterate x_0 even for large k. It would be interesting to see if a method closer to Gradient Descent would achieve the same convergence rate as Dual Averaging by sometimes forgetting the initial value of x_0. And this is exactly what Algorithm <ref> does.
Algorithm <ref> describes a variant of D-Adaptation where the Dual Averaging process is reset whenever the current d_k estimate increases by more than a factor of 2. The resetting process re-centers the averaging process around the current iterate x_j+1. This resetting process has a number of other effects:
* The step-size sequence γ is also reset, resulting in larger steps right after the reset.
* The convergence of the method is proven with respect to an unweighted average of the iterates, rather than a weighted average.
* Since the quantities tracked to compute d̂ are also reset, the value of d̂ often will increase more rapidly than it can when using the standard D-Adaptation estimate.
This resetting variant has the advantage of being significantly simpler to analyze in the non-asymptotic case than standard D-Adaptation or Prodigy. This makes it well suited to be used in other settings.
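The control flow is easy to sketch. The Python snippet below is a rough rendering (the algorithm box itself is not reproduced here, so the details are assumptions): within an epoch it runs D-Adapted Dual Averaging anchored at the epoch's starting point, and it restarts the epoch, re-centering around the current iterate and clearing the accumulated state, as soon as the running lower bound on D exceeds twice its value at the start of the epoch. The within-epoch d̂ estimate is the expression appearing in the appendix lemmas.

```python
import numpy as np

def dadapt_resetting(grad, x0, n_steps, G, d0=1e-6):
    x_anchor = np.asarray(x0, dtype=float)   # re-centred at every reset
    x = x_anchor.copy()
    d = d_start = d0
    s = np.zeros_like(x_anchor)              # dual-averaged gradient sum (per epoch)
    grad_sq = 0.0                            # sum of ||g_k||^2 within the epoch
    weighted_sq = 0.0                        # sum of gamma_k ||g_k||^2 within the epoch
    x_sum, count = np.zeros_like(x_anchor), 0
    for _ in range(n_steps):
        g = grad(x)
        gamma_k = d / np.sqrt(G**2 + grad_sq)
        s += g
        grad_sq += np.dot(g, g)
        weighted_sq += gamma_k * np.dot(g, g)
        gamma_next = d / np.sqrt(G**2 + grad_sq)
        x = x_anchor - gamma_next * s        # dual-averaging step
        x_sum += x; count += 1               # unweighted average of the iterates
        s_norm = np.linalg.norm(s)
        if s_norm > 0:
            d_hat = (gamma_next * s_norm**2 - weighted_sq) / (2 * s_norm)
            d = max(d, d_hat)
        if d > 2 * d_start:                  # reset: forget x_0, keep only d
            x_anchor, d_start = x.copy(), d
            s = np.zeros_like(x_anchor)
            grad_sq = weighted_sq = 0.0
    return x_sum / count, d
```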
Under the assumption of convex and G-Lipschitz f, we have for Algorithm <ref>:
f(x̅_n)-f_*≤13D√((G^2+∑_k=0^n‖ g_k‖ ^2)log_2+(D/d_0))/(n+1).
If we upper-bound the gradient sum as (n+1)G^2, this results in the same 𝒪(DG√(log_2+(D/d_0))/√(n+1)) rate as was established for the Dual Averaging variant of Prodigy.
§ COMPLEXITY LOWER BOUND
A lower complexity bound can be established for the Lipschitz-Convex complexity class via a simple 1-dimensional resisting oracle. The bound depends on the "scale" of the initial step of the algorithm, which is the size of the initial step from x_0 to x_1. This initial step is g_0 · d_0/√(G^2 + ‖ g_0‖ ^2) for D-Adaptation, and can be thought of as an algorithm-agnostic measure of d_0.
Consider any Algorithm for minimizing a convex G-Lipschitz function
starting from x_0 at the origin, which has no knowledge of problem constants.
At each iteration k, the algorithm may query the gradient at a
single point x_k. Then for any sequence of x_1,…,x_n,
there exists a convex Lipschitz problem f and constant D≥‖ x_0-x_*‖
for all minimizers x_* of f such that:
min_k≤ nf(x_k)-f_* ≥DG√(log_2log_2(D/x_1))/2√(n+1).
<cit.> give a method with convergence rate:
min_k≤ nf(x_k)-f_*=𝒪(DGlog_2log_2(D/d_0)/√(n+1)),
which is worse by a square root factor than this lower bound. It is unclear if their method can be improved upon or if our lower complexity bound is not tight.
Lower complexity bounds for the average regret in the more general
online learning setting also apply here. They are of the form <cit.>:
1/n∑_k=0^n⟨ g_k,x_k-x_*⟩ = Ω(DG√(log_2(D/ϵ))+ϵ/√(n+1)).
where ϵ is a “user-specified constant” playing a similar role to x_1. Bounds on the average regret directly bound function value sub-optimality as
f(x̅)-f_*≤1/n+1∑_k=0^n[f(x_k)-f_*]≤1/n+1∑_k=0^n⟨ g_k,x_k-x_*⟩,
where x̅=1/n+1∑_k=0^nx_k.
§.§ Exponentially Bounded Algorithms
The lower bound construction above applies to algorithms generating sequences of iterates growing arbitrary fast. We can obtain an interesting class of algorithms, which contains our two D-Adaptation variants, by restricting the rate of growth.
An optimization algorithm is exponentially bounded if there exists a constant d_0, so that for any sequence of G-bounded gradients it returns a sequence of iterates such that for all k:
‖ x_k-x_0‖≤2^kd_0.
D-Adaptation, DoG, Prodigy and D-Adaptation with resetting are exponentially bounded.
Our new D-Adaptation variants are optimal among exponentially bounded algorithms for this complexity class:
Consider any exponentially bounded algorithm for minimizing a convex G-Lipschitz function
starting from x_0, which has no knowledge of the problem constants G and D. There exists a fixed gradient oracle such that for any sequence
of x_1,…,x_n, there exists a convex Lipschitz problem
f with G=1 and ‖ x_0-x_*‖≤ D for all minimizing points x_*, consistent with the gradient oracle such that:
min_k≤ nf(x_k)-f_*≥DG√(log_2(D/x_1))/2√(n+1).
Using the simple construction from Theorem <ref>,
we show in Appendix <ref> that the
class of exponentially bounded methods (potentially with an exponent other than 2) covers all Gradient Descent approaches that
use an estimate of d_k≤ cD for some constant c, and use a step
size γ_k≤ d_k/G. So the only way to achieve a loglog
dependence on d_0 is via using step sizes that overshoot the
standard D/G step size by more than a fixed constant factor. Although using larger step sizes is not problematic
for Lipschitz functions, it comes with the risk of causing training
divergence when applied to functions whose gradients are only locally
bounded by G, which is common in machine learning settings.
§ RELATED WORK
In this section, we review the major classes of techniques for optimizing convex Lipschitz functions with some level of problem parameter independence.
The Polyak step size <cit.> trades the knowledge of D for f_*, achieving the optimal convergence rate without additional log factors. Stable convergence requires accurate f_* estimates. A restarting scheme converges within a multiplicative log factor of the optimal rate <cit.>. There has been substantial recent research on modifications of the Polyak step size to make it better suited to machine learning tasks <cit.>, but as of yet these have not seen widespread adoption.
Coin-betting <cit.> is a family of approaches from the online learning setting which are also applicable for convex non-smooth optimization. They work by establishing a relationship by duality between regret minimization and wealth-maximization. Existing approaches for wealth-maximization can then be mapped to algorithms for regret minimization. Coin-betting approaches give convergence rates for an equal-weighted average of the iterates of the form:
f(x̅_n)-f_*=𝒪(DG√(log(1+D/d_0))/√(n+1)).
Standard D-Adaptation obtains asymptotic rates without the log factor, but was otherwise (theoretically) slower in finite time, as it had a log(·) rather than a √(log(·)) dependence on D/d_0:
f(x̂_n)-f_*≤16log_2+(d_n+1/d_0)/n+1D√(∑_k=0^n‖ g_k‖ ^2)≤16DGlog_2+(D/d_0)/√(n+1).
Our two new variants close this gap, giving the same sqrt-log dependence as coin betting.
The DoG method <cit.>, based on the bisection approach of <cit.>, is the only other approach that we are aware of that estimates D in an online fashion. DoG estimates D by r̅_k:
r̅_k=max_i≤ k‖ x_i-x_0‖.
<cit.> use this quantity as a plug-in estimate for the numerator of the step size, similar to D-Adaptation's approach. This approach can diverge in theory, but with additional modifications to the step size, the "tamed" T-DoG method is shown to converge. It has a log_+(D/d_0) dependence on the initial sub-optimality of the D estimate, so our approach improves on this dependence by a √(log_+(D/d_0)) factor.
<cit.> proposed AdGD, a method for convex optimization that does not require any hyperparameters and has a rate that is at least as good as that of the optimally tuned Gradient Descent. However, AdGD requires the objective to be locally smooth, which hinders its use in many practical problems. <cit.> partially addressed this gap by proposing a proximal extension, but the case of non-smooth differentiable functions has remained unstudied.
§ DERIVING ADAM-LIKE STEP SIZES
To derive an Adam-like method, which should use exponential moving average for the step size, we want to approximate Adam's update of the exponential moving average of squared gradients:
v_k+1 = β_2 v_k + (1-β_2)g_k^2
= (1-β_2) ∑_i=0^k β_2^k-i g_i^2,
where g_k^2 is the coordinate-wise square of the gradient g_k. We can achieve this using exponential weights, λ_k = β_2^-k/2, which after substituting the definition of η_k give us the following identity:
d_k^4/η_k^2
= d_k^2/λ_k^2G^2 + ∑_i=0^kd_i^2λ_i^2/λ_k^2g_i^2
= d_k^2/λ_k^2G^2 + d_k^2g_k^2 + ∑_i=0^k-1β_2^k-id_i^2g_i^2.
This can be seen as computing an exponential moving average of d_k g_k rather than g_k itself. This is our first observation. In addition, in Appendix <ref>, we provide a coordinate-wise version of Algorithm <ref> and study its convergence properties. Based on the theory presented there, the denominator for d̂_k+1 should use the ℓ_1 norm of the weighted gradient sum. Thus, combining this insight with the design of Algorithm <ref>, we obtain the following expression for the Adam estimate of D:
d̂_k+1
= ∑_i=0^k λ_id_i^2⟨ g_i, x_0 - x_i⟩/‖∑_i=0^k λ_id_i^2 g_i‖_1
= ∑_i=0^kβ_2^(k-i)/2d_i^2 ⟨ g_i, x_0 - x_i⟩/‖∑_i=0^kβ_2^(k-i)/2d_i^2 g_i‖_1.
The update uses an exponential moving average as well, although it is more conservative as it uses √(β_2) instead of β_2. Note that there is an extra factor of (1-β_2) in the update for v_k, which can be optionally compensated for by using the bias correction discussed by <cit.>. These update rules are summarized in Algorithm <ref>. This is the main algorithm that we study numerically in the next section.
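A compact Python sketch of these updates is given below. The v and d̂ recursions follow the formulas above; the first-moment EMA and the final parameter update (an Adam-style step rescaled by d_k) are assumptions of this sketch, since the algorithm box itself is not reproduced here, and bias correction is omitted.

```python
import numpy as np

def prodigy_adam_step_sketch(grad, x0, n_steps, lr=1.0, beta1=0.9, beta2=0.999,
                             eps=1e-8, d0=1e-6):
    x = np.asarray(x0, dtype=float)
    x_init = x.copy()
    m = np.zeros_like(x)                 # EMA of d_k g_k (assumed, Adam-style)
    v = np.zeros_like(x)                 # EMA of (d_k g_k)^2, as derived above
    r = 0.0                              # sum_i beta2^{(k-i)/2} d_i^2 <g_i, x_0 - x_i>
    s = np.zeros_like(x)                 # sum_i beta2^{(k-i)/2} d_i^2 g_i
    d = d0
    for _ in range(n_steps):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * d * g
        v = beta2 * v + (1 - beta2) * (d * g) ** 2
        r = np.sqrt(beta2) * r + d**2 * np.dot(g, x_init - x)
        s = np.sqrt(beta2) * s + d**2 * g
        d_hat = r / (np.abs(s).sum() + eps)          # ell_1 norm in the denominator
        d = max(d, d_hat)                            # assumed running-max update
        x = x - lr * d * m / (np.sqrt(v) + d * eps)  # assumed Adam-style step
    return x, d
```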
§ EXPERIMENTS
We test our methods on convex logistic regression as well as deep learning problems.
The Prodigy method is used as presented in Algorithm <ref> in all deep learning experiments.
Logistic regression. For the convex setting, we ran a set of classification experiments. For each dataset, we used the multi-margin loss <cit.>, a multi-class generalization of the hinge loss. This non-smooth loss results in bounded gradients, which is required by our theory. We perform full-batch rather than stochastic optimization, for two reasons. Firstly, it matches the assumptions of our theory. Secondly, fast learning rate adaptation is more crucial for full-batch optimization than for stochastic optimization, as fewer total steps will be performed.
We performed 1,000 steps for each dataset, using a randomized x_0, and plot the results over 10 seeds. We ran both the DA and SGD variants of each method. Each plot shows the accuracy of the average iterate for each method. Figure <ref> shows that both of our proposed algorithms greatly outperform regular D-Adaptation. Our weighted SGD variant of D-Adaptation is consistently faster across each dataset. Additionally, it adapts faster than the DoG method <cit.> on 10 of the 12 problems.
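A minimal PyTorch rendering of this setup is sketched below; the tensors X and y, the helper make_opt, and the unweighted running average of the iterates are placeholders of the sketch (the experiments plug the D-Adaptation/Prodigy step-size rules into the optimizer slot).

```python
import torch
import torch.nn as nn

def full_batch_multimargin(X, y, n_classes, n_steps=1000, make_opt=None):
    """Full-batch training of a linear classifier with the multi-margin loss."""
    model = nn.Linear(X.shape[1], n_classes)
    loss_fn = nn.MultiMarginLoss()              # multi-class hinge loss
    if make_opt is not None:
        opt = make_opt(model.parameters())
    else:
        opt = torch.optim.SGD(model.parameters(), lr=1.0)
    avg = {k: torch.zeros_like(v) for k, v in model.state_dict().items()}
    for step in range(n_steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()         # one gradient per pass over all data
        opt.step()
        with torch.no_grad():                   # running average of the iterates
            for k, v in model.state_dict().items():
                avg[k] += (v - avg[k]) / (step + 1)
    model.load_state_dict(avg)
    with torch.no_grad():
        acc = (model(X).argmax(dim=1) == y).float().mean().item()
    return acc
```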
CIFAR10. For neural network experiments[The PyTorch code of our optimizer is available at <https://github.com/konstmish/prodigy>], we consider training on CIFAR10 <cit.> with batch size 256, where D-Adapted Adam has a gap of a few percent compared to the standard Adam. We use cosine annealing with initial step size 1 for all Adam-based methods and initial step size 10^-3 for Adam itself. The considered networks are VGG11 <cit.> and ResNet-50 <cit.>. To simplify the experiment, we do not use weight decay, so both networks slightly overfit and do not reach high test accuracy values. All methods were run using the same 8 random seeds.
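In code, the adaptive methods are used as drop-in replacements for Adam; the snippet below sketches the intended pattern (the package and class names prodigyopt/Prodigy and the exact constructor arguments are assumptions based on the repository linked above).

```python
import torch
from prodigyopt import Prodigy  # package/class name assumed; see the repository above

def make_optimizer(model, total_epochs, use_prodigy=True):
    if use_prodigy:
        # Prodigy-style methods take "lr" = 1.0: the method supplies the scale itself.
        opt = Prodigy(model.parameters(), lr=1.0, weight_decay=0.0)
    else:
        opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.0)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=total_epochs)
    return opt, sched
```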
We show the results in Figure <ref>. As we can see, this gap is closed by Prodigy, which is achieved by the larger estimates of the step size. We note that on most problems, where D-Adapted Adam was shown to perform well in <cit.>, Prodigy, in contrast, overestimated the step size. Therefore, we view Prodigy and D-Adaptation as methods that are complimentary to each other. It is also worth noting that we did not test scaling d_k with an extra constant factor such as 0.5 or 2.
We also note that DoG produces larger step size estimate, but this is counterbalanced by the larger denominator in DoG. We also tried to modify DoG to use Adam-like step sizes but our heuristic modification diverged on this problem. We also observed that among DoG and its layer-wise version, L-DoG, there is no clear winner as the former performed better on VGG11 and the latter was better when training ResNet-50.
nanoGPT transformer.
We also train a 6-layer transformer network from nanoGPT[<https://github.com/karpathy/nanoGPT>] on the Shakespeare dataset. For all methods, we use batch size 256, clip the gradients to have norm not exceeding 1, and use float16 numbers. We use AdamW as given in the repository, with β_2=0.99, weight decay 0.1, stepsize 10^-3, and cosine annealing with warmup over 100 steps. The same weight decay value and cosine annealing are used for Prodigy and D-Adapted Adam, except that the latter two methods use stepsize 1. We accumulate minibatches of size 12 into a batch of size 480. We tuned the weight decay for DoG and L-DoG and found the value 10^-4 to work well for this problem. We ran each method with 8 random seeds and report the average as well as one-standard-deviation confidence intervals.
See Figure <ref> for the results. In terms of the test loss, all methods are roughly equivalent except that DoG and L-DoG were slower to reach the best value of roughly 1.5. For the train loss, Prodigy was on par with tuned AdamW and slightly better than D-Adapted Adam. Surprisingly, the estimated step size in Prodigy was very consistent across the 8 random seeds.
§.§ Large-scale Adam experiments
To validate the performance on large-scale practical applications directly against D-Adaptation, we ran the subset of the experiments from <cit.> that use the Adam optimizer. Methods without coordinate adaptivity are known to underperform on these problems and so we exclude SGD and DoG from these comparisons.
LSTM, RoBERTa, GPT, DLRM, VarNet. On the smallest problem of LSTM training, Prodigy appears to converge significantly faster in training loss and slightly overfits in test loss compared to the baselines. For RoBERTa <cit.> and GPT <cit.> training on BookWiki, Prodigy matches the performance of the baseline with only negligible differences. For the application problems, DLRM <cit.> on the Criteo Kaggle Display Advertising dataset, and fastMRI VarNet <cit.>, Prodigy again closely matches the baselines.
ViT training. <cit.> present a negative result for training vision transformer <cit.>, where D-Adaptation significantly underperforms tuned Adam. We investigated this effect, and we were able to reproduce this gap across a wide range of weight-decay values, although this problem has high run-to-run variance of 1-2% of test accuracy, which makes comparison difficult. Using decay 0.05 instead of 0.1 significantly improved performance of each variant, and so we present results for both the baselines and Prodigy at that value. We can see that Prodigy almost closes the gap between tuned Adam and D-Adaptation, giving a test accuracy of 74.63% compared to 75.4% for Adam, and more than 2% higher than D-Adaptation. See Figure <ref> for the results.
§ CONCLUSION
We have presented two new methods for learning rate adaptation that improve upon the adaptation rate of the state-of-the-art D-Adaptation method. Prodigy, a form of weighted D-Adaptation, was shown to adapt faster than other known methods across a range of experiments. The second method, D-Adaptation with resetting, is shown to achieve the same theoretical rate as Prodigy with a much simpler theory than Prodigy or even D-Adaptation.
§ ANALYSIS OF PRODIGY
As a reminder, we use the notation log_2+(a) = 1 + log_2(a) to denote the logarithm that is lower bounded by 1 for any a≥ 1.
§.§ Useful Lemmas
For any sequence of nonnegative real numbers a_0,…, a_n
√(∑_k=0^na_i)≤∑_k=0^na_k/√(∑_i=0^ka_i)≤ 2√(∑_k=0^na_i).
For completeness, we prove both statements here.
Notice that for any α∈[0, 1], it holds 1 - √(1-α)≤α≤ 2(1 - √(1-α)). Substituting α=a_k/∑_i=0^k a_i gives
1 - √(1-a_k/∑_i=0^k a_i)≤a_k/∑_i=0^k a_i≤ 2(1 - √(1-a_k/∑_i=0^k a_i)).
If we multiply all sides by √(∑_i=0^k a_i), the inequality above becomes
√(∑_i=0^k a_i) - √(∑_i=0^k-1 a_i)≤a_k/√(∑_i=0^k a_i)≤ 2(√(∑_i=0^k a_i) - √(∑_i=0^k-1 a_i)).
Summing over k = 0,…, n, we get the stated bound.
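The two-sided bound is easy to check numerically; for instance, the short snippet below verifies it on a random positive sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random(1000) + 1e-3           # positive sequence a_0, ..., a_n
partial = np.cumsum(a)                # partial sums sum_{i<=k} a_i
middle = np.sum(a / np.sqrt(partial))
total = np.sqrt(partial[-1])
assert total <= middle <= 2 * total   # the two-sided bound of the lemma
print(total, middle, 2 * total)
```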
§.§ Proof of Lemma <ref>
Following the proof in <cit.>,
we define K=⌈log_2(d_N/d_0)⌉
and n=⌊N/K⌋. Consider a partitioning
of the sequence t≤ N into half-open intervals I_k=[nk,n(k+1))
for k=0 to K-1. We want to show that there is at least one interval
such that d_k changes by at most a factor of 2 on that interval. We
will use proof by contradiction.
Suppose that for all intervals, d_nk<1/2d_n(k+1). Then
d_k at least doubles in every interval, and so:
d_0<1/2d_n<1/4d_2n…<1/2^Kd_nK<1/2^Kd_N,
which implies that d_N/d_0>2^K and so K<log_2(d_N/d_0)
which contradicts our definition K=⌈log_2(d_N/d_0)⌉.
Therefore, there exists some k̂ such that d_nk̂≥1/2d_n(k̂+1). We can now proceed with proving
the Lemma by considering the summation over interval I_k̂
only:
min_t<Nd_t+1/√(∑_k=0^td_k^2) ≤d_n(k̂+1)/√(∑_k=0^n(k̂+1)-1d_k^2)≤d_n(k̂+1)/√(∑_k=nk̂^n(k̂+1)-1d_k^2)≤d_n(k̂+1)/√(∑_k=nk̂^n(k̂+1)-1d_nk̂^2)
=d_n(k̂+1)/√(nd_nk̂^2)≤d_n(k̂+1)/√(1/4nd_n(k̂+1)^2)=2/√(n)=2/√(⌊N/K⌋)
≤2/√(N/K-1)≤2/√(N/log_2(d_N/d_0)+1-1)=2√(log_2+(d_N/d_0))/√(N-log_2+(d_N/d_0))
N≥2log_2+(d_N/d_0)≤4√(log_2+(d_N/d_0))/√(N).
§.§ GD Analysis
Assume that d_0≤ D. Then, the estimate d_k in Algorithm <ref> satisfies d_k≤ D for all k.
By optimality of f_*, we have f(x_k) - f_*≥ 0, so
0
≤∑_k=0^n η_k (f(x_k) - f_*)
≤∑_k=0^n η_k ⟨ g_k, x_k - x_*⟩
= ∑_k=0^n η_k ⟨ g_k, x_0 - x_*⟩ + ∑_k=0^n η_k ⟨ g_k, x_k - x_0⟩.
Collecting the gradients in the first sum together and using Cauchy-Schwarz inequality, we obtain
0
≤∑_k=0^n η_k (f(x_k) - f_*)
≤⟨ x_0 - x_n+1, x_0 - x_*⟩+ ∑_k=0^n η_k ⟨ g_k, x_k - x_0⟩
≤‖ x_0 - x_n+1‖‖ x_0 - x_*‖ + ∑_k=0^n η_k ⟨ g_k, x_k - x_0⟩ .
Using the definition of d̂_n+1, this is equivalent to 0≤ (D - d̂_n+1)‖ x_0 - x_n+1‖, which implies d̂_n+1≤ D. Therefore, since d_0≤ D, we can show by induction d_n+1≤ D as well.
The following inequality holds for the iterates of Algorithm <ref>:
‖ x_0 - x_n+1‖≤ 2d_n+1 + 1/(2d_n+1)∑_k=0^nη_k^2‖ g_k‖^2.
Let us rewrite d̂_n+1 in a slightly different manner:
d̂_n+1x_n+1 - x_0 def=∑_k=0^n ⟨ x_k - x_k+1, x_0 - x_k ⟩
= ∑_k=0^n 1/2(x_k+1 - x_0^2 - x_k - x_k+1^2 - x_k - x_0 ^2 )
= 1/2x_n+1 - x_0^2 -1/2∑_k=0^n x_k - x_k+1^2 .
Combining this with the property d̂_n+1≤ d_n+1, we derive
1/2‖ x_n+1 - x_0‖ ^2-1/2∑_k=0^n‖ x_k - x_k+1‖ ^2
= d̂_n+1‖ x_n+1 - x_0‖≤ d_n+1‖ x_n+1 - x_0‖.
Applying inequality 2αβ≤α^2 + β^2 with α^2 = 2d_n+1^2 and β^2= 1/2x_n+1 - x_0^2 and plugging-in the bound above, we establish
2d_n+1x_n+1 - x_0 =2αβ≤α^2 + β^2
= 2d_n+1^2 + 1/2x_n+1 - x_0^2
≤ 2d_n+1^2 + d_n+1x_n+1 - x_0 + 1/2∑_k=0^nx_k - x_k+1^2.
Rearranging the terms, we obtain
d_n+1x_n+1 - x_0 ≤ 2d_n+1^2 + 1/2∑_k=0^nx_k- x_k+1^2
= 2d_n+1^2 + 1/2∑_k=0^nη_k^2g_k^2.
It remains to divide this inequality by d_n+1 to get the desired claim.
For any sequence of nonnegative numbers a_0, …, a_n and A>0, it holds
∑_k=0^na_k/A + ∑_i=0^ka_i≤log(A+∑_k=0^na_k) - log(A).
For any t>0 it holds 1/(1+t) ≤log(1+1/t). Substituting t=s_k-1/a_k, where s_k-1 = A + ∑_i=0^k-1 a_i, we get
1/(1+s_k-1/a_k)
= a_k/(a_k + s_k-1)
= a_k/(A + ∑_i=0^k a_i)≤log (1 + a_k/s_k-1)
= log(s_k) - log(s_k-1).
Summing this over k from 0 to n, we get
∑_k=0^na_k/A + ∑_i=0^ka_i ≤∑_k=0^n (log(s_k) - log(s_k-1))
= log(s_n) - log(s_0)
= log(A+∑_k=0^na_k) - log(A).
This is exactly what we wanted to prove.
Assuming the weights λ_0, …, λ_n are positive, it holds for the iterates of Algorithm <ref>:
∑_k=0^n d_k^4λ_k^2g_k^2/d_k^2G^2 + ∑_i=0^kd_i^2λ_i^2g_i^2≤ d_n^2log(1 + ∑_k=0^nλ_k^2).
The lemma follows straightforwardly from Proposition <ref> by substituting a_k = d_k^2/d_n^2λ_k^2g_k^2 for k from 0 to n:
∑_k=0^n d_k^4λ_k^2g_k^2/d_k^2G^2 + ∑_i=0^kd_i^2λ_i^2g_i^2 = d_n^2∑_k=0^n d_k^2/d_n^2λ_k^2g_k^2/G^2 + ∑_i=0^kd_i^2/d_k^2λ_i^2g_i^2
d_k≤ d_n≤ d_n^2∑_k=0^n d_k^2/d_n^2λ_k^2g_k^2/G^2 + ∑_i=0^kd_i^2/d_n^2λ_i^2g_i^2
(<ref>)≤d_n^2(log( G^2 + ∑_k=0^nd_k^2/d_n^2λ_k^2g_k^2) - log(G^2))
≤ d_n^2log(1 + ∑_k=0^nλ_k^2),
where in the last step we used d_k^2/d_n^2λ_k^2g_k^2≤λ_k^2 G^2.
Let us restate Theorem <ref>:
Given any weights 1≤λ_0≤…≤λ_n, the functional gap of the average iterate of Algorithm <ref> converges as
f(x̂_n) - f_*
≤√(2λ_n)DG·(2d_n+1 + d_n+1log(1+∑_k=0^n λ_k^2))/√(∑_k=0^n λ_k d_k^2).
The first steps in the proof follow the same lines as the theory in <cit.>, but we still provide them for completeness.
Firstly, let us continue developing the bound proved in the proof of Lemma <ref>:
∑_k=0^n η_k (f(x_k) - f_*)
≤x_0 - x_n+1D + ∑_k=0^n η_k ⟨ g_k, x_k - x_0⟩
= x_0 - x_n+1D + ∑_k=0^n ⟨ x_k - x_k+1, x_k - x_0⟩
= x_0 - x_n+1D + 1/2∑_k=0^n [x_k - x_k+1^2 + x_k - x_0^2 - x_k+1 - x_0^2]
≤x_0 - x_n+1D + 1/2∑_k=0^n x_k - x_k+1^2.
We upper bound the first term with the help of Lemma <ref>:
∑_k=0^n η_k (f(x_k) - f_*)
≤ 2Dd_n+1 + D/2d_n+1∑_k=0^nη_k^2g_k^2 + 1/2∑_k=0^n η_k^2g_k^2.
Since by Lemma <ref>, 1≤D/d_n+1, we can simplify it to
∑_k=0^n η_k (f(x_k) - f_*)
≤ 2Dd_n+1 + D/d_n+1∑_k=0^nη_k^2g_k^2
= 2Dd_n+1 + D/d_n+1∑_k=0^nd_k^4λ_k^2/d_k^2G^2 + ∑_i=0^k d_i^2λ_i^2g_i^2g_k^2
(<ref>)≤ 2Dd_n+1 + D/d_n+1d_n^2log(1+∑_k=0^nλ_k^2).
Using the convexity of f, we can apply Jensen's inequality on the iterate x̂_n to get
f(x̂_n) - f_*
≤1/∑_k=0^n η_k∑_k=0^nη_k(f(x_k) - f_*)
≤2Dd_n+1 + D/d_n+1d_n^2log(1+∑_k=0^nλ_k^2)/∑_k=0^n η_k
≤ D2d_n+1 + d_n+1log(1+∑_k=0^nλ_k^2)/∑_k=0^n η_k.
Notice that
η_k
= d_k^2λ_k/√(d_k^2G^2 + ∑_i=0^kd_i^2λ_i^2‖ g_i‖ ^2)≥d_k^2λ_k/G√(d_k^2 + ∑_i=0^kd_i^2λ_i^2)≥d_k^2λ_k/G√(2λ_n)√(∑_i=0^kd_i^2λ_i).
Sum over k from 0 to n and using λ_i ≤λ_n gives
∑_k=0^n η_k ≥1/√(2λ_n)G∑_k=0^n d_k^2λ_k/√(∑_i=0^kd_i^2λ_i)(<ref>)≥1/√(2λ_n)G√(∑_k=0^n d_k^2λ_k).
Hence,
f(x̂_n) - f_*
(<ref>)≤√(2λ_n)DGd_n+1/√(∑_k=0^n d_k^2λ_k)(2+ log(1+∑_k=0^nλ_k^2)).
Consider Algorithm <ref> with n≥ 2log_2(2D/d_0) and define t = argmin_k≤ n d_k+1/√(∑_i=0^k d_i^2). If we choose weights λ_k = 1, then it holds
f(x̂_t) - f_*
≤ 4√(2)DG(2 + log(n+2))√(log_2(2D/d_0))/√(n).
Substituting λ_k in the bound of Theorem <ref>, we get for any n
f(x̂_n) - f_*
(<ref>)≤√(2)DGd_n+1(2 + log(n + 2))/√(∑_k=0^n d_k^2).
Using the definition of t, the result of Lemma <ref> and the property d_n≤ D, we obtain
f(x̂_t) - f_*
≤√(2)DGmin_k≤ nd_k+1/√(∑_i=0^k d_i^2)(2+log(n + 2))
≤ 4√(2)DG(2 + log(n+2))√(log_2(2D/d_0))/√(n).
Choose any p≥ 0 and set the weights to be λ_k = (k+1)^p. Then,
f(x̂_n) - f_* = 𝒪(DG√(p+1)log(n+1)/√(n+1)).
Since the sequence d_0,d_1,… is non-decreasing and upper bounded by D, there exists an index n̂ such that d_k≤ 2 d_n̂ for any k≥n̂. Moreover, we have for n≥ 2(n̂ + 1)
∑_k=n̂^n λ_k
≥1/p+1((n+1)^p+1 - (n̂+1)^p+1)
≥1/2(p+1)(n+1)^p+1
and
∑_k=0^n λ_k^2
= ∑_k=1^n+1 k^2p≤∫_2^n+2x^2pdx
≤1/2p+1(n+2)^2p+1 - 1
≤ (n+2)^2p+1 - 1.
Let us plug this into the bound of Theorem <ref> for n≥ 2(n̂ + 1):
f(x̂_n) - f_*
≤√(2λ_n)DGd_n+1/√(∑_k=0^n d_k^2λ_k)(2+ log(1+∑_k=0^nλ_k^2))
≤2d_n̂√(2(n+1)^p)DG/√(d_n̂^2∑_k=n̂^nλ_k)(2 + (2p+1)log(n+2))
≤4√(p+1)DG/√(n+1)(2 + (2p+1)log(n+2)) = 𝒪(DG√(p+1)log(n+1)/√(n+1)),
which matches our claim.
Notice that the bound in Corollary <ref> does not depend on D/d_0. This is only possible asymptotically for a large enough k and a similar bound without weights was presented by <cit.>.
§.§ DA Analysis
Considering Algorithm <ref>, we have
‖ s_n+1‖≤2d_n+1/γ_n+1 + 1/(2d_n+1)∑_k=0^nγ_kλ_k^2‖ g_k‖^2.
When studying Dual Averaging, we need to introduce an extra sequence d̃_n that lower bounds d̂_n:
d̃_n+1 def= (γ_n+1‖ s_n+1‖ ^2-∑_k=0^nγ_kλ_k^2‖ g_k‖ ^2)/(2‖ s_n+1‖).
Let us show that d̂_n+1≥d̃_n+1 by comparing their numerators:
d̂_n+1s_n+1 =∑_k=0^n λ_k⟨ g_k, x_0 - x_k⟩
= ∑_k=0^n λ_kγ_k⟨ g_k,s_k⟩
= ∑_k=0^n γ_k⟨ s_k+1-s_k,s_k⟩
= ∑_k=0^n γ_k/2[s_k+1^2 - s_k+1-s_k^2 - s_k^2]
= γ_n/2s_n+1^2+ 1/2∑_k=0^n (γ_k - γ_k+1)s_k+1^2-1/2∑_k=0^n γ_kλ_k^2g_k^2
γ_k≥γ_k+1≥γ_n+1/2s_n+1^2 - 1/2∑_k=0^n γ_kλ_k^2g_k^2
= d̃_n+1‖ s_n+1‖.
Using the definition of d̃_n+1,
and the property d̃_n+1≤d̂_n+1≤ d_n+1, we derive
γ_n+1/2‖ s_n+1‖ ^2-1/2∑_k=0^nγ_kλ_k^2‖ g_k‖ ^2
= d̃_n+1‖ s_n+1‖≤ d_n+1‖ s_n+1‖.
Using inequality 2αβ≤α^2 + β^2 with α^2 = 2d_n+1^2/γ_n+1 and β^2= γ_n+1/2s_n+1^2 and then the bound above, we establish
2d_n+1s_n+1 = 2αβ≤α^2+β^2
= 2d_n+1^2/γ_n+1 + γ_n+1/2s_n+1^2
≤2d_n+1^2/γ_n+1 + d_n+1s_n+1 + 1/2∑_k=0^nγ_kλ_k^2g_k^2.
Rearranging the terms, we obtain
d_n+1s_n+1 ≤2d_n+1^2/γ_n+1 + 1/2∑_k=0^nγ_kλ_k^2g_k^2.
It remains to divide both sides by d_n+1.
The Dual Averaging algorithm (Algorithm <ref>) satisfies
∑_k=0^nλ_k (f(x_k) - f_*)
≤ (D - d̂_n+1)‖ s_n+1‖.
Summing inequality f(x_k) - f_*≤⟨ g_k, x_k - x_*⟩ with weights λ_k, we get
∑_k=0^n λ_k (f(x_k) - f_*)
≤∑_k=0^n λ_k ⟨ g_k, x_k - x_*⟩
= ∑_k=0^n λ_k ⟨ g_k, x_0 - x_*⟩ + ∑_k=0^n λ_k ⟨ g_k, x_k - x_0⟩.
Using Cauchy-Schwarz on the first product in the right-hand side and then telescoping the second sum, we obtain
∑_k=0^n λ_k (f(x_k) - f_*)
≤ s_n+1x_0 - x_* + ∑_k=0^n λ_k⟨ g_k, x_k - x_0⟩
= s_n+1D - d̂_n+1s_n+1.
Next, we restate and prove Theorem <ref>:
For Algorithm <ref>, it holds that:
f(x_t) - f_*
≤4GD/√(n)√(log_2(2D/d_0)),
where t=argmin_k≤ n d_k+1/√(∑_i=0^k d_i^2).
Let us sum inequality λ_k(f(x_k) - f_*)≥ 0 and then apply Lemma <ref>:
0
≤∑_k=0^n λ_k (f(x_k) - f_*) (<ref>)≤ (D - d̂_n+1) s_n+1.
Clearly, this implies that d̂_n+1≤ D, and by induction it follows that d_n+1≤ D as well. Now let us upper bound the functional values:
∑_k=0^n λ_k (f(x_k) - f_*)
(<ref>)≤ Ds_n+1 - ∑_k=0^n γ_kλ_k⟨ g_k, s_k⟩
= D s_n+1 - ∑_k=0^n γ_k⟨ s_k+1 - s_k, s_k⟩
= D s_n+1 + 1/2∑_k=0^n γ_k(s_k+1 - s_k^2 + s_k^2 - s_k+1^2)
= D s_n+1 + 1/2∑_k=0^n γ_ks_k+1 - s_k^2 + 1/2∑_k=0^n (γ_k - γ_k-1)s_k^2 - γ_n/2s_n+1^2 .
We can drop the last two terms since γ_k≤γ_k-1:
∑_k=0^n λ_k (f(x_k) - f_*)
≤ D s_n+1 + 1/2∑_k=0^n γ_ks_k+1 - s_k^2
= D s_n+1 + 1/2∑_k=0^n γ_kλ_k^2g_k^2.
The first term in the right-hand side is readily bounded by Lemma <ref>:
∑_k=0^n λ_k (f(x_k) - f_*)
≤ D s_n+1 + 1/2∑_k=0^n γ_kλ_k^2g_k^2
≤2Dd_n+1/γ_n+1 + D/2d_n+1∑_k=0^nγ_kλ_k^2g_k^2 + 1/2∑_k=0^n γ_kλ_k^2g_k^2
d_n+1≤ D≤2Dd_n+1/γ_n+1 + D/d_n+1∑_k=0^nγ_kλ_k^2g_k^2
λ_k≤λ_n≤2Dd_n+1/γ_n+1 + D/d_n+1λ_n∑_k=0^nγ_kλ_kg_k^2.
Then, apply Proposition <ref>:
∑_k=0^n λ_k (f(x_k) - f_*)
≤2D/γ_n+1 + D/d_n+1λ_n∑_k=0^nγ_kλ_kg_k^2
=2D/γ_n+1 + D/d_n+1λ_n∑_k=0^n1/√(λ_kG^2 + ∑_i=0^k-1λ_ig_i^2)λ_kg_k^2
≤2D/γ_n+1 + D/d_n+1λ_n∑_k=0^n1/√(λ_k g_k^2 + ∑_i=0^k-1λ_ig_i^2)λ_kg_k^2
≤2D/γ_n+1 + 2D/d_n+1λ_n√(∑_k=0^nλ_kg_k^2).
Let us now plug-in λ_k=d_k^2 and bound each gradient norm using g_k≤ G:
∑_k=0^n λ_k (f(x_k) - f_*)
≤ 4D d_n+1√(∑_k=0^n d_k^2g_k^2)≤ 4GD d_n+1√(∑_k=0^n d_k^2).
Thus, we get the following convergence rate:
f(x_t) - f_*
≤4GD d_t+1√(∑_k=0^t d_k^2)/∑_k=0^t d_k^2
= 4GD d_t+1/√(∑_k=0^t d_k^2)
= min_t^'<n4GD d_t^'+1/√(∑_k=0^t^' d_k^2)
≤4GD/√(n)√(log_2+(D/d_0)).
§.§ Coordinate-wise Prodigy
Here we study Algorithm <ref>. The theory in this section closely follows the analysis in Section <ref>. There are only a few minor differences, such as the use of weighted norms, which we define as ⟨ x, y⟩_A^-1 = x^⊤ A^-1 y for any matrix A ≽ 0; below, A_k denotes the diagonal weight matrix with entries a_ki used at step k. In addition, we use the ℓ_∞ norm for the distance term and for the gradients; see the assumption below.
We assume the gradient norms are bounded coordinate-wise, i.e., ‖ g_k‖_∞≤ G_∞ for all k.
We begin with the analogue of Lemma <ref>:
It holds for the iterates of Algorithm <ref>:
‖ s_n+1‖_1
≤ 2d_n+1‖ a_n+1‖_1 + 1/(2d_n+1)∑_k=0^nλ_k^2‖ g_k‖_A_k^-1^2.
As in the proof of Lemma <ref>, let us introduce an extra sequence d̃_n:
d̃_n+1 def= (‖ s_n+1‖_A_n+1^-1^2-∑_k=0^nλ_k^2‖ g_k‖_A_k^-1^2)/(2‖ s_n+1‖_1).
The next step is to show that d̂_n+1≥d̃_n+1 by comparing the numerators:
d̂_n+1s_n+1_1
= ∑_k=0^n λ_k⟨ g_k,x_0 - x_k⟩
= ∑_k=0^n λ_k⟨ g_k,s_k⟩__k^-1
= ∑_k=0^n ⟨ s_k+1-s_k,s_k⟩__k^-1
= ∑_k=0^n γ_k/2[s_k+1^2__k^-1 - s_k+1-s_k^2__k^-1 - s_k^2__k^-1]
= 1/2s_n+1^2__n^-1+ 1/2∑_k=0^n s_k+1^2__k^-1 - _k+1^-1-1/2∑_k=0^n λ_k^2g_k^2__k^-1
_k^-1≽_k+1^-1≥1/2s_n+1^2__n+1^-1 - 1/2∑_k=0^n γ_kλ_k^2g_k^2__k^-1
= d̃_n+1‖ s_n+1‖_1.
Using the definition of d̃_n+1,
and the property d̃_n+1≤d̂_n+1≤ d_n+1, we derive
1/2‖ s_n+1‖__n+1^-1 ^2-1/2∑_k=0^nγ_kλ_k^2‖ g_k‖__k^-1 ^2
= d̃_n+1‖ s_n+1‖_1
≤ d_n+1‖ s_n+1‖_1.
Using inequality 2αβ≤α^2 + β^2 with α^2 = 2d_n+1^2a_(n+1)i and β^2= 1/2a_(n+1)is_(n+1)i^2 for i=1,…, p and then the bound above, we establish
2d_n+1s_n+1_1
= ∑_i=1^p d_n+1|s_(n+1)i|
≤∑_i=1^p(2d_n+1^2a_(n+1)i + 1/2a_(n+1)is_(n+1)i^2)
= 2d_n+1^2a_n+1_1 + 1/2s_n+1__n+1^-1
≤ 2d_n+1^2a_n+1_1 + d_n+1s_n+1_1 + 1/2∑_k=0^nγ_kλ_k^2g_k^2__k^-1.
Rearranging the terms, we get
d_n+1s_n+1_1
≤ 2d_n+1^2a_n+1_1 + 1/2∑_k=0^nλ_k^2g_k^2__k^-1.
It remains to divide both sides by d_n+1.
The next lemma is similar to Lemma <ref> except that it uses ℓ_∞ norm for the distance to a solution and ℓ_1 norm for the weighted gradient sum s_n.
The coordinate-wise version of Prodigy (Algorithm <ref>) satisfies
∑_k=0^nλ_k (f(x_k) - f_*)
≤ (D_∞ - d̂_n+1)‖ s_n+1‖_1,
where D_∞ = ‖ x_0 - x_*‖_∞.
Summing inequality f(x_k) - f_*≤⟨ g_k, x_k - x_*⟩ with weights λ_k, we get
∑_k=0^n λ_k (f(x_k) - f_*)
≤∑_k=0^n λ_k ⟨ g_k, x_k - x_*⟩
= ∑_k=0^n λ_k ⟨ g_k, x_0 - x_*⟩ + ∑_k=0^n λ_k ⟨ g_k, x_k - x_0⟩.
Using Hölder's inequality on the first product in the right-hand side and then telescoping the second sum, we obtain
∑_k=0^n λ_k (f(x_k) - f_*)
≤ s_n+1_1 x_0 - x_*_∞ + ∑_k=0^n λ_k⟨ g_k, x_k - x_0⟩
= s_n+1_1D_∞ - d̂_n+1s_n+1.
The use of ℓ_1 norm for the term s_n+1 above is motivated by the fact that it naturally arises in other parts of the theory.
Algorithm <ref> converges with the rate
f(x_t) - f_*
≤4pG_∞D_∞/√(n)√(log_2+(D_∞/d_0)),
where t=argmin_k≤ n d_k+1/√(∑_i=0^k d_i^2).
From Lemma <ref>, we get
0
≤∑_k=0^n λ_k (f(x_k) - f_*) (<ref>)≤ (D_∞ - d̂_n+1) s_n+1_1,
so we can prove by induction that d_n+1≤ D_∞. Using the same bounds as before, we get for the average iterate
∑_k=0^n λ_k (f(x_k) - f_*)
≤ D_∞s_n+1_1 - ∑_k=0^n λ_k⟨ g_k, x_0 - x_k⟩
= D_∞ s_n+1_1 + 1/2∑_k=0^n λ_k^2g_k^2__k^-1 + 1/2∑_k=0^n s_k^2__k^-1 - _k+1^-1 - 1/2s_n+1^2__n+1^-1
≤ D_∞ s_n+1_1 + 1/2∑_k=0^n λ_k^2g_k^2__k^-1.
Let us plug in the bound from Lemma <ref>:
∑_k=0^n λ_k (f(x_k) - f_*)
≤ D_∞2d_n+1a_n+1_1 + D_∞/2d_n+1∑_k=0^nλ_k^2g_k^2__k^-1 + 1/2∑_k=0^n λ_k^2g_k^2__k^-1
d_n+1≤ D_∞≤ D_∞2d_n+1a_n+1_1 + D_∞/d_n+1∑_k=0^nλ_k^2g_k^2__k^-1
λ_k≤λ_n≤ D_∞2d_n+1a_n+1_1 + D_∞/d_n+1λ_n∑_k=0^nλ_kg_k^2__k^-1 .
We now apply Proposition <ref>, substitute λ_k = d_k^2, and use g_kj^2≤ G_∞^2:
∑_k=0^n d_k^2 (f(x_k) - f_*)
≤ D_∞2d_n+1a_n+1_1 + D_∞/d_n+1λ_n∑_j=1^p∑_k=0^nλ_k g_kj^2/√(d_k^2G_∞^2 + ∑_i=0^k-1λ_i g_ij^2)
≤ D_∞2d_n+1a_n+1_1 + 2D_∞/d_n+1λ_n∑_j=1^p√(∑_k=0^nλ_ig_kj^2)
≤ 4D_∞ d_n+1pG_∞√(∑_k=0^nd_i^2).
Using Lemma <ref>, we get the rate for t=argmin_t'≤ n d_t'+1/√(∑_k=0^t'd_k^2):
f(x_t) - f_*
≤4pG_∞D_∞/√(n)√(log_2+(D_∞/d_0)).
§ ANALYSIS OF D-ADAPTATION WITH RESETTING
In Algorithm <ref> the r counter tracks the epoch. Let
n_r represent the number of steps performed in epoch r. Let
R≤log_2+(D/d_0) denote the total number of epochs performed
before the algorithm returns, where D=‖ x_0-x_*‖.
Consider the steps within a single epoch, dropping the r index,
we have:
‖ x_0-x_*‖≥γ_n+1‖ s_n+1‖ ^2-∑_k=0^nγ_k‖ g_k‖ ^2/2‖ s_n+1‖.
This follows from the single-epoch theory of D-Adaptation of <cit.>.
Consider the steps within a single epoch, dropping
the r index, we have:
‖ s_n+1‖≤3√(G^2+∑_k=0^n‖ g_k‖ ^2).
Same as D-Adaptation without resetting.
For epoch r, we have that:
‖ x_0,r-x_*‖≤4D.
Let us drop the r indexing for notational simplicity. Within epoch
r, recall that:
∑_k=0^n(f(x_k)-f_*)≤⟨ s_n+1,x_0-x_*⟩ -∑_k=0^nγ_k⟨ g_k,s_k⟩ .
and so:
-⟨ s_n+1,x_0-x_*⟩ ≤-∑_k=0^nγ_k⟨ g_k,s_k⟩
≤-1/2γ_n+1‖ s_n+1‖ ^2+1/2∑_k=0^nγ_k‖ g_k‖ ^2.
Now we can use this inequality to show the result:
‖ x_n+1-x_*‖ ^2 =‖ x_n+1-x_0+x_0-x_*‖ ^2
=‖ x_0-x_*‖ ^2+‖ x_n+1-x_0‖ ^2+2⟨ x_n+1-x_0,x_0-x_*⟩
=‖ x_0-x_*‖ ^2+γ_n+1^2‖ s_n+1‖ ^2-2γ_n+1⟨ s_n+1,x_0-x_*⟩
=‖ x_0-x_*‖ ^2+2γ_n+1(1/2γ_n+1‖ s_n+1‖ ^2-⟨ s_n+1,x_0-x_*⟩)
≤‖ x_0-x_*‖ ^2+2γ_n+1(1/2∑_k=0^nγ_k‖ g_k‖ ^2)
(<ref>)≤‖ x_0-x_*‖ ^2+2γ_n+1d√(G^2+∑_k=0^n‖ g_k‖ ^2)
=‖ x_0-x_*‖ ^2+2d^2.
Using subadditivity of the square root function gives (rounding up
for simplicity):
‖ x_n+1-x_*‖≤‖ x_0-x_*‖ +√(2)d.
Now consider the behavior of this quantity for past epochs. We reintroduce r-sub-scripting, and
chain backwards as follows:
‖ x_0,r-x_*‖ =‖ x_n+1,r-1-x_*‖≤‖ x_0,r-1-x_*‖ +√(2)d_r
≤‖ x_0,r-2-x_*‖ +√(2)d_r+1/2√(2)d_r
≤‖ x_0,r-3-x_*‖ +√(2)d_r+1/2√(2)d_r+1/4√(2)d_r
…
≤‖ x_0-x_*‖ +2√(2)d_r
≤4D,
where we have used the fact that ∑_i=0^k1/2^i≤2
for any k.
For epoch r we have:
∑_k=0^n_r(f(x_k,r)-f_*)≤13D√(G^2+∑_k=0^n‖ g_r,k‖ ^2).
Starting from
∑_k=0^n_r(f(x_k,r)-f_*)≤‖ x_0,r-x_*‖‖ s_n+1,r‖ +1/2∑_k=0^nγ_k,r‖ g_k,r‖ ^2-1/2γ_n+1,r‖ s_n+1,r‖ ^2.
First we apply Lemma <ref>:
∑_k=0^n_r(f(x_k,r)-f_*)≤4D‖ s_n+1,r‖ +1/2∑_k=0^nγ_k,r‖ g_k,r‖ ^2-1/2γ_n+1,r‖ s_n+1,r‖ ^2.
Then we apply Lemma <ref>:
∑_k=0^n_r(f(x_k,r)-f_*)≤12D√(G^2+∑_k=0^n‖ g_r,k‖ ^2)+1/2∑_k=0^nγ_k,r‖ g_k,r‖ ^2-1/2γ_n+1,r‖ s_n+1,r‖ ^2.
Note that 1/2∑_k=0^nγ_k,r‖ g_k,r‖ ^2≤ D√(G^2+∑_k=0^n‖ g_r,k‖ ^2).
We further drop the -1/2γ_n+1,r‖ s_n+1,r‖ ^2
term to give the result.
For Algorithm <ref>, it holds that:
f(x̅_n)-f_*≤13D√((G^2+∑_k=0^n‖ g_k‖ ^2)log_2+(D/d_0))/(n+1).
Starting from Lemma <ref>, for epoch r
it holds that:
∑_k=0^n_r(f(x_k,r)-f_*)≤13D√(G^2+∑_k=0^n‖ g_r,k‖ ^2).
We sum over epochs up to the final epoch R:
∑_k=0^n(f(x_k)-f_*)≤13D∑_r=1^R√(G^2+∑_k=0^n_r‖ g_r,k‖ ^2).
Jensen's inequality tells us that for concave φ:
∑_i a_iφ(x_i)/∑_i a_i≤φ(∑_i a_ix_i/∑_i a_i).
Applying to our case, we use φ(t)=√(t), a_i=1, ∑ a_i=R, so:
∑_r=1^R√(G^2+∑_k=0^n_r‖ g_r,k‖ ^2) ≤ R√(G^2+∑_k=0^n‖ g_k‖ ^2/R)
=√(R(G^2+∑_k=0^n‖ g_k‖ ^2)).
Noting that: R≤log_2+(D/d_0):
∑_k=0^n(f(x_k)-f_*)≤13D√((G^2+∑_k=0^n‖ g_k‖ ^2)log_2+(D/d_0)).
Applying Jensen's inequality to obtain a bound on the average iterate
completes the proof.
§ LOWER COMPLEXITY THEORY
Consider any Algorithm for minimizing a convex G-Lipschitz function
starting from x_0 at the origin, which has no knowledge of problem constants.
At each iteration k, the algorithm may query the gradient at a
single point x_k. Then for any sequence of x_1,…,x_n,
there exists a convex Lipschitz problem f and constant D≥‖ x_0-x_*‖
for all minimizers x_* of f such that:
min_k≤ nf(x_k)-f_* ≥DG√(log_2log_2(D/x_1))/2√(n+1).
Firstly let g_0=-1. We consider two cases. Case 1) Suppose that
x_k≤1/22^2^n+1x_1 for all k. Then define
x_*=2^2^n+1x_1,
so that |x_0-x_*|=D=2^2^n+1x_1. Our construction uses the gradient sequence g_k=-1 for all k.
This corresponds to the function:
f(x)=|x-x_*|.
Note that for all query points x, the gradient is negative, and
only the left arm of the absolute value function is seen by the algorithm,
so the function appears linear for all test points. Using this construction,
we have:
min_k≤ n[f(x_k)-f_*] =min_k≤ n(x_*-x_k)
=2^2^n+1x_1-max_k≤ nx_k
≥2^2^n+1x_1-1/22^2^n+1x_1
≥1/2D_n.
Now note that:
√(loglog_2(D/x_1)) =√(log_2log_2(2^2^n+1))
=√(n+1).
So combining these two results:
min_k≤ nf(x_k)-f_* ≥1/2DG
=DG√(log_2log_2(D/x_1))/2√(n+1).
Case 2). Otherwise, there exists a k such that x_k+1≥1/22^2^n+1x_1.
Then we will fix x_* to be in the interval I=[2^2^n+1x_1,2^2^n+2x_1].
Our gradient oracle will return g(x_k)=sign(x_k-x_*),
and the corresponding function is f(x)=|x-x_*|. Since
the problem can only become harder with less information, we may assume
that the algorithm knows the end points of the interval I, and we
treat x_k as the first query point in the interval. Without loss
of generality we further assume k=2 since any larger value just
gives fewer query points and thus a worse bound.
For the resisting oracle, we can use the same resisting oracle as
would be used for binary search, but applied to the logarithm of the
points. For root finding on an interval [a,b] the lower complexity
bound for t steps is known to be <cit.>:
|x-x_*|≥1/2^t+1|b-a|.
and so since we are taking n-1 steps (since we start at x_2):
|log_2x_n-log_2x_*| ≥1/2^n(log_22^2^n+2x_1-log_22^2^n+1x_1)
≥1/2^n(2^n+2-2^n+1)
=1/2^n(2·2^n+1-2^n+1)=2^n+1/2^n
= 2.
Therefore either x_n≤1/4x_* or x_n≥4x_*.
Therefore:
f(x_k)-f_* ≥min{|1/4x_*-x_*|,|4x_*-x_*|}
=3/4x_*=3/4DG.
Note that D≤2^2^n+2x_1, so:
√(loglog_2(D/x_1))≤√(log_2log_2(2^2^n+2))=√(n+2),
therefore, 1≥√(loglog_2(D/x_1))/√(n+2),
and so multiplying the two bounds gives:
min_k≤ nf(x_k)-f_* ≥3DG√(log_2log_2(D/x_1))/4√(n+2)
≥DG√(log_2log_2(D/x_1))/2√(n+1).
Consider any exponentially bounded algorithm for minimizing a convex G-Lipschitz function
starting from x_0, which has no knowledge of the problem constants G and D. There exists a fixed gradient oracle such that for any sequence
of x_1,…,x_n, there exists a convex Lipschitz problem
f with G=1 and ‖ x_0-x_*‖≤ D for all minimizing points x_*, consistent with the gradient oracle such that:
min_k≤ nf(x_k)-f_*≥DG√(log_2(D/x_1))/2√(n+1).
We consider the construction of a 1D oracle
for this problem. Our oracle returns g_0=-1 and f(x_k)=-x_k
for all queries. Without loss of generality we assume that x_k>0
for all k≥1, and G=1.
For each step k≥1 we define:
x_*=2^n+1x_1,
and thus D=|x_0-x_*|=2^k+1x_1. and our construction
uses the following function value and gradient sequence
f(x)=|x-x_*|+x_*.
Note that for all query points x, the gradient is negative, and
only the left arm of the absolute value function is seen by the algorithm,
so the function appears linear for all test points. Using this construction,
we have:
min_k≤ n[f(x_k)-f_*] =min_k≤ n(x_*-x_k)
=2^n+1x_1-max_k≤ nx_k
≥2·2^nx_1-2^nx_1
=2^nx_1
=1/2D_n.
Now note that:
√(log_2(D_n/x_1)) =√(log_2(2^n+1))
=√(n+1).
So:
1≥√(log_2(D_n/x_1))/√(n+1).
Combining these two results:
min_k≤ nf(x_k)-f_* ≥1/2D=1/2DG
=1/2DG√(log_2(D/x_1))/√(n+1).
D-Adaptation, DoG, Prodigy and D-Adaptation with resetting are exponentially bounded.
Consider the D lower bound from D-Adaptation:
d̂_n+1=∑_k=0^nλ_kγ_k⟨ g_k,s_k⟩/s_n+1,
with:
s_n+1=∑_k=0^nd_kg_k.
Recall that
∑_k=0^nλ_kγ_k⟨ g_k,s_k⟩≤γ_n+1‖ s_n+1‖ ^2.
Note also that γ_n+1≤1/G. So:
d_n+1 ≤1/G‖ s_n+1‖ ^2/‖ s_n+1‖≤1/G‖∑_k=0^nd_kg_k‖≤∑_k=0^nd_k.
So the sequence d_n is upper bounded by the sequence:
a_n = d_0 for n=0, and a_n=∑_k=0^n-1a_k for n≥1.
This sequence has the following closed form:
a_n+1=2^nd_0 for n≥ 1.
We can prove this by induction. The base case is by definition a_1=a_0.
Then
a_n+1 =∑_k=0^na_k=∑_k=0^n-1a_k+a_n
=a_n+a_n
=2a_n
=2^nd_0.
Note that for both the Dual Averaging form and the GD form, we have:
‖ x_n+1-x_0‖≤‖1/G∑_k=0^nd_kg_k‖≤∑_k=0^nd_k≤∑_k=0^na_k= a_n+1≤2^nd_0.
It follows that D-Adaptation is exponentially bounded. The same
argument applies to the resetting variant since the resetting operation
does not increase the rate of accumulation of d.
For Prodigy, note that:
γ_n+1≤1/√(d_n+1^2G^2)
=1/d_n+1G.
Therefore
d_n+1 ≤1/d_n+1G‖ s_n+1‖ ^2/‖ s_n+1‖≤1/d_n+1G‖∑_k=0^nd_k^2g_k‖
≤1/d_n+1∑_k=0^nd_k^2
≤1/d_n+1∑_k=0^nd_kd_n+1
≤∑_k=0^nd_k.
The rest of the argument follows the D-Adaptation case, with:
‖ x_n+1-x_0‖≤‖1/(d_nG)∑_k=0^nd_k^2g_k‖≤∑_k=0^nd_k≤∑_k=0^na_k= a_n+1≤2^nd_0.
For DoG, recall the basic DoG step is gradient descent with step sizes:
γ_k=r̅_k/√(G^2+∑_i=0^k‖ g_i‖ ^2).
Using the triangle inequality we have:
‖ x_k+1-x_0‖ =‖ x_k-γ_kg_k-x_0‖
≤‖ x_k-x_0‖ +γ_k‖ g_k‖
≤‖ x_k-x_0‖ +r̅_k‖ g_k‖/√(G^2+∑_i=0^k‖ g_i‖ ^2)
≤‖ x_k-x_0‖ +r̅_k
≤ 2r̅_k.
Chaining gives the result.
Suppose that d_k≤ cD and γ_k≤ d_k/G. then:
‖ x_k-x_0‖≤(2c+1)^n‖ x_1-x_0‖ .
Without loss of generality assume that G=1. Firstly, note that using the absolute value function as constructed
in Theorem <ref>, it is clear that there always exists a function with
D_k≤2‖ x_k-x_*‖ at step k consistent
with the sequence of gradients seen so far. Therefore, it must hold
that
d_k≤ cD_k≤2c‖ x_k-x_0‖ .
We prove the result by induction. For the base case, trivially:
‖ x_1-x_0‖≤(2c+1)^1‖ x_1-x_0‖ .
For the inductive case:
‖ x_k+1-x_0‖ =‖ x_k-γ_kg_k-x_0‖
≤‖ x_k-x_0‖ +γ_k‖ g_k‖
≤‖ x_k-x_0‖ +cD_k/G
≤‖ x_k-x_0‖ +cD_k
≤(2c+1)‖ x_k-x_0‖
≤(2c+1)^n+1‖ x_1-x_0‖ .
|
http://arxiv.org/abs/2306.02781v2
|
20230605111418
|
A survey of Generative AI Applications
|
[
"Roberto Gozalo-Brizuela",
"Eduardo C. Garrido-Merchán"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
A survey of Generative AI Applications
Roberto Gozalo-Brizuela  Eduardo C. Garrido-Merchán
===================================================
Generative AI has experienced remarkable growth in recent years, leading to a wide array of applications across diverse domains. In this paper, we present a comprehensive survey of more than 350 generative AI applications, providing a structured taxonomy and concise descriptions of various unimodal and even multimodal generative AIs. The survey is organized into sections, covering a wide range of unimodal generative AI applications such as text, images, video, gaming and brain information. Our survey aims to serve as a valuable resource for researchers and practitioners to navigate the rapidly expanding landscape of generative AI, facilitating a better understanding of the current state-of-the-art and fostering further innovation in the field.
§ INTRODUCTION
The emergence of groundbreaking generative AI models, such as ChatGPT <cit.> and DALL-E <cit.>, has catalyzed a new era in the synthesis and manipulation of digital content. Concretely, these powerful machine learning algorithms have demonstrated unprecedented capabilities in synthesizing realistic images, audio, text, and other data modalities <cit.>. In particular, these state-of-the-art language and image generation models, leveraging the prowess of deep learning and transformer architectures, have enabled content generation across a vast array of fields.
Generative AI refers to artificial intelligence that can generate novel content, rather than simply analyzing or acting on existing data like expert systems <cit.>. Generative AI models, equipped with vast data sets and intricate designs, have the extraordinary capability to create new and diverse content. They can process and learn from information gathered from a multitude of sources, such as Wikipedia <cit.>, Github <cit.> and others. By tapping into this wealth of data, these models can generate an extensive range of multimedia formats, including video, audio, and text.
In recent years, the continuous growth in computing power has enabled deep neural networks <cit.>, transformers and other innovative models such as generative adversarial networks <cit.> and variational autoencoders <cit.>. All these models can effectively capture the complexity of data, making them adept at modeling high-dimensional probability distributions of language or images from specific or general domains. By complementing generative models with additional techniques that map the latent high-dimensional semantic space of language or images to multimedia representations of text, audio, or video, it becomes possible to transform any input format, such as text, into a variety of output formats like video. This versatility allows for seamless conversion between multimedia formats, making generative models invaluable in numerous applications. One of the most significant aspects of generative AI is its potential for endless applications. These models can be trained to generate genuinely different multimedia formats, like video, audio, or text, from various input formats. For instance, generative AI can be used to create realistic images from textual descriptions, produce video content from audio, or even generate music compositions based on specific styles or emotions. Furthermore, generative AI has the potential to revolutionize industries such as advertising, entertainment, and education by automating content creation and providing personalized experiences. With the ability to learn from diverse data sources and generate a wide array of multimedia outputs, these models can help businesses and individuals alike save time and resources while tapping into new creative possibilities. In conclusion, generative AI models, bolstered by their access to extensive data and complex designs, offer unparalleled potential in content creation and transformation. Their ability to learn from various sources, generate diverse multimedia formats, and convert inputs from one format to another opens up a vast array of applications in multimedia generation and conversion, making them indispensable tools in today's technology-driven world.
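As a concrete, hedged illustration of one of the model families mentioned above, the following is a minimal PyTorch sketch of a generative adversarial network: a toy generator/discriminator pair and a single training step. It is not any of the production systems surveyed here; the network sizes, learning rates and the 2-D toy data are arbitrary placeholders.

import torch
import torch.nn as nn

# Toy GAN for 2-D data: the generator maps noise to samples,
# the discriminator scores how "real" a sample looks.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    # 1) update the discriminator on real vs. generated samples
    fake = G(torch.randn(n, 8)).detach()
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) update the generator to fool the discriminator
    g_loss = bce(D(G(torch.randn(n, 8))), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# usage: samples from a shifted Gaussian stand in for "real" data
real = torch.randn(64, 2) + 3.0
d_loss, g_loss = train_step(real)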
Recent work includes surveys of LLMs and of Generative AI that discuss different applications of the technology <cit.>. In contrast to prior surveys, this comprehensive review aims to offer a unique perspective by highlighting not only the most prominent generative models and their underlying technologies but also by emphasizing the many different uses of this technology. In addition, we give an up-to-date competitive outlook on this growing industry and the models behind this growth.
This resource encompasses 14 categories: text, images, video, 3D, code and software, speech, AI understanding, business, gaming, music, biotech, brain, others, and multimodal. Within each section, a thorough taxonomy of the current technologies is presented, detailing both the models and tools available. By offering a systematic exploration of these diverse AI applications, the survey serves as an essential reference for researchers, academics, and professionals, enabling them to better comprehend the evolving landscape of generative AI and its far-reaching implications.
As an example, a 3D game designer may have various generative AI needs for a project. They may find a solution for their 3D AI needs under both 3D and gaming, getting more specific results and different answers, and may also find solutions for business needs under both business and text. With this survey, we believe that users will get a good overview of how Generative AI is shaping up and where they may find the technology they need.
In this article, we propose an extensive dictionary centered on the most sought-after generative AI applications, which are notably reshaping industries such as videogames <cit.>, design <cit.>, and business operations <cit.>. The difficulty users experience in identifying the programs developed within each distinct application field substantiates the demand for a comprehensive reference tool.
§ BASIC TAXONOMY OF MODELS
This article explores the burgeoning applications of generative AI, focusing on its transformative potential across diverse sectors such as art, business, biotechnology, and design. We achieve this by dividing Generative AI into 14 categories according to the output that is produced, the context in which the models are used, and the business use of the technology. The reader may observe that many models could be placed in the text category, since their output is text, and that many copywriting models could also be placed under the text category. The classification shown below is intended to let a prospective user of generative AI technologies quickly find the technology they need based on the use case. In this first part, we introduce the categories into which we taxonomized current Generative AI technology. Here we present a summary of the different categories:
Regarding text, Generative AI technologies in the text category aim to create and manipulate natural language text. These technologies include language models that can generate human-like text, such as OpenAI's GPT models. While the most famous of these models are chatbots such as OpenAI's ChatGPT or Google's BARD, other types of models are included in this category. These include text-writing assistants, scientific language models or chatbots. The main criterion for this category was that the models produce text as an output.
Concerning images, Generative AI technologies in the images category focus on the creation and manipulation of visual images. The main criterion for this category was that the final output is an image. This includes image-creation models that generate images from textual descriptions, as well as image-editing models. For simplicity, the category was divided into artistic image creation, realistic image creation and image editing. Models that perform two or more of these tasks are arbitrarily included in one of the categories. Other models in this category include text-to-layouts and text-to-molecular representations, which could not be placed in the aforementioned categories.
Dealing with video, Generative AI technologies in the video category aim to create and manipulate video content. The main criterion for this category was that the final output is a video. This mainly includes video-creation models that can generate new video content from textual descriptions. Other models include post-production, text-to-scene generation, text-to-motion capture, image-to-video, and video dubbing.
Also working with 3D, Generative AI technologies in the 3D category focus on the creation and manipulation of three-dimensional objects and environments. The main criterion was that the output is a fully formed 3D model. Also included are a 4D model and a 3D model specially designed for metaverse purposes. Inputs include text, a single image, multiple images, and 2D models.
Focusing on code and software, Generative AI technologies in the code and software category aim to automate the process of writing code and creating software. The main criterion was that the final output is code. This includes a variety of subcategories: text-to-code, text-to-websites, text-to-software and text-to-apps. Other less frequent models include designs-to-code, text-to-RPA and a code translator. The text-to-software subcategory was designed to fit Adept, a company that wants users to communicate with computers via just a text input, which is why it is included here.
Regarding speech, Generative AI technologies in the speech category focus on the creation and manipulation of spoken language. All of these technologies are able to transform an input into a speech output. This is divided into text-to-speech, speech-to-speech and speech editing.
About AI understanding, Generative AI technologies in the AI understanding category are those models that convert an input into a text output. This particular category was drawn up because of the need for a category that summarizes models able to turn many kinds of input into text. Inputs cover: speech; images, audio and video combined; images; video; metaphors; semi-structured data; structured data; movies; and generative regions.
Concerning business, Generative AI technologies in the business category focus on the application of AI to improve business processes and decision-making. Many of the models in the aforementioned categories, such as ChatGPT in the text category or Midjourney in the image category, could very well be used by businesses. Nevertheless, this category is meant to help people in general businesses find models for their operations. These are divided into marketing, new business models and business operations.
Dealing with Gaming, Generative AI technologies in the gaming category aim to make game creation much easier for developers. They use text, 3d and image models for their purposes. They are divided into videogame creation and characters.
About music, Generative AI technologies in the music category focus on the creation and manipulation of musical content. This category includes music generation, musical editing and a dance-to-music model.
Regarding biotech, Generative AI technologies in the biotech category aim to apply Generative AI to biological research and medical applications. This can include models that can predict the structure of proteins or DNA sequences, as well as drug discovery tools that can identify new drug candidates. Some of these models could have been included in the Business category, but this category was drawn up because of the abundance of Generative AI applications in this field.
Concerning the human brain, Generative AI technologies in the brain category focus on the application of Generative AI to help people communicate. This includes brain-to-text models and brain-to-images models.
Finally, we include an others category, made in order to fit AlphaTensor, a groundbreaking AI technology for the discovery of algorithms <cit.>, as well as AutoGPT, an attempt at an autonomous GPT <cit.>.
§.§.§ Multimodal
This category was created for models that either take several kinds of inputs or can output several forms of data. Other mentioned models, such as text-to-slides, share this characteristic, but the models listed here could not fit in any of the aforementioned categories.
§ GENERATIVE AI APPLICATIONS
In this section we will introduce a broad overview of generative AI applications divided in subsection according to every different topic.
§.§ Text
Text models, especially those centered around conversational chatbots, have revolutionized AI since the launch of ChatGPT. Helped by natural language processing and large language models, these models have many very useful capabilities like summarization, writing assistance, code generation, language translation and sentiment analysis. They have been the main focus in Generative AI because of the capabilities of ChatGPT, an application which millions of users are already taking advantage of <cit.>.
§.§.§ Conversational AI
Conversational AI has been one of the most talked-about topics in AI. These services act as chatbots capable of a wide variety of tasks, converting text prompts into text outputs. They are powered by Large Language Models (LLMs): Transformer language models that contain hundreds of billions (or more) of parameters and are trained on massive text data, such as GPT-3, PaLM, Galactica, and LLaMA <cit.>. Some of their capabilities are text generation, common-sense reasoning, spatial reasoning <cit.>, mathematical reasoning and programming assistance <cit.><cit.>. In terms of business operations, there are many applications such as demand forecasting, inventory optimization and risk management <cit.>. Many of these capabilities are still being researched at the time of writing, just as the capabilities of LLMs continue to be discovered.
The most famous example is ChatGPT, which was trained with data up to 2021 and now has a beta function for up-to-date data, including plug-ins <cit.>. Other chatbots that do not include updated information are Claude and Stanford Alpaca <cit.>. Models with updated information include Bing AI, Google's BARD powered by LaMDA, the beta version of ChatGPT, DuckAssist, Metaphor and Perplexity AI <cit.>.
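As a brief, hedged illustration of how such conversational models are typically accessed programmatically, the sketch below uses the OpenAI Python client (the 0.x interface; newer client versions differ) to send a prompt to a ChatGPT-family model. The API key is a placeholder and the model name is only one example.

import openai  # pip install openai (the 0.x client interface is shown here)

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

# Send a short conversation to a chat model and print the reply.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a large language model is."},
    ],
)
print(response["choices"][0]["message"]["content"])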
§.§.§ Text-to-Science
Other applications can be seen in science, where Galactica <cit.> and Minerva <cit.> have emerged. Galactica is a large language model that can store, combine and reason about scientific language. Minerva is a large language model focused on quantitative reasoning tasks such as mathematics, science and engineering problems at the college level. Although the models are not at all able to replace human reasoning on these tasks, they show promising results.
§.§.§ Text-to-Author Simulation
These models have recently shown the ability to recreate certain styles of writing. Recent examples show LLMs being able to write as Daniel C. Dennett <cit.> or as H.P. Lovecraft <cit.>. The Dennett paper shows that experts on Dennett's work were successful only 51 percent of the time at distinguishing between the philosopher's work and the large language model's work. The Lovecraft paper shows that human readers without prior exposure to Lovecraft are unable to distinguish between texts written by the author and those written by ChatGPT. These are remarkable achievements and show the great capability of language models to imitate writing through fine-tuning.
Generative AI can also be used for live writing assistance. Previously mentioned chatbots such as ChatGPT can be used for this purpose, but specific applications have been created, such as GrammarlyGO <cit.> and PEER <cit.>. GrammarlyGO is a writing assistant created by Grammarly, able to write drafts, outlines, replies and revisions. PEER is similar to Grammarly's software, but it provides explanations for its actions and is fine-tuned for academic articles.
§.§.§ Text-to-Medical Advice
Large language models have also been proven useful for preliminary medical advice through fine-tuning. We need to state that these models are still not completely safe for this use and they should not be used to replace a human at the moment. Some of these models are ChatDoctor <cit.>, GlassAI <cit.>, Med-PaLM 2 <cit.> and YourDoctor AI <cit.>. They have shown promising capabilities to retrieve medical knowledge, reason over it, and answer medical questions comparably to physicians. Med-PaLM 2 scored up to 86.5 on the MedQA dataset. These models again show a remarkable ability to create accurate responses through fine-tuning. The biggest startup found in this space is Hippocratic AI <cit.>, which has developed LLMs that outperform GPT-4 on medical datasets.
§.§.§ Text-to-Itinerary
Other capabilities include travel itinerary creation, with application examples such as Roam Around <cit.>, TripNotes <cit.> or ChatGPT's Kayak plug-in <cit.>. The first two show the ability to create a visiting schedule, while the Kayak plug-in is able to look for hotels, flights and more through natural language.
§.§.§ Doc-to-Text
Lastly, Generative AI can also use natural language in order to retrieve information from documents. Two applications are ChatDOC <cit.> and MapDeduce <cit.>. They are able to quickly extract, locate and summarize information from PDFs through natural language queries.
§.§ Images
Image Generative AI has only grown since the launch of DALL-E 2 back in 2022. With both artistic and professional purposes, this technology has proved very useful in creating images from text prompts as well as in image editing. In terms of art creation, it has pushed creative boundaries and has been revolutionary. In image creation, photorealism seems nearer with cutting-edge applications such as Midjourney, which have produced very realistic images.
§.§.§ Image Editing
Generative AI has proven useful in terms of image editing. Some useful applications include Alpaca AI <cit.>, I2SB <cit.> and Facet AI <cit.>. Some capabilities of these applications are inpainting, outpainting, upscaling, super-resolution, deblurring and depth-map generation. Another example of an application using Generative AI for image editing is Photoroom AI <cit.>, which is able to erase backgrounds and remove objects in images.
Even face restoration can be achieved through Generative AI, as Tencent's face restoration tool shows <cit.>. They achieve this through GANs, one of the pillars behind Generative AI and Deep Learning. For the purpose of creativity, Stable Diffusion Reimagine allows users to generate multiple variations of a single image <cit.>.
§.§.§ Artistic Images
In terms of artistic images, many platforms have been created for generating artistic images from text prompts. Some examples include OpenART <cit.>, which uses DALL-E 2 <cit.>, Midjourney <cit.> and Stable Diffusion <cit.> to create images from text prompts; Mage.Space, which uses Stable Diffusion for art generation; and NightCafe, which uses Stable Diffusion, DALL-E 2, CLIP-Guided Diffusion, VQGAN+CLIP and Neural Style Transfer for artistic image generation. Other platforms include Wonder <cit.>, a mobile application for artistic image creation, and Neural.Love <cit.>, an AI-powered platform for audio, video and image editing and enhancement whose Art Generator lets you select from many styles such as Fantasy or Sci-Fi. In contrast to these platforms, DALL-E <cit.> and Midjourney <cit.> use their own models for image generation.
These models have also been proven useful for other artistic image tasks. Tattoo creation can be helped via Tattoos AI <cit.>. Moreover, meme creation is achieved through Supermeme AI <cit.>. Also, artistic avatars can be generated through Profile Picture AI, using samples of yourself.
§.§.§ Realistic Images
In terms of realistic image creation, there is a plethora of models which enable realistic image generation. They include Bing AI Image Creator <cit.>, Craiyon <cit.>, DALL-E 2 <cit.>, GLIGEN <cit.> <cit.>, Imagen <cit.>, Midjourney <cit.>, Muse <cit.> <cit.>, Parti <cit.>,
Runway ML Text-to-Image <cit.> and Stable Diffusion ML <cit.>. Through text inputs, they attempt photorealistic generation.
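As a hedged illustration of how an open text-to-image model of this kind can be driven in practice, the sketch below uses the Hugging Face diffusers library with a publicly available Stable Diffusion checkpoint; the model identifier, the prompt and the hardware assumption (a CUDA GPU) are examples rather than recommendations.

import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers transformers

# Load an open text-to-image model (Stable Diffusion v1.5 weights from the Hub)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # a GPU is assumed; use .to("cpu") otherwise (much slower)

# Generate one image from a text prompt and save it to disk
image = pipe("a photorealistic photo of a lighthouse at sunset").images[0]
image.save("lighthouse.png")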
Outside of simple text-to-image creation, there are many more uses for Generative AI. Through image samples, Generative AI can create photorealistic images. Booth AI <cit.> can quickly create lifestyle photographs from sample subject images. Other applications such as Aragon AI <cit.>, Avatar AI <cit.> and PrimeProfile <cit.> can create headshots from sample images.
The process of design can be optimized through Generative AI. PLaY <cit.> shows how text can be converted into layouts using latent diffusion. As well, Autodraw <cit.> is a drawing-to-shapes model that turns simple drawings into shapes. Both of these applications can quickly optimize the design process.
§.§ Video
Video Generative AI helps producers with storytelling. Although still a developing field because of the complexity that video generation poses, listed use cases such as digital human videos, human motion capture and video dubbing are revolutionary uses which can quickly lead to technological change.
§.§ Text-to-Video
§.§.§ General Video Production
Text-to-video models are still at an early stage, but very many applications have tried to be successful at video generation. The biggest models include Imagen Video <cit.>, Meta Make-A-Video <cit.>, Phenaki <cit.> and Runway Gen-2 <cit.>. Imagen Video uses a cascade of diffusion models for the creation of video outputs. Meta Make-A-Video is a video generation model created by Meta Research that can do text-to-video, image-to-video and video editing.
Although they are far from creating realistic outputs, they have shown promising signs and have been useful for simple videos. Phenaki creates multiple-minute-long videos from text prompts. Moreover, Runway Gen-2 can generate videos from text, video and image inputs. Shorter videos in GIF form can be generated through CogVideo <cit.>, trained by inheriting a pretrained text-to-image model, CogView2.
These video models have many applications in the creation of videos with digital humans. Applications like Colossyan AI <cit.>, Elai AI <cit.>, Heygen AI <cit.>, Hour One AI <cit.>, Rephrase AI <cit.> and Synthesia <cit.> can create professional videos with diverse avatars. Some of them, such as Synthesia, combine this technology with 120 different languages for speech creation. You can also use generative AI for transforming articles into video outputs: SuperCreator <cit.> is a mobile app that generates short videos for TikTok, Reels and Shorts from an article input, and Synths Video <cit.> transforms articles into YouTube videos.
Generative AI can lead to deeper personalization of videos, which is very useful for businesses. A very good example of this is Tavus AI <cit.>, a video generation platform that automatically personalizes videos of you for each audience member. As well, D-ID <cit.> uses generative AI technologies to generate real-time video for an immersive, human-like experience.
They can also be useful for artistic video generation. An example is Kaiber <cit.>, an application that creates artistic videos from text and image prompts. They can even be used for movie creation with Opus AI <cit.>, a text-to-video generator covering everything from scenes and characters to dialogue and visual effects.
Generative AI can also be used for image-to-video generation, which is very useful for Virtual Reality. Two models that have been created through generative AI are GeoGPT <cit.> and SE3DS <cit.>. GeoGPT provided a novel approach to synthesize a consistent long-term video given a single scene image and a trajectory of large camera motions. SE3DS is a method for high-resolution image and video generation from novel viewpoints, including viewpoints that extrapolate far beyond the input images while maintaining 3D consistency, via the use of an image-to-image GAN.
Other notable video generation methods are Riverside AI <cit.>, an AI-powered video production site with editing capabilities; Scenescape <cit.>, a method for text-driven perpetual view generation; and the Human Motion Diffusion Model <cit.>.
§.§ 3D
These technologies allow for easier 3D design from just a text prompt, an image or a video. They have varied applications, such as game creation, the metaverse or urban planning, for which 3D designs are fundamental.
§.§ Text-to-3D
3D model generation can be achieved from many types of inputs (text, a single image, multiple images and 2D models) through generative AI. Regarding text inputs, some of the most important models are Adobe Firefly <cit.>, Dreamfusion <cit.>, GET3D <cit.>, Magic3D <cit.>, Synthesis AI <cit.> and Text2Room <cit.>. They create textured 3D shapes from text inputs. For animated output, Mirage <cit.> is a 3D tool to generate animated 3D pieces. We can even generate 4D models through generative AI, as MAV3D <cit.> shows with a dynamic scene generator.
In terms of image inputs, we can create 3D models from both a single image and from many images. For single-image inputs, popular models are GeNVS <cit.>, Kaedim <cit.>, Make-It-3D <cit.> and RealFusion <cit.>. For multi-image inputs, we have NVIDIA LION <cit.>, EVA3D <cit.>, Neural-Lift-360 <cit.> and Scenedreamer <cit.>. Particularly for persons, we have PersoNeRF <cit.>, which takes sample human images and generates a 3D model. We can also generate a 3D model from 2D images, and transform video inputs into 3D models through Deepmotion <cit.> and Plask AI <cit.>. Lastly, we can also create a 3D model from geometric points through NVIDIA LION <cit.>.
A business this technology can be applied to is the metaverse. Two companies that have combined both Generative AI and the Metaverse are Metaphysic AI <cit.> and Versy AI <cit.>.
§.§ Code and Software
Developers have been greatly assisted by these technologies, through both GitHub Copilot and ChatGPT, since the technology's inception. Through natural language, these models can help the user program and build websites. They can also help with the more repetitive tasks of the programmer, such as documentation. The most ambitious app, Adept, even claims that NLP can lead to humans using only language to talk to a computer. The democratization of code can help many professionals without a technical background move around these programs with ease, which could be a major technological advance.
§.§ Text-to-Code
§.§.§ Text-to-Multilingual Code
There are many software tools for multilingual code generation from just text inputs. Although ChatGPT is widely used for coding, many more generative AI applications are being created for that purpose. While most of them work as coding assistants, they are also able to generate code from text prompts. Some of them are Alphacode <cit.>, Amazon CodeWhisperer <cit.>, BlackBox AI <cit.>, CodeComplete <cit.>, CodeGeeX <cit.>, Codeium <cit.>, Mutable AI <cit.>, GitHub Copilot <cit.>, GitHub Copilot X <cit.>, GhostWriter Replit <cit.> and Tabnine <cit.>. They are used to complete, explain, transform and generate code, producing new lines based on context and syntax, and they can be personalized to your writing style. As we can observe, it is one of the fields with the biggest number of applications. Codex <cit.> is the model behind GitHub Copilot, the most famous coding assistant. For code documentation, both Mintlify <cit.> and Stenography <cit.> have emerged as great ways to use Generative AI.
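As a small, hedged illustration of text-to-code generation with an open model, the sketch below uses the Hugging Face transformers pipeline with a small CodeGen checkpoint; the model name, prompt and generation length are illustration-only choices and not the commercial systems listed above.

from transformers import pipeline  # pip install transformers

# Load an open code-generation model; "Salesforce/codegen-350M-mono" is a small,
# Python-only CodeGen checkpoint used here purely for illustration.
coder = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

prompt = "# Python function that returns the n-th Fibonacci number\ndef fib(n):"
completion = coder(prompt, max_new_tokens=64)[0]["generated_text"]
print(completion)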
In terms of specific languages, Excel has been widely explored for spreadsheet formula generation through generative AI. Some applications are AI Office Bot <cit.>, Data Sheets GPT <cit.>, Excel Formulabot <cit.>, Google Workspace AI - Sheets <cit.> and Sheets AI <cit.>. They generate formulas quickly from text prompts, and AI Office Bot even explains them. There have also been applications for SQL code generation like AI2SQL <cit.> and Seek AI <cit.>. Code translation has also been made possible through Generative AI, with Vercel AI Code Translator <cit.> being one of the most useful tools. Even cybersecurity can be helped through natural language with Microsoft Security Copilot <cit.>, an AI-powered security analysis tool that enables users to respond to threats quickly, process signals and assess risk exposure.
Regarding website creation, notable tools are Durable <cit.> and Mutiny <cit.>. Both applications generate a website with images and text from text prompts. Specifically for user interface generation, we have three applications, Diagram AI <cit.>, Galileo AI <cit.> and Uizard AI <cit.>, which use Generative AI to generate good user interfaces and optimize the customer's experience. The.com <cit.> even automates web page generation, so companies can create personalized pages for each of their customers.
Concerning app creation, there are many applications which are very useful for app generation. For mobile apps, Flutterflow <cit.>, Imagica AI <cit.> and Google Generative App Builder <cit.> generate enterprise-grade AI applications for users without a technical background. As for web apps, Debuild AI <cit.>, Literally Anything IO <cit.> and Second AI <cit.> are examples of Generative AI technologies with which users can easily create web apps from text prompts. Even LLM app creation has become easily available to non-technical professionals through text and data inputs, as we can observe with Berri AI <cit.> and Scale Spellbook <cit.>. Lastly, apps over private data can now be designed with natural language through Zbrain <cit.>.
Other technologies have emerged in the field of coding. An example is design-to-code technology, through Locofy <cit.>, a tool that turns designs into code for mobile apps and the web. Furthermore, there are text-to-automation tools such as Drafter AI <cit.>, a platform to automate even the most advanced analytical tasks, and Lasso AI <cit.>, which builds any robotic process automation using natural language. Even Adept <cit.> has emerged with the project of making natural language able to interact with everything on your computer.
§.§ Speech
Speech technologies try to imitate human speech. Text-to-speech technologies have made it easier to produce spoken audio, while speech-to-speech technologies have made voice cloning very easy through generative AI. This technology has endless future possibilities in podcasts, YouTube videos or helping people who cannot speak to communicate.
§.§ Text-to-Speech
In terms of speech creation, Generative AI has made it easy to create speech recordings from text prompts. A plethora of platforms have been created, including Coqui <cit.>, Descript Overdub <cit.>, ElevenLabs <cit.>, Listnr <cit.>, Lovo AI <cit.>, Resemble AI <cit.>, Replica Studios <cit.>, Voicemod <cit.> and Wellsaid <cit.>. The most important model is AudioLM <cit.>, Google's framework for high-quality audio generation with long-term consistency.
As to speech-to-speech models, ACE-VC <cit.> and VALL-E <cit.> are the most important models. VALL-E specifically can take a three-second recording of someone’s voice, and replicate that voice, turning written words into speech, with realistic intonation and emotion depending on the context of the text.
Other technologies that produce a speech output include Supertone AI <cit.>, which provides speech editing, and Dubverse <cit.>, which turns video recordings into speech, very useful for video dubbing.
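As a hedged example of programmatic text-to-speech with an open toolkit, the sketch below uses the open-source Coqui TTS package mentioned above; the constructor arguments and the model name vary between library versions, so treat this as an approximate recipe rather than a definitive API reference.

from TTS.api import TTS  # pip install TTS (the open-source Coqui TTS package)

# Load a pretrained English text-to-speech model; the model name is one of the
# checkpoints shipped with the library and is used here only as an example.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

# Synthesize a short sentence and write the result to a WAV file
tts.tts_to_file(text="Generative models can turn text into speech.",
                file_path="speech.wav")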
§.§ AI Understanding
AI has reached a good level in terms of translating different types of information in texts, videos, speech and more into natural language. This is very useful because of the ability of AI to communicate and the ability to transform complex forms of communication into easier text. If we can transform any input into text, then we can easily understand it, and we can even use that output as an input in other technologies, making for much more complete AI models.
§.§ Speech-to-Text
One of the main fields has been speech-to-text technologies, as subtitles and transcriptions are very useful. Applications include Cogram AI <cit.>, Deepgram AI <cit.>, Dialpad AI <cit.>, Fathom Video <cit.>, Fireflies AI <cit.>, Google USM <cit.>, Papercup <cit.>, Reduct Video <cit.>, Whisper <cit.> and Zoom IQ <cit.>. These technologies do not only perform speech-to-text tasks, as some of them do much more. Deepgram AI identifies the speaker, the language and keywords. Dialpad AI includes real-time recommendations, call summaries and the automation of customer touchpoints. Papercup even translates and creates human-sounding voice-overs. Lastly, Zoom has integrated AI into its systems, including features such as chat summaries and e-mail drafts. By combining many generative AI technologies, we can observe how workflows can be optimized.
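As a concrete example of open-source speech-to-text, the sketch below transcribes an audio file with OpenAI's Whisper model mentioned above; the model size and the audio path are placeholders you would replace with your own choices.

import whisper  # pip install openai-whisper

# Load a small pretrained Whisper model and transcribe an audio file.
# "meeting.mp3" is a placeholder path for your own recording.
model = whisper.load_model("base")
result = model.transcribe("meeting.mp3")
print(result["text"])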
Other technologies even turn images into text. These technologies can be used in many fields, such as computer vision, and help AI better understand human-generated content. Some examples of such applications are Flamingo <cit.>, Segment Anything <cit.> and VisualGPT <cit.>. Flamingo is even able to achieve this task on video inputs. For video inputs, we have found TwelveLabs <cit.> and MINOTAUR <cit.>. TwelveLabs extracts key features from a video input, such as action, objects, text on screen, speech and people, and transforms all of that into vector representations. These vectors enable quick search. MINOTAUR tackles query-based video understanding in long-form videos. In this space, another model called MOVIECLIP <cit.> was found very useful, as it models the accurate recognition of visual scenes in movies. Through this technology, we can observe how computers are starting to understand unstructured sets of data effectively.
There are even platforms in which we can transform multiple forms of input into text. Primer AI <cit.> is a tool to understand and act on vast amounts of text, images, audio and video in real time; it helps in understanding and acting on this information to protect security and democracy. As for Speak AI <cit.>, it helps marketing and research teams turn unstructured audio, video and text into competitive insights using transcription and NLP. Through both technologies, we can see how generative AI can help us quickly analyze large and unstructured sets of data, understanding and acting on it through Primer and quickly obtaining insights through Speak AI.
Generative AI has also been found useful for transforming tables of data into text. Some applications of Generative AI for this purpose are Defog AI <cit.>, MURMUR <cit.> and TabT5 <cit.>. MURMUR specifically is capable of understanding unstructured data. If we are able to perfect this technology, it could have major effects on optimizing business decision-making through quickly understanding tabular data.
This technology has also been applied to generative region-to-text modelling. GRiT <cit.> is a transformer that aims for object understanding with (region, text) pairs, where the region locates objects and the text describes them. This can be useful for object detection tasks.
§.§ Business
Generative AI has clear business implications, applying many of the listed technologies such as text, image and video. It can help businesses cut costs by reducing repetitive tasks, or even automate other more creative, costly processes such as designs, marketing documents or slide decks. It can even make new types of AI-powered businesses appear, such as Harvey, which automates legal work, or Truewind, which automates accounting. Although the field is young, we can only imagine how much generative AI will change the way businesses operate, in the ways listed below.
§.§ Marketing
For marketing, generative AI has had a huge effect, as it is able to make creative text and image generation easier. In terms of copywriting, a plethora of applications have already been developed, including Anyword <cit.>, Copy AI <cit.>, Google Workspace - Gmail and Docs <cit.>, Hyperwrite <cit.>, Jasper <cit.>, Letterdrop <cit.>, Regie AI <cit.>, Simplified AI <cit.>, Type AI <cit.> and Writesonic <cit.>. Some of their capabilities are writing emails, website content, drafts, replies, marketing content and product descriptions. We can easily see how the optimization of these processes would be very useful for many businesses. In fact, Regie AI even adapts the tone of the LLM to your company's tone, adapting even more to the business's needs. Here we again observe how businesses can combine many generative AI technologies in order to optimize their processes with Jasper, which does social media posts, emails, blogs and reports.
More specifically for social media content creation, there are some applications like Clips AI <cit.>, Pictory AI <cit.>, Predis AI <cit.>, Tweethunter <cit.> and Tweetmonk <cit.>. Clips AI and Pictory AI repurpose long-form content into social media posts. Predis AI generates video and image posts in your brand language. Both Tweethunter and Tweetmonk generate tweets of your brand's content. We can observe how Generative AI adapts to your brand and quickly automates these processes. Enterprises can also use Generative AI to generate podcasts through Bytepods <cit.>.
Advertisements can also be created through generative AI, as we can see in many apps such as Ad Creative AI <cit.>, Clickable <cit.>, Omneky <cit.>, Pencil <cit.> and Waymark <cit.>. The last of them, Waymark, is very useful as it generates videos based on a scan of the web for local business data. LensAI <cit.> is also useful, as it fine-tunes ad targeting by identifying objects, logos, actions and context and matching them with relevant ads. Storytelling in these ads can likewise be powered by Generative AI, with applications such as AI21 Labs <cit.> and Subtxt <cit.> that help in this matter.
Generative AI can also be used to automate communication with the customer. A series of apps provide personalized chatbots for your business: One Reach AI <cit.>, OpenSight AI <cit.>, Brainfish <cit.> and Yuma AI <cit.>. E-mails can also be automated through Generative AI with tools like InboxPro <cit.>, Lavender <cit.>, Smartwriter <cit.> and Twain <cit.>. Some of these technologies even include social media data and e-mail analytics that can optimize operations. Even platforms with voice assistance, such as Poly AI <cit.>, have been created.
Sales can likewise be powered by Generative AI through the plethora of applications which have already been created. Contact centers can be optimized through applications like Cresta <cit.>, Forethought AI <cit.>, Grain AI <cit.> and Replicant <cit.>, which transform the customer experience. Replicant can handle customer service over the phone, text and chat. Others, such as Cresta and Grain, provide live help to contact centers. Cresta transforms real-time insights into real-time actions, and Grain AI automates note-taking, record-keeping and insight capture for customer conversations. As for Forethought, it aims to automate the customer experience. For sales preparation, an application called Tennr <cit.> was created to generate the perfect meeting prep before every sales call. There is even an app, Copy Monkey AI <cit.>, created for optimizing Amazon listings and your product's organic ranking.
We can observe how companies are investing resources into AI with EinsteinGPT <cit.>, a platform created by Salesforce that creates personalized content across every Salesforce cloud. It will generate content across every sales, service, marketing, commerce, and IT interaction, transforming customer experience.
Visual content can also be powered by Generative AI. Designs can be quickly created from only text prompts, as seen with Microsoft Designer <cit.>, which creates invitations, digital postcards, graphics and more. Even logos can be created through Generative AI, as can be observed with Brandmark <cit.> and Looka AI <cit.>. Brandmark also creates other business-related content such as business cards. For name ideas, you can use Namelix <cit.>, Brandinition <cit.> and Brandsnap <cit.> in order to come up with business names.
Generative AI can also help companies automate repetitive tasks. This can be achieved through many applications like Bardeen AI <cit.>, Magical AI <cit.> and Notion AI <cit.>. These applications, specifically designed for repetitive tasks, are especially useful for companies that want to automate relatively simple processes through Machine Learning.
Generative AI can also be helpful for more strategic, high-level departments of a company. Applications like Rationale AI <cit.> can help in the creation of business analyses through GPT. Applications like Albus ChatGPT <cit.>, ChatGPT in Slack <cit.> and Moveworks <cit.> can help massively in employee management through conversation summaries and employee support automation. Product creation can also be optimized with an application like Cohere AI <cit.>, which offers LLMs to retrieve, generate and classify text in order to create the best products. Generative AI can also be useful for receiving immediate feedback on business ideas through the applications Venturus AI <cit.> and Mixo AI <cit.>, which analyze business ideas.
The analyst's workflow can also be made easier through Generative AI, which helps both in slide generation and in market research. In terms of slide generation, there are several apps that can create presentations through natural language. Some of them are Autoslide AI <cit.>, Canva Docs to Decks <cit.>, ChatBA <cit.>, Decktopus AI <cit.>, Gamma AI <cit.>, Google Workspace AI - Slides <cit.>, Tome AI <cit.> and Slide AI <cit.>. Some of them, like Tome AI, work with just a small text prompt, and others, like Canva Docs, work by introducing long texts, converting documents into slide presentations. Decktopus even creates slide notes, which can be quite useful.
In terms of research, there are already applications in which you can generate answers backed by real-world data through simple natural language queries. These answers come in the form of charts and visualizations that integrate processes and make market research even quicker. Several companies of this type are Alphawatch <cit.>, Dataherald <cit.>, OpenAxis AI <cit.> and Maya <cit.>.
An application that integrates all processes into one is AI Intern IO <cit.> which offers AI for most operations around a business: text, reports, code, marketing, HR documents, legal documents, documentation and translations.
Generative AI can massively help the finance industry's tedious process. BloombergGPT <cit.> is a Large Language Model built from scratch for finance. It can be used for sentiment analysis, named entity recognition, news classification, and question answering, among others. We can see how this could be of massive help for finance professionals. Even for modelling, Quilt Labs AI <cit.> is an AI-powered tool for the transformation of financial data into financial models. Finance can be seen as a great example of how applying generative AI to an industry can help automate processes.
It can also help tedious processes in scientific research. We can see this through applications such as Agolo AI <cit.>, ArxivGPT <cit.>, ConsensusNLP <cit.>, Elicit AI <cit.> and Koala <cit.>. Some capabilities of these applications are finding papers, extracting key claims, spotlighting insights. Specifically ConsensusNLP and Koala are chatbots personalized to scientific research. Fact-checking has also been explored through Generative AI through Golden <cit.>.
Specific industries have been deeply affected by Generative AI. A great example of an affected industry is law. Some applications and companies are Casetext CoCounsel <cit.>, Darrow AI <cit.>, Harvey AI <cit.> and Spellbook Legal <cit.>. Harvey AI assists with contract analysis, due diligence, litigation, and regulatory compliance and can help generate insights, recommendations, and predictions based on data. As for Darrow AI, it does case sourcing and due diligence in order to find law cases for your firm. Regarding Spellbook Legal, it uses GPT-4 to review and suggest the terms of your contract. We can observe how Generative AI is occupying many spaces in law which have the potential to be automated. In this space, there is also an application called TaxGPT <cit.> that takes advantage of GPT to fill out tax documents.
Other industries have also been affected. An example is accounting, with Truewind <cit.>, which applies AI to bookkeeping in order to make fewer errors and improve transparency. Moreover, education has been affected, firstly with the advent of ChatGPT in students' work <cit.> and with companies such as Broadn <cit.>, which uses language models and generative AI to help you create your own private learning course, unique to your learning style. Even modelling could be affected by Generative AI, with companies such as LA LA LAND <cit.> providing an AI-powered digital model studio to show your 3D designs on lifelike models. Lastly, voice acting can be helped by Generative AI through Sonantic <cit.>, a text-to-voice-acting platform which provides editing and direction.
Another example of an industry that has had some generative AI applications is architecture, with SWAPP AI <cit.> and Autodesk Spacemaker <cit.>. SWAPP applies intelligent, advanced algorithms to deliver accurate, detailed, and complete architectural construction documents and BIM models. Regarding Autodesk Spacemaker, it is cloud-based AI software that empowers architects, urban planners and real estate developers to design high-quality site proposals. In fact, Generative AI can also help in the real estate part of the process, as shown by Zuma <cit.> (https://www.getzuma.com/), an AI-powered real estate assistant that automates lead generation.
Lastly, Generative AI can be used to create realistic synthetic data for testing environments. This can be achieved through services such as Hazy <cit.>, Mostly AI <cit.>, Octopize <cit.> and Tonic <cit.>.
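To make the idea of model-based synthetic data concrete, the following is a deliberately simple numpy sketch: it fits a multivariate Gaussian to a numeric table and samples artificial rows from it. Commercial tools such as those listed above use far richer generative models; the column meanings and numbers here are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
# Stand-in "real" table with two numeric columns (e.g. age, income)
real_data = rng.normal(loc=[35.0, 52000.0], scale=[8.0, 9000.0], size=(500, 2))

# Fit a simple generative model: a multivariate Gaussian over the columns
mean = real_data.mean(axis=0)
cov = np.cov(real_data, rowvar=False)

# Sample brand-new, artificial rows that preserve the fitted statistics
synthetic = rng.multivariate_normal(mean, cov, size=500)
print(synthetic[:3])  # first few synthetic rows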
§.§ Gaming
The gaming industry will be greatly helped by Generative AI technologies, because it can draw on image, text and 3D models. 3D models can help with asset creation and text models with storytelling and characters. We can view gaming as a very clear case study of how Generative AI can be used through all parts of the value chain in a given industry.
Generative AI can be used for videogame creation. This can be seen through applications like CSM <cit.>, Illiad AI <cit.> and Latitude <cit.>. Explicitly for game assets, Pixelvibe <cit.> helps in their creation through Generative AI. Moreover, for game textures, Armorlab is software designed for AI-powered texture authoring. There is even now a model called MarioGPT <cit.> designed for open-ended text-to-level generation with LLMs.
Specifically for game characters, we have found Character AI <cit.>, ConvAI <cit.>, InWorld AI <cit.> and RCT AI Chaos Box <cit.>. ConvAI and InWorld AI craft characters through natural language. Just by inserting character settings, you are able to obtain full characters. As for RCT AI Chaos Box, this engine uses the Chaos Box algorithm to analyze real-time player inputs and dynamically generate NPC responses and new storylines based on Deep Reinforcement Learning.
§.§ Music
Music creation can also be greatly helped by Generative AI. This can be achieved by basic text prompts or by other music. This helps artists with song creation and can help with basic songs via just text prompts.
In terms of music generation through natural language, many applications have appeared. They include Aiva <cit.>, ERNIE-Music <cit.>, Harmonai <cit.>, Infinite Album <cit.>, Jukebox <cit.>, Mubert <cit.>, Musico <cit.>, Noise2Music <cit.>, Sonify <cit.>, Soundful <cit.> and Splash AI Beatbot <cit.>. They have the capability of generating music from simple natural language. Musico even reacts to gestures, movements, code and other sounds. Even dance is starting to be transformed into music through a model called EDGE <cit.>. Lastly, music editing can also be powered by Generative AI with applications such as Moises AI <cit.> and SingSong <cit.>.
§.§ Biotech
Biotech is helped by Generative AI technologies in the process of molecule modelling. This can help with both drug discovery and protein modelling, advancing the field. As these technologies advance, biotech could see its own advancements made much easier. Absci Corporation <cit.>, a company listed on the NASDAQ, already uses generative AI in its drug creation process.
§.§ Drug Discovery
Regarding drug discovery, NVIDIA BioNeMo <cit.> is a cloud service for generative AI in drug discovery, in which researchers are provided with generative and predictive biomolecular AI models at scale. There is a plethora of companies that use Generative AI for drug creation, including Absci, Atomic AI <cit.>, BigHat AI <cit.>, Exscientia <cit.>, Menten AI <cit.> and ProteinQure <cit.>. They combine Machine Learning and biological knowledge in order to create drugs.
In terms of protein modelling, notable models include BARTSmiles <cit.>, a generative language model for molecular representation, and AlphaFold <cit.>, a computer program that predicts protein structures for the whole human genome. As well, two companies that center their business operations around protein design with Generative AI are Cradle <cit.> and Profluent <cit.>.
§.§ Brain
Brain models can help people who cannot speak communicate through Generative AI. Although these are young technologies, some promising results can already be seen in this field.
Regarding models that have been created to transform brain signals into text, we have found Meta AI's Speech From Brain <cit.> and Non-Invasive Brain Recordings <cit.>. They both try to decode speech from non-invasive brain recordings. Using Stable Diffusion for Brain Images <cit.> is a new method based on a diffusion model (DM) called Stable Diffusion to reconstruct images from human brain activity.
§ OTHERS
This category was made to fit other models. Firstly, AlphaTensor <cit.> is an AI system for algorithm discovery based on reinforcement learning. The task given to AlphaTensor was to improve the efficiency of matrix multiplication, which occurs in many fundamental computations. Automating the algorithm discovery procedure is intricate, as the space of possible algorithms is enormous. AlphaTensor is therefore trained to play a single-player game where the objective is finding tensor decompositions within a finite factor space. It discovered algorithms that outperform the state-of-the-art complexity for many matrix sizes.
Also, AutoGPT <cit.> has become a very famous model in the Generative AI community. This program, driven by GPT-4, chains together LLM "thoughts", to autonomously achieve whatever goal you set.
§.§ Multimodal
Models can take advantage of many of the listed technologies and combine them into one application. The applications listed here take multiple inputs, which can greatly help AI advancement. Also, projects of multi-tasking agents, such as GATO, could be the future of Generative AI. Although some models, such as text-to-slides, do take advantage of many generative AI technologies, the models here have been selected because they do not fit anywhere else.
Although its image-input capability has not yet been released to the public, the fourth version of GPT, GPT-4, can accept image and text inputs and produce text output, as the GPT-4 technical report <cit.> shows. In the realm of chatbots that can take many forms of data as inputs, ERNIE Bot <cit.> is the chatbot created by Baidu that will include features such as solving math questions, writing marketing copy, answering questions about Chinese literature, generating multimedia responses and answering in many dialects.
Regarding multimodal language models, Kosmos-1 <cit.> is a multimodal language model with several capabilities: language understanding and generation; perception-language tasks, including multimodal dialogue, image captioning and visual question answering; and vision tasks, such as image recognition with descriptions. Regarding Prismer <cit.>, it is a vision-language model with multi-modal experts. Its tasks include image captioning, question answering, object detection and segmentation. This model is competitive with current state-of-the-art vision models, whilst requiring up to two orders of magnitude less training data. As for PaLM-E <cit.>, it is an embodied multimodal language model. On the one hand, PaLM-E was primarily developed to be a model for robotics, and it solves a variety of tasks on multiple types of robots and for multiple modalities (images, robot states, and neural scene representations). At the same time, PaLM-E is a generally capable vision-and-language model. It can perform visual tasks, such as describing images, detecting objects, or classifying scenes, and is also proficient at language tasks, like quoting poetry, solving math equations or generating code.
As for attempts at generalist agents, GATO <cit.> is a single agent that goes beyond text outputs. It follows a multi-modal, multi-task, multi-embodiment generalist policy: the same network can play games, chat and press buttons. Regarding Generally Intelligent <cit.>, it is a company working on the development of generally capable agents. Their aim is to deploy aligned human-level AI systems that can generalize to a wide range of economically useful tasks and assist with scientific research.
As for multimodal cloud services in generative AI, NVIDIA Picasso <cit.> is a cloud service for building and deploying generative AI-powered image, video, and 3D applications. It integrates text-to-image, text-to-video and text-to-3D models.
There is even a framework called HuggingGPT <cit.> that leverages LLMs (e.g., ChatGPT) to connect various AI models in machine learning communities (e.g., Hugging Face) to solve AI tasks. It makes the LLM act as a controller that manages existing AI models to solve complicated AI tasks, with language as a generic interface to empower this. It achieves impressive results in language, vision, speech, and other challenging tasks, which paves a new way towards advanced artificial intelligence.
Concerning Adobe Firefly <cit.>, it is a family of Adobe models that creates images, vectors, videos and 3D models out of text prompts. It is now in Photoshop, where it allows users to add, extend, and remove content from their images with simple text prompts.
§ CONCLUSIONS AND FURTHER WORK
In conclusion, generative AI has already demonstrated its immense potential in revolutionizing various industries and reshaping our interactions with digital content. As these models continue to advance, they offer businesses and individuals unprecedented capabilities in content creation, problem-solving, and decision-making. Their capacity to generate realistic images, audio, text, and other data modalities unlocks novel opportunities for innovation and growth, while also enabling more personalized and efficient experiences. However, as we embrace this powerful technology, it is crucial to address the ethical implications and potential pitfalls associated with its use. For example, there are ethical implications for applications such as ChatDoctor, which can deliver a medical diagnosis. By fostering responsible development and adoption of generative AI, we can harness its transformative potential to shape a more creative, efficient, and prosperous future for businesses and individuals alike.
As future work, this survey is to be updated. Since the launch of ChatGPT, a large share of these apps have been released, and as more technologies appear, this survey will only grow bigger.
|
http://arxiv.org/abs/2306.09661v1
|
20230616073619
|
The Intrinsic Alignment of Galaxy Clusters and Impact of Projection Effects
|
[
"Jingjing Shi",
"Tomomi Sunayama",
"Toshiki Kurita",
"Masahiro Takada",
"Sunao Sugiyama",
"Rachel Mandelbaum",
"Hironao Miyatake",
"Surhud More",
"Takahiro Nishimichi",
"Harry Johnston"
] |
astro-ph.CO
|
[
"astro-ph.CO",
"astro-ph.GA"
] |
The Intrinsic Alignment of Galaxy Clusters and Impact of Projection Effects
Jingjing Shi, Tomomi Sunayama, Toshiki Kurita, Masahiro Takada, Sunao Sugiyama, Rachel Mandelbaum, Hironao Miyatake, Surhud More, Takahiro Nishimichi, Harry Johnston
==================================================
Galaxy clusters, being the most massive objects in the Universe, exhibit the strongest alignment with the large-scale structure. However, mis-identification of members due to projection effects from the large scale structure can occur. We studied the impact of projection effects on the measurement of the intrinsic alignment of galaxy clusters, using galaxy cluster mock catalogs.
Our findings showed that projection effects result in a decrease of the large scale intrinsic alignment signal of the cluster and produce a bump at r_p∼ 1, most likely due to interlopers and missed member galaxies. This decrease in signal explains the observed similar alignment strength between bright central galaxies and clusters in the SDSS cluster catalog. The projection effect and cluster intrinsic alignment signal are coupled, with clusters having lower fractions of missing members or having higher fraction of interlopers exhibiting higher alignment signals in their projected shapes. We aim to use these findings to determine the impact of projection effects on galaxy cluster cosmology in future studies.
galaxies: clusters: general – large-scale structure of Universe – cosmology: observations – cosmology: theory
§ INTRODUCTION
Galaxy clusters are a major probe of dark energy <cit.>. Their abundance and time evolution are sensitive to the growth of structure in the Universe, since they form from rare highest peaks of the initial density field. Cluster cosmology is a major science of many surveys, including Hyper Suprime-Cam (HSC) survey [https://hsc.mtk.nao.ac.jp/ssp/], the Dark Energy Survey (DES) [https://www.darkenergysurvey.org/], the Kilo Degree Survey (KiDS) [https://kids.strw.leidenuniv.nl/], the Rubin Observatory Legacy Survey of Space and Time (LSST) [https://www.lsst.org/], Euclid [https://www.euclid-ec.org/], and the Nancy Grace Roman Telescope [https://roman.gsfc.nasa.gov/].
Cluster shapes are triaxial, originating from the anisotropic matter field and accretion. As a result, cluster shapes are expected to align with the matter field, i.e. intrinsic alignment (IA) (see review papers by ). IA are distinct from the alignments of galaxy shapes that originate from gravitational lensing by foreground attractors. The IA signal has been observed for massive red galaxies <cit.>, but no clear detection has been claimed for blue galaxies <cit.>.
The alignment of galaxy clusters has also been detected <cit.>. <cit.> studied the cluster shape - density correlation using clusters from Sloan Digital Sky Survey-Data Release 8 (SDSS DR8), finding a higher IA amplitude for galaxy clusters than for luminous red galaxies (LRGs). As clusters are the most massive bound structures, studies of cluster shapes offer a unique opportunity to yield insight into dark matter halo shapes <cit.>.
However, the IA amplitude of galaxy clusters is found to be lower than predictions from numerical N-body simulations based on Λ cold dark matter (ΛCDM) cosmology. <cit.> discussed various systematic observational uncertainties that may have caused this discrepancy, including photometric redshift errors, cluster centroiding errors, uncertainty in cluster shape estimation using a limited subsample of member galaxies, and the inclusion of spherical clusters. However, one of the major systematics for optically identified clusters, the so-called “projection effect", has not been properly discussed for measurements of cluster IA.
Projection effects refer to the fact that interloper galaxies along the line-of-sight (LOS) are mistakenly identified as members of galaxy clusters <cit.>. This is a major source of systematic uncertainty for optical clusters whose mass proxy is the number of member galaxies (called richness). It can also boost cluster lensing and clustering signals on large scales, since clusters with a filamentary structure aligned with the LOS direction are preferentially identified by optical cluster finders, which typically detect clusters using red galaxy overdensities in photometric catalogs <cit.>.
To obtain unbiased cosmological constraints using galaxy clusters, the projection effect has to be corrected or modelled accurately <cit.>.
In this work, we study the impact of projection effects on measurements of cluster IA, with the aim of understanding the measured IA of the most massive objects. We also search for new perspectives on projection effects and possible ways to mitigate their impact on cluster observables. We find that projection effects can largely explain the lower signal of the observed cluster IA compared to that of simulated dark matter halos.
The structure of the paper is organized as follows. In
Section <ref>, we introduce our methodology for measuring the correlation function and modeling the signals. In Section <ref>, we introduce the observational data and mock simulation used in this paper. The results on measured IA in observation and mocks — including the impact of projection effects — are presented in Section <ref> and Section <ref>. In Section <ref>, we summarize our results.
§ METHODOLOGY - LINEAR ALIGNMENT MODEL
In this section we briefly describe the leading theory of IA, i.e. the linear alignment model <cit.>, and then define the model to use for the comparison with the IA measurements of the clusters.
The linear alignment model predicts that the intrinsic shape of dark matter halos, and galaxy clusters in this paper, is determined by the gravitational tidal field at the time of formation of the halo or galaxy cluster. That is, the intrinsic “shear”, which characterizes the shape of galaxy cluster, is given as
(γ_1,γ_2)=-C_1/4π G(∂_x^2-∂_y^2,∂_x∂_y)Φ_p,
where Φ_p is the primordial gravitational field and C_1 is a constant. Here
we take the (x,y) coordinates to be on the 2D plane perpendicular to the LOS direction.
Throughout this paper, we employ a distant observer approximation, and in the above equation we take the LOS direction to be along the z-axis direction.
In this paper, we consider the cross-correlation between the IA shear of galaxy clusters and the galaxy density field.
For the latter, we will use the spectroscopic sample of galaxies in the measurement.
We can define the coordinate-independent cross-correlation function as
ξ_ g+( r) ≡γ_+( x; x')δ_g( x'),
with γ_+ being defined as
γ_+( x; x')≡[(γ_1( x)+ iγ_2( x))e^-2iϕ_ r].
Here denotes a notation to take the real part of the cluster shear, r≡ x- x',
and ϕ_ r is the angle measured from the first coordinate axis to the projected separation vector r_p on the sky plane perpendicular to the LOS direction.
Since we can measure only the projected shape of each cluster and the positions of clusters and galaxies are modulated by redshift-space distortion (RSD) <cit.>, the 3D cross-correlation function is generally given as
a function of the 3D separation vector r=(r_∥, r_p), where r_∥ is the component parallel to the LOS direction and r_p is the 2D separation vector perpendicular to the LOS.
Following the formulation in <cit.> <cit.> and as derived in Appendix <ref>,
it is convenient to use the multipole moments of the cross-correlation function
using the associated Legendre polynomials with m=2, denoted as L_ℓ^2:
ξ_ g+(r_p,r_∥)≡∑_ℓ≥ 2ξ_ g+^(ℓ)(r) L^2_ℓ(μ_ r),
where μ_ r is the cosine angle between r and the line-of-sight direction and
ξ^(ℓ)_g+ is the ℓ-th order multipole moment.
Note that the multipole index ℓ starts from 2 (ℓ=2,3,…) and ℒ_2^2(x)=3(1-x^2), ℒ_4^2(x)=15(1-x^2)(7x^2-1)/2, and so forth.
The multipole moments ξ^(ℓ)_g+ can also be expressed in terms of the cross power spectrum using
ξ^(ℓ)_ g+(r)=
i^ℓ∫ k^2 dk/(2π^2) P^(ℓ)_ gE(k) j_ℓ(kr),
where P^(ℓ)_ gE(k) is the corresponding multipole moments of the IA cross power spectrum P_ gE( k).
Assuming the linear alignment model (Eq. <ref>) and the linear Kaiser RSD, the cross-power spectrum is given as
P_ gE( k,z)=
b_g b_K(1-μ_ k^2)/2(1+βμ_ k^2)
P^ NL_ mm(k,z),
where b_K is the linear shape bias parameter <cit.>, b_g is the linear bias parameter of the density sample, β≡ f(z)/b_g,
f is the logarithmic growth rate,
and μ_ k is the cosine angle between k and the LOS direction. In a ΛCDM Universe, for a wide range of redshifts, f(z) ∼Ω_m(z)^0.55.
In the above equation, we used the nonlinear matter power spectrum, P^ NL_mm, including the effect of nonlinear structure formation, which is the so-called
nonlinear alignment model (NLA) <cit.>.
Also note that we assumed the linear Kaiser RSD factor (1+βμ^2), but we will below consider the projected correlation function to minimize the RSD contribution.
The shape bias parameter b_K is related to the IA amplitude parameter A_ IA that is often used in the literature as
b_K=-2A_ IAC_1ρ_ critΩ_ m/D(z),
where D(z) is the linear growth factor and we take C_1ρ_ crit=0.0134 following the convention
<cit.>. Throughout this paper we focus on A_ IA to discuss the IA amplitude of redMaPPer clusters.
Using Eq. (<ref>),
the multipole moments of the cross-correlation function can be found, as derived in
Appendix <ref>, as
ξ^(2)_g+(r)=b_g b_K/6(1+β/7)ξ^(2)_ mm(r),
ξ^(4)_g+(r)=b_g b_K/105βξ^(4)_ mm(r),
and zero otherwise. The multipole moments of the matter two-point correlation function is defined similarly to Eq. (<ref>) using P_ mm^ NL.
When there is no RSD effect, only the lowest order moment (ℓ=2) carries all the IA cross-correlation information, which can be realized by the use of the associated Legendre polynomials <cit.>.
In this paper we consider the projected IA cross-correlation function defined as
w_ g+(r_p)=2∫ dz W(z) ∫^Π_ max_0dr_∥ ξ_ g+(r_∥,r_p; z).
We adopt Π_ max=100 h^-1 Mpc as our fiducial choice.
To estimate the linear bias parameter of the density sample, b_g, we model the galaxy clustering signal using
w_ gg(r_p) = 2∫ dz W(z) f_ corr(r_p, z)∫^Π_ max_0 dr_∥ b_g^2ξ^ NL_ mm(√(r_p^2+r^2_∥), z),
where f_ corr(r_p, z) is Kaiser correction factor given by <cit.>,
f_ corr(r_p, z) = ∫_0^Π_ maxξ_ gg^ lin(r_p, r_∥, z)dr_∥/∫_0^Π_ maxξ_ gg^ lin(√(r_p^2+r^2_∥), z)dr_∥.
ξ_ gg^ lin (r_p, r_∥, z) and ξ_ gg^ lin(r≡√(r_p^2+r_∥^2), z) here are the linear two-point galaxy correlation function in redshift space and real space, respectively, where ξ_ gg^ lin(r, z)= b_g^2 ξ_ mm^ lin(r, z) and the linear galaxy correlation function in redshift space is
ξ_ gg^ lin(r_p, r_∥, z) = ∑_l=0^2ξ_2l(s, z) 𝒫_2l (μ).
s=√(r_p^2+r_∥^2) is the real space separation, μ = r_∥/s, and 𝒫_2l (x) is the lth Legendre polynomial. ξ_0, ξ_2, and ξ_4 are given by
ξ_0(r, z) = (1+2/3β + 1/5β^2) ξ_ gg^ lin(r, z),
ξ_2(r, z) = (4/3β + 4/7β^2)[ ξ_ gg^ lin(r, z) - 3J_3(r, z) ],
ξ_4(r, z) = 8/35β^2 [ ξ_ gg^ lin(r, z) + 15/2 J_3(r,z) - 35/2J_5(r, z) ],
where
J_n(r, z) = 1/r^n∫^r_0 ξ_ gg^ lin(y, z) y^n-1dy.
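For reference, the J_n integrals and the Kaiser multipoles above are straightforward to evaluate numerically. The sketch below (Python; variable names are illustrative) assumes a linear real-space correlation function xi_lin tabulated on a radial grid r_grid that starts close to r=0, so that the cumulative integrals are well approximated.

import numpy as np

def J_n(r_grid, xi_lin, n):
    # J_n(r) = r^(-n) * int_0^r xi_lin(y) y^(n-1) dy, via cumulative trapezoids
    integrand = xi_lin * r_grid ** (n - 1)
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r_grid)
    cumint = np.concatenate([[0.0], np.cumsum(steps)])
    return cumint / r_grid ** n

def kaiser_multipoles(r_grid, xi_lin, beta):
    # xi_0, xi_2, xi_4 as defined in the equations above
    J3 = J_n(r_grid, xi_lin, 3)
    J5 = J_n(r_grid, xi_lin, 5)
    xi0 = (1.0 + 2.0 * beta / 3.0 + beta ** 2 / 5.0) * xi_lin
    xi2 = (4.0 * beta / 3.0 + 4.0 * beta ** 2 / 7.0) * (xi_lin - 3.0 * J3)
    xi4 = (8.0 * beta ** 2 / 35.0) * (xi_lin + 7.5 * J3 - 17.5 * J5)
    return xi0, xi2, xi4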
To compute the model predictions of the projected IA cross correlation (Eq. <ref>),
we assume the ΛCDM cosmology with Ω_ DM=0.236, Ω_b = 0.046, Ω_Λ = 0.718, n_s = 0.9646, σ_8 = 0.817, h = 0.7 (WMAP9 cosmology). For the nonlinear matter power spectrum, we employ Halofit[https://pyhalofit.readthedocs.io/] for the ΛCDM model <cit.>.
We vary the linear bias parameters b_g and b_K (equivalently A_ IA) and estimate the best-fit values by comparing the model predictions with the measurements for the ΛCDM model.
W(z) is the redshift window function <cit.>,
W(z) = p_A(z)p_B(z)/χ^2 dχ /dz[ ∫p_A(z)p_B(z)/χ^2 dχ /dz dz ]^-1,
here p_A, B(z) are the redshift probability distributions of the samples and χ(z) is the comoving distance at redshift z.
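Since the model for w_ g+ is linear in A_ IA once b_g has been fixed from the clustering fit, the best-fit amplitude and its uncertainty follow from a simple weighted least squares against the jackknife covariance described below. A minimal sketch (Python; names are illustrative, and w_template is the model vector evaluated at A_ IA = 1):

import numpy as np

def fit_amplitude(w_data, w_template, cov):
    # Best-fit amplitude of the linear model w = A * w_template under Gaussian errors
    icov = np.linalg.inv(cov)
    den = w_template @ icov @ w_template
    A_best = (w_template @ icov @ w_data) / den
    sigma_A = 1.0 / np.sqrt(den)
    resid = w_data - A_best * w_template
    chi2 = resid @ icov @ resid
    return A_best, sigma_A, chi2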
§ DATA
§.§ BOSS DR12 LOWZ Galaxies
We use SDSS-III BOSS DR12 LOWZ galaxies with spectroscopic redshifts in the range of 0.1≤ z≤0.33 as a biased tracer of the matter field, chosen because of their significant redshift overlap with the cluster sample. We utilize the large-scale structure catalogues[https://data.sdss.org/sas/dr12/boss/lss/] for BOSS <cit.>.
Table <ref> provides an overview of the properties of the density sample. We apply a weighting scheme to the sample, using w=w_ FKP× w_ tot, where w_ tot=w_ sys× (w_ cp+w_ noz-1) for the density data and w=w_ FKP for the density randoms.
§.§ Cluster
We use galaxy clusters identified with the redMaPPer algorithm <cit.> on SDSS DR8 photometry data <cit.>, over an area of about 10,000 deg^2. The algorithm finds optical clusters by identifying overdensities of red-sequence galaxies. We use the publicly available catalog, version v6.3.
For each cluster, the algorithm provides potential brightest central galaxy (BCG) candidates, the cluster richness λ, which is the sum of p_ mem over all candidate members, the photometric redshift z_λ, and the spectroscopic redshift z_ spec if available. p_ mem gives the membership probability of each galaxy belonging to a cluster in the redMaPPer catalog. We choose the galaxy with the highest p_ cen in each cluster as the BCG.
In this paper we use galaxy clusters that have available z_ spec, and select clusters with 20≤λ≤200 and 0.1≤ z_ spec≤ 0.33. We further divide the sample into sub-samples with 20≤λ<30, 30≤λ<40, 40≤λ<55, 55≤λ<200, in order to study the richness dependence of A_ IA. The statistical properties of the clusters are summarized in Table <ref>.
We use the public redMaPPer random catalog, which includes cluster positions, redshifts, richness λ, and weights. The weighted z and λ distributions are the same as in the data. We apply the same z and λ cuts to the random catalog for each cluster sample.
§.§.§ Cluster shape characterization – BCG versus member galaxy distribution
We quantify the shape of each cluster in two ways: using the shape of the BCG, and using the distribution of the member galaxies relative to the BCG. The BCG shapes are obtained by cross-matching with the SDSS DR8 shear catalog <cit.>.
Of the 6,345 selected clusters with 20≤λ≤200, 4,325 have BCG shape measurements.
Alternatively, we follow the method in <cit.> to quantify
the cluster shape using member galaxy positions with respect to the BCG.
Using all cluster members with p_ mem>0.2, the second moments of the projected shape are given as
I_ij=∑_k (θ_i, k-θ_i^ BCG)(θ_j, k-θ_j^ BCG)p_ mem, k/∑_k p_ mem, k,
where i,j ∈1, 2.
The ellipticity components are then defined as
ϵ_1 = I_11-I_22/I_11+I_22, ϵ_2 = 2I_12/I_11+I_22.
The “shear” of cluster shape is estimated as
γ_1,2=ϵ_1,2/(2ℛ),
where ℛ≡ 1-⟨ϵ_i^2⟩ is the shear responsivity <cit.>.
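A minimal sketch of this member-based shape estimator (Python; array names are illustrative: theta are projected member positions of shape (N, 2), theta_bcg the BCG position, p_mem the membership probabilities, and responsivity the ℛ defined above) is:

import numpy as np

def cluster_shear(theta, theta_bcg, p_mem, responsivity):
    # Second moments of the member distribution, weighted by p_mem (as in the equations above)
    d = theta - theta_bcg
    w = p_mem / p_mem.sum()
    I11 = np.sum(w * d[:, 0] ** 2)
    I22 = np.sum(w * d[:, 1] ** 2)
    I12 = np.sum(w * d[:, 0] * d[:, 1])
    e1 = (I11 - I22) / (I11 + I22)
    e2 = 2.0 * I12 / (I11 + I22)
    # Convert ellipticity to "shear" via the responsivity
    return e1 / (2.0 * responsivity), e2 / (2.0 * responsivity)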
§.§ Correlation Function Estimator
For the BOSS LOWZ sample and the spec-z matched redMaPPer clusters, we measure the auto-correlation function
of LOWZ galaxies, ξ_ gg( r),
and the projected IA cross-correlation function between the LOWZ galaxy and the redMaPPer cluster shapes,
ξ_ g+( r).
We use a generalized Landy-Szalay estimator <cit.> for estimating the correlation functions:
ξ̂_g+ = (S_+ D - S_+ R_D)/(R_S R_D),
ξ̂_gg = (DD - 2DR + RR)/RR,
where S_+ is the shape field for the cluster sample, D is the density field for the LOWZ galaxy sample,
and R_S and R_D are random points corresponding to shape sample and density sample, respectively.
S_+ is the +-component of the cluster shear with respect to the vector r ≡ x - x' connecting the cluster position
and the LOWZ galaxy or the density random point (see Eq. <ref>).
For the IA cross-correlation, we consider the projected correlation function:
ŵ_ g+ (r_p) = ∫^Π_ max_-Π_ maxdΠ ξ̂_ g+ (r_∥,r_p).
We compare the measured w_g+ with the theory prediction (Eq. <ref>).
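For illustration only, a naive O(N^2) version of the S_+D-type pair sums entering the estimator above can be written as follows (Python; the actual measurements use TreeCorr). Positions are Cartesian comoving coordinates with the z-axis as the LOS, g1 and g2 are the cluster shear components, and the γ_+ convention follows Eq. (<ref>).

import numpy as np

def s_plus_counts(shape_pos, g1, g2, dens_pos, rp_bins, pi_max):
    # Sum of gamma_+ over shape-density pairs with |Pi| < pi_max, binned in projected separation r_p
    splus_sum = np.zeros(len(rp_bins) - 1)
    npairs = np.zeros(len(rp_bins) - 1)
    for (x, y, z), e1, e2 in zip(shape_pos, g1, g2):
        dx = dens_pos[:, 0] - x
        dy = dens_pos[:, 1] - y
        dpi = np.abs(dens_pos[:, 2] - z)
        rp = np.hypot(dx, dy)
        sel = (dpi < pi_max) & (rp > 0)
        phi = np.arctan2(dy[sel], dx[sel])
        splus = e1 * np.cos(2.0 * phi) + e2 * np.sin(2.0 * phi)
        idx = np.digitize(rp[sel], rp_bins) - 1
        ok = (idx >= 0) & (idx < len(splus_sum))
        np.add.at(splus_sum, idx[ok], splus[ok])
        np.add.at(npairs, idx[ok], 1.0)
    return splus_sum, npairs

The same routine applied to the density randoms gives the S_+R_D term, and normalizing by the R_S R_D pair counts bin by bin yields an estimate of w_g+, with the |Π| < Π_max cut playing the role of the line-of-sight integration.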
§.§ redMaPPer Cluster Mock
To study the impact of projection effects on IA of galaxy clusters, we use the cluster mock catalog constructed in <cit.> (see also ). Here we briefly summarize the mock construction procedures, and refer the readers to <cit.> for more detailed information.
To construct the cluster mock, N-body simulations from <cit.> are used, which were performed with 2048^3 particles in a comoving cubic box with a side length of 1 h^-1 Gpc. The simulations adopt
the Planck cosmology <cit.>.
The particle mass is 1.02× 10^10 h^-1 M_⊙. Halos are identified using the Rockstar halo finder <cit.>, and M_200m is adopted as the halo mass, which is the total mass within R_200m, the radius within which the mean density is 200 times the mean mass density ρ̅_m. For our purpose, we use the simulation snapshots and halo catalogs at z=0.25, which is the mean redshift of the clusters. We have 19 realizations of the N-body simulation and cluster mock.
Mock galaxies are populated into halos with mass M_200m>10^12 h^-1 M_⊙ using a halo occupation distribution (HOD) prescription <cit.>.
The HOD parameters are chosen to match with the abundance and lensing measurements of the clusters.
Instead of distributing the satellite galaxies with a Navarro-Frenk-White profile <cit.>, the satellites are populated using the positions of randomly selected member particles in each halo.
As a result, the satellites distribution within the halo traces the non-spherical halo shape, which is also used as one of the validation tests in Appendix <ref>.
The photometric redshift uncertainty, which is the main source of the projection effects, is modeled by assuming a specific projection length, d_ proj. In this work, we use the mock with d_ proj=60 h^-1 Mpc.
The cluster finder which mimics the redMaPPer algorithm <cit.> is then run on the red-sequence mock galaxies, producing the mock cluster catalog that includes the true richness λ_ true, the observed richness λ_ obs, and the membership probability p_ mem. The galaxy in the most massive halo in each identified cluster is considered as the central galaxy of the cluster.
Similarly to the observations, we divide the mock cluster sample into subsamples in various richness bins, using both λ_ obs and λ_ true. We use halos with M_200m>10^12 h^-1 M_⊙ as density tracers δ_h of the matter field. The properties of the selected cluster samples are shown in Table <ref>.
§.§.§ Cluster shape characterization
For each galaxy cluster in the mock, we calculate the observed cluster shape γ_ obs using the member galaxies with p_ mem>0.2, using Eq. (<ref>).
Unlike the observations, the mock cluster catalogs provide the true positions of the satellite galaxies as well as the dark matter particles.
Thus, we can calculate the intrinsic cluster shear γ_ true using the satellite galaxy positions and γ_ DM using the DM particle distribution (see Appendix <ref> for details of the calculation). The IA signal measured from γ_ true agrees very well with that from γ_ DM (see Appendix <ref>). In the following, we therefore take mock clusters selected using λ_ true with shapes calculated using
γ_ true as the “mock true" sample, and mock clusters selected using λ_ obs with shapes γ_ obs as the “mock observe" sample.
We use TreeCorr <cit.> to compute the correlation functions. We measure the signal as a function of transverse comoving separation in 25 logarithmic bins between 0.1 and 200 h^-1 Mpc.
We take Π_ max=100 h^-1 Mpc and 20 linear bins for r_∥∈[-100, 100] h^-1 Mpc.
To estimate the covariance matrix, we divide the cluster sample into 50 jackknife regions of approximately equal area on the sky, and compute the cross-correlation function by excluding one region each time <cit.>. For the mock cluster sample, we divide the simulation box into 64 sub-boxes of equal volume for the jackknife covariance matrix estimation.
We restrict the analysis to mildly non-linear scales of r_p>6 h^-1 Mpc. The size of each jackknife patch is about 14 deg, which roughly corresponds to 70 h^-1 Mpc at z=0.1, so we take 70 h^-1 Mpc as the maximum scale in the fitting.
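The delete-one jackknife covariance used here can be written compactly as follows (Python sketch; w_jk collects the signal vector re-measured with each of the N_jk regions omitted in turn).

import numpy as np

def jackknife_covariance(w_jk):
    # w_jk has shape (N_jk, N_bin); returns the (N_bin, N_bin) covariance estimate
    n_jk = w_jk.shape[0]
    diff = w_jk - w_jk.mean(axis=0)
    return (n_jk - 1.0) / n_jk * diff.T @ diff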
§ RESULTS
§.§ IA of Clusters in SDSS
The measured cross correlation functions of the galaxy density field and the cluster shape field are shown in Figure <ref>. Here we used the cluster shapes measured using positions of the member galaxies relative to the BCG in each cluster. We obtain a clear detection of IA signal in all richness bins, meaning that cluster shapes have correlations with the surrounding large-scale structures.
The IA amplitude, A_ IA, is obtained by fitting NLA model to the measurement, as introduced in Section <ref>.
However, A_ IA is degenerate with bias parameter b_g of the galaxy density sample. We obtain b_g=1.73± 0.05 by measuring and fitting the projected clustering signal of LOWZ galaxies to the model (Eq. <ref>), as shown in Figure <ref>. We have good fits of the model prediction, with reduced χ^2 value of 1.02.
Our result for the LOWZ galaxy bias is consistent with the previous measurement, b_g=1.77± 0.04, in
<cit.>.
We ascribe the slight difference to the different redshift range, where they used 0.16<z<0.36 compared to our range, 0.10≤ z≤0.33.
The IA amplitude of each subsample can be found in Table <ref>. The NLA model gives a good fit to the measured w_g+ in the fitting range of 6<r_p<70 h^-1 Mpc for each cluster sample. However, at small scales, the model predictions are much lower than the measured signal. The IA amplitude, A_ IA, does not show a clear dependence on cluster richness.
This contradicts the results found for the shapes of halos in simulations <cit.>, where A_ IA increases with halo mass. We find that this is mainly caused by projection effects, as we discuss in detail in Section <ref>.
§.§.§ Tests for systematics
In Figure <ref> we study potential systematic effects in our IA measurements. The upper panel shows the measured correlation function between the cross-component of the cluster shape, γ_×, and the galaxy density field, w_g×, for the sample with 20≤λ<200.
This cross-correlation should vanish due to parity symmetry if the measurement is not affected by unknown systematic effects.
We also show the IA cross-correlation function, w_g+, measured by integrating the original 3D IA correlation function only over large line-of-sight separations, 150<|Π|<500 h^-1 Mpc. This cross-correlation is expected to have a very small signal if the cluster redshifts are accurate or if there is no significant contamination from fake clusters
due to the projection effect.
The measured w_g+ for the large |Π| separations shows a very small signal. Hence we conclude that our measurements are affected neither by systematics that would source a ×-component nor by fake clusters.
There are other potential systematic effects that could affect our IA measurements. These include photometric redshift errors, errors in the cluster shape estimation arising from the limited number of member galaxies, the miscentering effect, contamination from merging clusters, and incompleteness of the cluster sample or selection function.
<cit.> presented tests of the above systematic effects for the redMaPPer cluster sample, and showed that the most significant one arises from photo-z errors.
Since we use only the clusters that have spectroscopic redshifts, we conclude that our IA measurements are not affected by the photo-z errors.
However, we below show that the projection effect due to large-scale structure surrounding the clusters causes a systematic contamination to the IA measurements.
§.§ IA of Clusters in Mock - Impact of Projection Effect
In Figure <ref> we study the impact of the projection effect on the IA correlation functions using the mock catalog
of clusters. To do this, we compare the IA correlation functions for clusters using the true or “observed” richness (λ_ true or λ_ obs) and/or the true or “observed” shape estimates (γ_ true or γ_ obs), where the observed quantities are affected by the projection effect.
The figure shows that the IA correlation function using the observed quantities (λ_ obs and
γ_ obs) has an amplitude about a factor of 2 smaller than that for the non-contaminated clusters (λ_ true and γ_ true).
The solid orange curve shows the result when selecting clusters with λ_ obs but using γ_ true, which has an amplitude very similar to that of the non-contaminated clusters (λ_ true
and γ_ true). This comparison shows that the smaller amplitude in the (λ_ obs, γ_ obs) case is caused mainly by the projection effect on the shape measurement (γ_ obs versus γ_ true).
The A_ IA values estimated from w_h+ for the different samples are given in Table <ref>. Figure <ref> only shows the result for the cluster sample with 20≤λ<200; the measurement and fitting results for the other richness bins are shown in Appendix <ref>.
When comparing the solid and dashed lines in Figure <ref>, we notice the existence of a bump in w_h+ around r_p ∼ 1 h^-1 Mpc for the case with projection effects. Here 1 h^-1 Mpc roughly corresponds to the aperture size used in the cluster finder <cit.>.
We will show later that this specific imprint of projection effects is likely caused by non-member interlopers that are nevertheless identified as cluster members, and by real member galaxies that are missed by the cluster finder.
§.§.§ f_ true and f_ miss
As we have found, the projection effect
impacts the shape estimation of clusters.
There are two effects: one is caused by including interlopers (non-member galaxies) in the cluster members, and the other
is caused by missing real member galaxies, when estimating the cluster shape.
To study how these two effects cause a contamination to the IA correlation function, we define the following quantities:
* f_ true = ∑_d_i≤ R_c p^ true_ mem, i/λ_ obs, which is the true member fraction of identified members in each cluster. This quantity is the same as that used in <cit.>,
* f_ miss = 1. - n_ true, mem(<R_c)/λ_ true, which is the fraction of true members missed in the membership identification in each cluster.
Here p_ mem,i^ true is the membership probability of the i-th true member galaxy identified by the
finder, R_c is the cluster radius used in the finder, and n_ true, mem is the number of true member galaxies among all member galaxies. Note
0<f_ true≤ 1 by definition, and f_ true=1 means that the finder-identified member galaxies are true member galaxies that belong to the cluster, and no interlopers contaminate the true membership (however, all the true members are not necessarily identified). On the other hand,
a low f_ true indicates a higher contamination fraction of interlopers. f_ miss informs how many true member galaxies are not identified as member galaxies by the cluster finder.
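In code, these two per-cluster diagnostics amount to the following (illustrative Python sketch; the inputs come from matching the finder-identified members to the true halo membership in the mock).

import numpy as np

def f_true(p_mem, is_true_member):
    # Sum of p_mem over identified members that truly belong to the cluster,
    # divided by the observed richness lambda_obs = sum of all p_mem
    return p_mem[is_true_member].sum() / p_mem.sum()

def f_miss(n_true_identified, lambda_true):
    # Fraction of true members (within R_c) that the finder failed to identify
    return 1.0 - n_true_identified / lambda_true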
In Figure <ref>, we show the ratio of w_h+ (γ_ obs) versus w_h+(γ_ true) for samples with low f_ true (f_ miss) and high f_ true (f_ miss) separately.
If the ratio between w_h+(γ_ obs) and w_h+(γ_ true) is close to 1 for a sub-sample, it means the measured cluster shape/IA is less affected by the projection effects. On the contrary, the more the ratio deviates from unity, the more the projection effects drive the measured shape/IA away from the underlying true signal. Figure <ref> shows that the impact of projection effects on the large-scale IA signal is weaker for clusters with high f_ true and low f_ miss, compared to the clusters with low f_ true and high f_ miss.
The amplitude of the bump at r_p∼ 1 h^-1 Mpc is significantly decreased for samples with higher f_ true and higher f_ miss.
As shown in Figure <ref>, the bump only appears when the projection effect is included in the mock, i.e. for w_h+ (γ_ obs).
§.§.§ Coupling between cluster IA and projection effects
Cluster IA and projection effects are coupled with each other. In Figure <ref>, we compare the IA signal of low f_ true (f_ miss) and high f_ true (f_ miss) sub-samples. The large-scale IA amplitude is higher when f_ miss or f_ true is higher, for both γ_ obs and γ_ true. The coupling between cluster IA and projection effects are illustrated by the cartoons shown in Figure <ref>.
For clusters with their major axis (orientation) perpendicular to the LOS direction, the measured IA is higher, since their projected shapes appear more elliptical and we measure the cross-correlation between the projected shapes and the density field. These clusters also tend to have LSS structures, such as filaments, that are perpendicular to the LOS direction. The missed member galaxy fraction f_ miss is higher, since the projected member galaxy distribution is more dispersed; and the contamination from interlopers is lower, since there are fewer galaxies outside the cluster along the LOS, thus f_ true is higher. In contrast, for clusters with their major axis along the LOS direction: the measured IA is lower, and such clusters are more likely to have LSS structures along the LOS; they are less likely to miss galaxy members (lower f_ miss), since the members are concentrated in the inner region; and the contamination from interlopers along the LOS is higher (lower f_ true). In both cases, the outer region of the cluster is affected more, since the member number density decreases with distance from the cluster center. This likely explains the existence of the bump at r_p ∼ 1 h^-1 Mpc, which is also the typical cluster boundary.
The above picture is supported by Figure <ref> in Appendix <ref>, where we show that clusters with lower f_ true and lower f_ miss tend to have their major axis parallel with the LOS direction.
In summary, the above picture explains the coupling between cluster IA and f_ miss, f_ true.
§.§ Dependence on Cluster Richness
The impact of the projection effects on cluster IA is independent of the cluster richness, as shown in Figure <ref>. The ratio of w_h+(obs), with projection effects, to w_h+(true), without projection effects, at scales of 6<r_p<70 h^-1 Mpc is roughly constant and does not depend on the richness of the clusters.
In Figure <ref>, we plot the measured A_ IA versus the cluster mean richness for clusters in the observations, and for clusters in the mock with w_h+(true) (filled squares) and w_h+(obs) (open squares). The A_ IA from the observations agree very well with the results using w_h+(obs) from the mock, indicating that our mock construction and the inclusion of the projection effects are reasonable. A weak increase of A_ IA with cluster richness can be seen for clusters free of projection effects; however, this dependence cannot be seen once projection effects are included.
We further derive the A_ IA–halo mass relation for galaxy clusters and compare it with the prediction from N-body simulations in Appendix <ref>.
§ DISCUSSION
§.§ Cluster IA using BCG shape versus member galaxy positions
The IA of the BCGs is shown in Figure <ref>. BCGs show a similar IA amplitude to the clusters they lie in, indicating a good alignment of the BCG orientations with respect to the member galaxy distribution of the clusters. If we assume that the member galaxy distributions trace the dark matter halo shapes well, then the results in Figure <ref> could hint at a rather good alignment between BCGs and dark matter halos.
However, previous studies <cit.> showed that central LRGs are not perfectly aligned with their dark matter halos, with a misalignment angle of ∼35 deg. Recent work by <cit.> further showed that misalignment angles are likely to be mass dependent. Nevertheless, the good alignment shown in Figure <ref> seems to be in contradiction with expectations from these previous studies. We find this is mainly caused by projection effects on the observed IA of clusters, which decrease the measured cluster IA signal based on member galaxy positions. If the impact on cluster IA is left uncorrected, the inferred misalignment angle between BCGs and clusters is smaller than it should be.
§ SUMMARY
We measured the IA of galaxy clusters by cross-correlating the shapes of clusters with LOWZ galaxies at 0.1≤ z≤0.33. We detected a positive IA signal, indicating that clusters point towards the density field. We also divided the sample into four richness bins, enabling us to study the dependence on cluster richness.
We investigated the impact of projection effects on the measured IA of clusters using mock cluster catalogues. The inclusion of projection effects decreases the measured IA signal by a factor of ∼2.5, almost independently of the cluster richness. Projection effects predominantly impact the measured cluster shapes, by including interlopers that are not members of the clusters and by missing true members. Consequently, projection effects lead to a smaller observed misalignment angle between BCGs and clusters than the underlying one.
In our study, we discovered a correlation between cluster IA and projection effects. Clusters oriented parallel to the LOS are less likely to have undetected members and more likely to have interlopers, and their projected shapes are less elliptical and exhibit weaker alignment signals. This can be attributed to their likely location within a filamentary structure along the LOS direction. Conversely, clusters oriented perpendicular to the LOS direction display a more elliptical projected shape and a stronger IA signal; they also tend to have a higher fraction of missed cluster members and a lower fraction of interlopers.
The measured IA strength, A_ IA, in the cluster mock with projection effects agrees well with the observations. The observed A_ IA in both the real data and the mock observe clusters barely depends on cluster richness, while a weak dependence on richness does exist if we can correctly identify the true cluster members without any contamination.
Our work showed that IA measurements of galaxy clusters can be improved by identifying interlopers and by including the true member galaxies in the outer region, leading to a much higher signal-to-noise detection of cluster IA. High signal-to-noise detection of cluster IA is crucial for applying IA as a novel cosmological probe.
With more and more spectroscopic data becoming available, we expect to suppress the impact of projection effects significantly. We leave the effort of removing projection effects for galaxy clusters to future work.
§ ACKNOWLEDGEMENTS
We thank Teppei Okumura, Elisa Chisari, Ravi K. Sheth, Atsushi Taruya, and Khee-Gan Lee for enlightening discussion/comments on this work.
This work was supported in part by World Premier Inter- national Research Center Initiative (WPI Initiative), MEXT, Japan, and JSPS KAKENHI Grant Numbers JP19H00677, JP20H05850, JP20H05855, JP20H05861, JP21H01081, and JP22K03634,
and by Basic Research Grant (Super AI) of Institute for AI and Beyond of the University of Tokyo.
The authors thank the Yukawa Institute for Theoretical Physics at Kyoto University. Discussions during the YITP workshop YITP-W-22-16 on “New Frontiers in Cosmology with the Intrinsic Alignments of Galaxies” were useful to complete this work.
J. Shi and T. Kurita also thank Lorentz Center and the organizers of the hol-IA workshop: a holistic approach to galaxy intrinsic alignments held from 13 to 17 March 2023.
§ NUMERICAL IMPLEMENTATION OF TWO-POINT CORRELATION FUNCTION
We here review the three-dimensional two-point statistics of shear.
The goal of this section is to derive Eq. (<ref>) in the main text.
§.§ Two-point Statistics
We assume the distant-observer (plane-parallel) approximation throughout this section.
The shear of a galaxy at a position x is given by
γ(x) = γ_1(x) + iγ_2(x).
This is a spin-2 quantity on the sky plane perpendicular to the line-of-sight direction (LOS).
To obtain the coordinate-independent shear for the two-point correlation function, we define the rotated shear with the radial and cross components towards the other galaxy in a pair at a position x' as
γ_+, ×(x; x') ≡ γ(x) e^-2iϕ_r,
where r ≡ x - x' is the separation vector and ϕ_r is the angle measured from the first coordinate axis to the projected separation vector r_p on the sky plane.
The two-point cross-correlation function of the galaxy density and shear is defined by
ξ_γ(r)
≡ ⟨γ_+, ×(x; x') δ_g(x')⟩
= ⟨γ(x) δ_g(x')⟩ e^-2iϕ_r,
where the radial and cross components correspond to the real and imaginary parts, ξ_g+ = Re ξ_γ and ξ_g× = Im ξ_γ, respectively.
In Fourier space, we start with the Fourier transform of Eq. (<ref>):
γ(k) = γ_1(k) + iγ_2(k).
As in the case of configuration space, we define the coordinate-independent quantities in Fourier space, called E/B modes, with a similar rotation as
E(k) + iB(k) ≡ γ(k) e^-2iϕ_k,
where ϕ_k is the angle measured from the first coordinate axis to the wave vector k on the sky plane.
The cross power spectrum of the galaxy density and shear is thus given by
(2π)^3 δ_D(k + k') P_γ(k)
≡ ⟨[E(k) + iB(k)] δ_g(k')⟩
= ⟨γ(k) δ_g(k')⟩ e^-2iϕ_k,
where the E- and B-mode spectra correspond to P_gE = Re P_γ and P_gB = Im P_γ, respectively.
From Eqs. (<ref>) and (<ref>), we obtain the relation between the correlation function and the power spectrum,
ξ_γ(r)
= ∫ d^3k/(2π)^3
P_γ(k) e^2i(ϕ_k - ϕ_r) e^ik·r.
These statistics are anisotropic with respect to the LOS due to the RSD and the projection of the galaxy shape onto the sky plane: ξ_γ(r) = ξ_γ(r_p, r_∥) and P_γ(k) = P_γ(k_p, k_∥), respectively.
The projected correlation function is defined by the integral of the correlation function over the LOS:
w_γ(r_p; Π_ max, z)
= ∫_-Π_ max^Π_ max dr_∥ ξ_γ(r_p, r_∥, z),
where Π_ max is the projection length along the LOS direction, which an observer needs to specify; as our default choice, we adopt Π_ max=100 h^-1 Mpc.
This expression corresponds to the projected correlation at a single, representative redshift.
If we take into account the redshift dependence, we can follow the method in <cit.> as
w_γ(r_p; Π_ max, z̅)
≡ ∫ dz W(z) w_γ(r_p; Π_ max, z),
where W(z) is the redshift distribution of the galaxy density and shape tracers, defined as
W(z)=p_g(z)p_γ(z)/χ^2dχ/dz[∫dz p_g(z)p_γ(z)/χ^2dχ/dz]^-1.
§.§ Expression with Spherical Bessel Function
To numerically evaluate the correlation function ξ_γ, one has to compute the transform in Eq. (<ref>) from the input model P_γ.
The standard method is to use the isotropy around the LOS on the sky plane and integrate it in the cylindrical coordinates <cit.>.
In this work, we employ the spherical coordinates and use an alternative expression with the spherical Bessel function derived in <cit.>.
We here briefly review the derivation.
First, we decompose the model power spectrum into the multipoles of the associated Legendre polynomials with m=2, ℒ_ℓ^2, as
P_γ(k_p, k_∥) =
P_γ(k, μ_k) =
∑_ℓ≥ 2 P_γ^(ℓ)(k) ℒ_ℓ^2(μ_k),
where μ_k ≡ k̂·ẑ = k_∥/k is the cosine of the angle between the wave vector k and the LOS direction ẑ.
Note ℒ_2^2(x)=3(1-x^2), ℒ_4^2(x)=15(1-x^2)(7x^2-1)/2, and so forth.
Substituting Eq. (<ref>) into Eq. (<ref>) and employing the spherical coordinates, we have
ξ_γ(r)
= ∑_ℓ≥ 2 ∫ k^2 dk/2π^2 P_γ^(ℓ)(k)
∫ dΩ_k/4π ℒ_ℓ^2(μ_k)
e^2i(ϕ_k - ϕ_r) e^ik·r.
Recalling the definition of the spherical harmonics,
Y_ℓ^m(k̂) = N_ℓ^m ℒ_ℓ^m(μ_k) e^imϕ_k,
with N_ℓ^m being the normalization factor
N_ℓ^m = √((2ℓ+1)/4π(ℓ-m)!/(ℓ+m)!),
and using the plane-wave expansion
e^ik·r = 4π∑_ℓ,m i^ℓ j_ℓ(kr) Y_ℓ^m*(k̂) Y_ℓ^m(r̂),
we carry out the angle average of the wave vector as
∑_ℓ≥ 2 ∫ k^2 dk/2π^2 P_γ^(ℓ)(k)
∫ dΩ_k/4π ℒ_ℓ^2(μ_k)
e^2i(ϕ_k - ϕ_r) e^ik·r
= ∑_ℓ≥ 2 ∫ k^2 dk/2π^2 P_γ^(ℓ)(k)
∫ dΩ_k/4π (N_ℓ^2)^-1 Y_ℓ^2(k̂)
× 4π∑_ℓ',m' i^ℓ' j_ℓ'(kr) Y_ℓ'^m'*(k̂) Y_ℓ'^m'(r̂) e^-2iϕ_r
= ∑_ℓ≥ 2 i^ℓ ∫ k^2 dk/2π^2 P_γ^(ℓ)(k)
j_ℓ(kr) (N_ℓ^2)^-1 Y_ℓ^2(r̂) e^-2iϕ_r
= ∑_ℓ≥ 2 [i^ℓ ∫ k^2 dk/2π^2 P_γ^(ℓ)(k)
j_ℓ(kr)] ℒ_ℓ^2(μ_r).
In the second equation, we have used the orthogonality
∫ dΩ_k Y_ℓ^m(k̂) Y_ℓ'^m'*(k̂) = δ_ℓℓ'δ_mm'.
By comparing this result and the multipoles of the correlation function defined by
ξ_γ(r_p, r_∥) = ∑_ℓ≥ 2 ξ_γ^(ℓ)(r) ℒ_ℓ^2(μ_r),
where μ_r ≡ r̂·ẑ = r_∥/r, we obtain the expression of
the multipoles
ξ_γ^(ℓ)(r) = i^ℓ ∫ k^2 dk/2π^2 P_γ^(ℓ)(k)
j_ℓ(kr),
which can be computed by the use of FFTlog algorithm <cit.>.
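For quick cross-checks of the FFTLog evaluation, the transform can also be performed by direct quadrature, e.g. with the sketch below (Python; k_grid and P_ell are a tabulated multipole of the power spectrum, and in practice the integrand may require a high-k cutoff or damping for the oscillatory integral to converge).

import numpy as np
from scipy.special import spherical_jn

def xi_multipole(r, ell, k_grid, P_ell):
    # xi^(ell)(r) = i^ell * int k^2 dk / (2 pi^2) P^(ell)(k) j_ell(k r)
    # (for the even multipoles used here, i^ell is real)
    integrand = k_grid ** 2 * P_ell * spherical_jn(ell, k_grid * r) / (2.0 * np.pi ** 2)
    return np.real(1j ** ell) * np.trapz(integrand, k_grid)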
Let us consider the linear model, i.e. linear alignment model <cit.> with Kaiser formula <cit.>, as an example.
The model power spectrum is given by
P_γ(k, μ_k) = (1-μ_k^2)/2 (1+βμ_k^2) b_g b_K P_mm(k),
where β ≡ f/b_g, P_mm is the linear matter power spectrum, and b_g and b_K ≡ -2A_IA C_1 ρ_crit Ω_m/D̅ are the linear bias of the density sample and the shape bias, respectively.
The multipole coefficients of the associated Legendre polynomials then become
P_γ^(2)(k) = 1/6(1+β/7) b_g b_KP_mm(k),
P_γ^(4)(k) = 1/105β b_g b_K P_mm(k),
and zero otherwise.
Plugging these into Eq. (<ref>), we obtain the multipoles of correlation function with the Hankel transforms of the input matter power spectrum:
ξ_γ^(2)(r) = 1/6(1+β/7) b_g b_K ξ_mm^(2)(r),
ξ_γ^(4)(r) = 1/105β b_g b_K ξ_mm^(4)(r),
where we have defined the multipoles of matter correlation function:
ξ_mm^(ℓ)(r) ≡ i^ℓ ∫ k^2 dk/2π^2 P_mm(k)
j_ℓ(kr).
Once we prepare these multipoles, we can obtain the projected correlation function by integrating over the LOS as in Eq. (<ref>),
w_γ(r_p; Π_ max, z)
= ∫_-Π_ max^Π_ max dr_∥ ξ_γ(r_p, r_∥, z)
= ∑_ℓ≥ 2 ∫_-Π_ max^Π_ max dr_∥ ξ_γ^(ℓ)(r)
ℒ_ℓ^2(μ_r),
with μ_r = √(r_∥^2 / (r_p^2 + r_∥^2)).
§ IA OF CLUSTERS WITH VARYING SHAPE ESTIMATORS IN MOCK
We check how different shape estimators affect the measured IA of galaxy clusters in the mock simulation. The shapes of galaxy clusters are measured using:
* dark matter particle distribution (DM),
I_ij=∑_n m_n x_ni x_nj/r_n^2/∑_n m_n,
where m_n is the mass of the nth particle within the halo, x_ni, x_nj (i,j=1,2) are the position coordinates of this particle with respect to the centre of cluster, and r_n is the distance of the particle to the cluster center;
* satellite distribution within dark matter halos (Halo Sat),
I_ij=∑_n x_ni x_nj/N_g,
where x_ni, x_nj (i,j=1,2) are the positions of nth satellite galaxy with respect to the centre of cluster, and N_g is the total number of satellite galaxies used for the calculation;
* identified member galaxy distribution (RM Mem), I_ij is calculated using Eq. <ref>, except that we use member galaxies identified by the cluster finder;
* identified members that truly belong to the clusters (RM True Mem), also using Eq. <ref>.
Figure <ref> shows that w_h+ measured using γ(DM) has the strongest signal, and that the satellite distribution traces the DM distribution rather well, showing only a slightly weaker IA signal, as indicated by the blue line. This is expected since the satellite galaxies are populated following the dark matter distribution.
IA measured using the identified member galaxy distribution, γ(RM Mem), shows the lowest signal, with a bump at r_p ∼ 0.8 h^-1 Mpc. If interlopers are removed from the shape calculation, the bump disappears and the IA signal increases slightly, as shown by the green line. However, the IA signal is still much lower than the one measured using the DM and satellite galaxy distributions, indicating that another factor, i.e. the satellites that are missed by the algorithm, is also responsible for decreasing the IA signal.
§ CLUSTERS OF VARIOUS RICHNESS BINS IN THE MOCK
Figure <ref> shows the IA of clusters in the mock in various richness bins and the corresponding NLA fitting results. The IA signal of the mock true samples is obtained by selecting clusters using λ_ true and measuring shapes γ_ true using the satellites within halos. The IA signal of the mock observe samples is obtained by selecting clusters using λ_ obs and measuring shapes γ_ obs using the identified cluster members, as in the observations. The IA of mock observe is lower than that of mock true in all richness bins. The NLA model fits the signal well in the range of 6<r_p<70 h^-1 Mpc, and the resulting A_ IA are summarized in Table <ref>.
§ CLUSTER ORIENTATION AND PROJECTION EFFECTS
Figure <ref> shows the distribution of the orientation of clusters with respect to the LOS direction, separately for clusters with lower and higher f_ true (f_ miss). The cluster orientation is obtained by calculating the major eigenvector of the three-dimensional inertia tensor computed from the dark matter particle distribution,
I_ij=∑_n m_n x_ni x_nj/r_n^2/∑_n m_n,
where x_ni, x_nj (i,j=1,2,3) are the positions of the nth particle with respect to the centre of the cluster. The angle between the major axis of the halo and the LOS direction is characterized by μ≡ |cosθ|. Clusters selected using f_ true≤ 0.75 or f_ miss≤ 0.1 tend to have their major axis parallel to the LOS direction. On the other hand, clusters with f_ true>0.75 do not show a strong orientation preference, while clusters with f_ miss>0.1 show a clear tendency for the major axis to be perpendicular to the LOS direction. Figure <ref> shows the distribution for clusters with 20≤λ_ obs<200 only; the results stay the same when we use different λ_ obs ranges.
§ DEPENDENCE ON HALO MASS AND REDSHIFT OF A_ IA
Figure <ref> shows how A_ IA varies with halo mass and redshift. The lines are results obtained from simulations, where the halo shapes are measured using Eq. <ref>, and the dots with error bars are results from the observations. The halo masses of the clusters are obtained using the mass–richness relation from <cit.>, where a weak lensing analysis was performed for the clusters at 0.1<z≤ 0.33. <cit.> parameterized the relation as M=M_0 (λ/λ_0)^α, where logM_0 = 14.344 ± 0.031, α=1.33^+0.09_-0.10, and λ_0=40. We use the mean richness value of each sub-sample to do the conversion. The simulations show that A_ IA increases with halo mass and redshift; however, the redshift dependence is very weak, and almost absent, for halos with M_h>10^14 h^-1 M_⊙. The observed A_ IA–M_h relation is clearly much lower than that from the dark matter halo simulations, which is mainly due to projection effects.
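For concreteness, the richness-to-mass conversion quoted above amounts to the following (illustrative Python sketch; parameter uncertainties are ignored and the mass is in the units adopted by the cited lensing analysis).

def richness_to_mass(lam, log10_M0=14.344, alpha=1.33, lam0=40.0):
    # M = M_0 * (lambda / lambda_0)^alpha
    return 10.0 ** log10_M0 * (lam / lam0) ** alpha

# e.g. richness_to_mass(40.0) gives ~2.2e14 and richness_to_mass(25.0) gives ~1.2e14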
|
http://arxiv.org/abs/2306.04560v1
|
20230607160721
|
The lifted functional approach to mean field games with common noise
|
[
"Mark Cerenzia",
"Aaron Palmer"
] |
math.OC
|
[
"math.OC",
"math.AP",
"math.PR"
] |
The lifted functional approach to mean field games with common noise
Mark Cerenzia, Aaron Palmer
=====================================================
We introduce a new path-by-path approach to mean field games with common noise that recovers duality at the pathwise level. We verify this perspective by explicitly solving some difficult examples with linear-quadratic data, including control in the volatility coefficient of the common noise as well as the constraint of partial information. As an application, we establish the celebrated separation principle in the latter context. In pursuing this program, we believe we have made a crucial contribution to clarifying the notion of regular solution in the path dependent PDE literature.
§ INTRODUCTION
This paper offers a new perspective on certain
classes of forward-backward systems
of stochastic partial differential equations
that arise naturally in mean field game theory and the theory of optimal control with partial information.
The systems arising from either of these fields share the following major difficulty:
although
the noise is exogenously given in the forward equation describing the state dynamics,
the noise is endogenously determined in the backward HJB equation characterizing optimality.
We propose a novel path-by-path interpretation that exhibits duality
between the equations of such systems at the pathwise level.
This paper
introduces and verifies this approach through significant examples, some of which
we have not yet found solved explicitly elsewhere in the literature.
Mean field games with common noise have attracted much attention
due to their practical and theoretical interest.
Indeed, it is a natural modeling assumption that all agents in a game are subject to common random shocks in addition to possible individual shocks.
On the other hand, the problem is notoriously
difficult because the corresponding mean field game consistency condition
now features a stochastic equilibrium measure flow that must coincide with the flow of conditional laws of an optimally controlled process given the common noise.
For the PDE approach, the breakthrough work <cit.> of Cardaliaguet-Delarue-Lasry-Lions
interprets the mean field game system with common noise (see the system (<ref>) below)
as the characteristics
for the so-called master equation, a certain PDE on Wasserstein space.
For the probabilistic approach,
Carmona-Delarue <cit.> interpret a similar class of such PDEs on Wasserstein space
as determining decoupling fields for forward-backward systems of stochastic differential equations that characterize mean field equilibria, whether for a probabilistic representation of the value function or of its gradient (the latter being the content of the Pontryagin maximum principle).
Either of these perspectives offers ways of achieving
wellposedness for the mean field game problem in the presence of common noise,
and further can yield explicit solutions for certain data; see
Sections 3.5 and 4.5 of Carmona-Delarue <cit.> for some linear-quadratic examples featuring a common noise.
By contrast,
the topic of control in the volatility coefficient of the
common noise has not been explored much in the mean field game theory literature.
The only paper we have found on the topic is
the recent work of Barasso-Touzi <cit.>;
otherwise, some general expressions and equations in Carmona-Delarue <cit.> account for the possibility
of controlled volatility coefficients, so
the abstract theory still applies insofar as one
can characterize equilibria based on dynamic programming (leading to a system of stochastic PDEs) or based on the Pontryagin maximum principle (leading to an FBSDE).
However, wellposedness results and explicit solutions do not seem to be available yet in the literature.
The topic of optimal control with partial information has a long history and an accordingly large literature.
We refer the reader to the book <cit.> of Bensoussan
and references therein.
Mean field games with common noise and with partial information
seem to be largely unexplored,
even though the recent paper of Bensoussan-Yam <cit.> that motivated our calculations clearly takes
inspiration from these authors' own work on mean field games.
See also the earlier paper of Bandini-Cosso-Fuhrman-Pham <cit.> that approaches the partial information problem (without mean field interactions) using viscosity solutions on Wasserstein space.
The unpublished work of Huang-Wang <cit.> attempts to pursue this problem
via the Pontryagin maximum principle, and although we believe this probabilistic approach can work,
the authors' calculations here do not appear to satisfy the separation principle, a standard litmus test for such a solution. Roughly speaking, this principle says that
to go from the optimal feedback control in the case of full information to the case of partial information,
one just needs to replace the state with the best guess of the state given
the common noise and the partial observation.
A main result of this paper is that the lifted functional approach
can be used to establish this principle for mean field games with common noise and partial information; see
the end of the
final Section <ref>
for the theorem statement and discussion.
One apparent difficulty with the dynamic programming approach to mean field games with partial information is that one must account for both the common and observational noises, so each of these must be endogenously determined
in the stochastic backward HJB equation to ensure non-anticipativity of the value function and optimal feedback control.
Another, more subtle, difficulty that arises here is that the probability measure
with respect to which one formulates a typical player's control problem with a partial information constraint differs
from the probability measure with respect to which one derives and articulates the forward-backward
system of stochastic PDEs; see the system (<ref>) below for how one may handle this issue.
Finally, if one drops the mean field coupling and partial information constraint,
the resulting
backward stochastic HJB equations of the various systems (<ref>), (<ref>), and (<ref>) that we consider
are well-known to be related to so-called path dependent PDEs (see Section 11.3.5 of Zhang <cit.>).
We refer the reader to the early work
of Ekren-Keller-Touzi-Zhang <cit.> and Ekren-Touzi-Zhang <cit.>
for the first accepted notion of viscosity solution for path dependent
PDEs, but otherwise point to
the bibliographical notes of Chapter 11 of Zhang <cit.>.
On the one hand, the main concepts
in this paper were inspired by careful manipulations involving the functional Itô formula
for path dependent functionals (see Dupire
<cit.> and Cont-Fournié
<cit.>).
On the other hand, we do not know of references from the path dependent PDE literature that systematically
explore explicit solutions.
We believe this gap speaks to one of the main benefits of the lifted functional approach
as a complementary perspective on path dependent PDEs, namely,
that it more concretely and quickly emphasizes the connection to classical PDE theory.
To our best knowledge,
such a connection in the same spirit
was only otherwise attempted by Bion-Nadal <cit.>
(see the definition of “regular solution” in Section 2.2 therein), but this work omits the
crucial compensator term, defined in (<ref>) below.
This omission is unfortunately
a significant error;
indeed, consider a simple example, e.g., the path dependent heat equation
with terminal condition G(ω) = ∫_0^T ω_s ds at time T ≥ 0 (see (<ref>) of the appendix).
The correct lifted functional solution here
is well-known to be given by û(t,ω, y) = ∫_0^t ω_s ds + (T - t) y (see Example 11.1.2 of Zhang <cit.>), which is consistent with our compensated heat equation (<ref>) but does not satisfy equation (5) in <cit.>.
However,
our main desire is for the lifted functional perspective to help bring important insights from the well-developed
deterministic mean field game theory
to bear on strong solutions for
mean field games with common noise of various types.
§.§ Reader's Guide
To review our program in a nutshell,
we first aim to show how the lifted functional approach can recover known
results in mean field games with common noise (Section <ref>).
Emboldened by this consistency,
we next pursue more substantial and uncharted examples
of mean field games with controlled common noise and partial information
(Sections <ref> and <ref>, respectively).
As a sanity check after some admittedly grueling calculations
in Section <ref>,
we are rewarded by confirmation of the separation principle, extending its reach into new territory.
A more detailed outline of the paper is as follows.
Before we can articulate the lifted functional approach, we briefly review some notations in Section <ref> that are commonly used throughout the paper.
In Section <ref>,
after recalling the fundamental forward-backward system of stochastic PDEs (<ref>)
that characterizes a mean field game equilibrium in the presence of common noise,
we state the lifted functional approach for this prototype problem.
In Section <ref>,
we present in straightforward settings the problem formulations associated with the various stochastic PDE systems (<ref>), (<ref>), and (<ref>) studied in the paper; a reader experienced in the interpretations of such systems may wish to skip this section.
Section <ref> can be considered a warm-up in a simpler setting for the more involved calculations of later sections;
nevertheless, this example also confirms the consistency of the lifted functional approach with more classical approaches of the optimal control theory literature.
At last, Section <ref> employs the lifted functional
approach
to explicitly solve a linear quadratic mean field game with common noise;
a reader that is pressed for time may wish to focus on Sections <ref> and <ref>, once acquainted with the notation of the
compensator (<ref>) and compensated time derivative (<ref>) below.
Finally, turning to applications that constitute new results,
Sections <ref> and <ref> adapt the lifted functional approach
to solve mean field games with common noise featuring, respectively, control in the volatility coefficient and the constraint of a partially observed state.
§ NOTATION
Throughout the paper, we work on a filtered probability space
(Ω', ℱ, 𝔽, ℙ) supporting independent standard
d-dimensional Brownian motions
W = (W_t)_0 ≤ t ≤ T and W^0 = (W_t^0)_0 ≤ t ≤ T.
We write 𝔽^Y := (ℱ^Y_t)_0 ≤ t ≤ T
with ℱ^Y_t := σ(∪_0 ≤ s ≤ t σ(Y_s) )
for the filtration generated by a given stochastic process Y = (Y_t)_0 ≤ t ≤ T.
Finally, we write
Ω := C_0([0,T]; ℝ^d) = {ω∈ C([0,T]; ℝ^d) : ω_0 = 0 }
for the path space,
whose elements serve as fixed realizations of the common noise W^0 = (W^0_t)_0 ≤ t ≤ T.
For the linear-quadratic data, we deliberately adopt similar notation to Section 3.5 of Carmona-Delarue <cit.> for the sake of ease of comparison later.
More specifically, we introduce constant
d × d volatility matrix coefficients
σ, σ^0,
deterministic continuous ℝ^d × d-valued functions
(b_t, b̅_t, s_t)_0 ≤ t ≤ T,
deterministic symmetric nonnegative semi-definite d× d
matrix-valued continuous functions (q_t, q̅_t)_0 ≤ t ≤ T, and
deterministic symmetric nonnegative semi-definite d× d
parameters q, q̅, s.
In the case of controlling the volatility coefficient of
the common noise,
we will also need a deterministic continuous ℝ^d-valued function (a̅_t)_0 ≤ t ≤ T.
We say
that a functional ψ̂(t,ω)
on [0,T] ×Ω is strictly non-anticipative
if for all t ∈ [0,T] and for all paths ω, η∈Ω,
ψ̂(t,ω) = ψ̂(t,η)
whenever ω_s = η_s for all 0 ≤ s < t.
With a slight abuse of notation,
we sometimes indicate this by writing
ψ̂(t,ω) = ψ̂(t,(ω_s)_0 ≤ s < t).
The functional ψ̂(t,ω) is merely non-anticipative
if ψ̂(t,ω) = ψ̂(t,η)
whenever ω_s = η_s for all 0 ≤ s ≤ t.
Suppose (u_t(x))_0 ≤ t ≤ T is an 𝔽^W^0-adapted random field on [0,T] × ℝ^d,
and suppose further that it can be written
as a functional of the form
u_t(x) = û(t,x,W^0,W^0_t) := û(t,x,(W^0_s)_0 ≤ s < t,W^0_t ),
where as indicated û(t,x,ω,y) is a strictly non-anticipative function on ℝ^d ×Ω× ℝ^d.
This way of writing such functionals
goes back to works of Dupire <cit.> and Peng <cit.> on the functional Itô formula and path dependent PDE theory, respectively,
though we follow the more recent work <cit.> of Cosso-Russo in referring to û(t,x,ω,y) as a lifted functional.
Note also how we indicate the dependence on the path variable ω∈Ω to be strictly
non-anticipative
by adorning the functional with a “hat” or “tilde”, such as “û(t,x,ω,y)” or “r̃(t,x,ω)” appearing in (<ref>) below.
Note then that the variable y will typically represent the present value of the common noise.
With this discussion, we can now introduce
the compensator and compensated time derivative that play a fundamental role in this paper.
For a given strictly non-anticipative functional ψ̂(t,ω) = ψ̂(t,(ω_s)_0 ≤ s < t) on [0,T] ×Ω,
the compensator of ψ̂(t,ω) is defined by
_ω^y ψ̂(t,ω) := lim_ϵ↓ 0ϵ^-1 [ψ̂(t+ϵ, ω_· + [y-ω_·] 1_[t,t+ϵ)(·)) - ψ̂(t+ϵ,ω) ], y ∈ ℝ^d.
As we will see below, the name derives from the interpretation that it is exactly the term to “compensate” the naive classical backward HJB equation to enforce strict non-anticipativity.
We remark that although ψ̂(t,ω) is strictly non-anticipative,
the compensated derivative _ω^y ψ̂(t,ω) will in general extract
the present value ω_t; indeed, one expects _ω^ω_tψ̂(t,ω) = 0, i.e.,
_ω^y ψ̂(t,ω)=0
when y = ω_t.
The astute reader will notice that the functional
ψ̂(t,ω) is defined
on the space of continuous paths, yet the key definition
(<ref>) requires evaluating on a path
with a jump.
This common occurrence in the path dependent PDE
literature can be handled in a few different ways.
For example, earlier literature here suggests showing the limit
(<ref>) is independent of
the chosen extension of ψ̂(t,ω)
to Skorokhod space.
We instead refer the reader
to the appendix, which adapts and extends the more recent seminorm topology approach of Section 2.2 from Cosso-Russo <cit.>,
which constitutes a convenient way to restrict
to a unique extension of the functional
when evaluated at a path with a single jump.
This latter perspective is also
convenient because some natural
expressions for the limit (<ref>)
involve evaluating the functional
at a path with a “double jump” at a point (see the Fréchet derivative expression (<ref>) in the appendix),
which would even be outside the scope of Skorokhod space.
However, given the concrete
spirit of this paper, we do not pursue
this technical point further here.
For the sake of simplifying calculations,
we will often find it convenient to combine the normal time derivative
and the new compensator into a single operator ∂_t^y := ∂_t + _ω^y, which we refer to as the compensated time derivative:
∂_t^y ψ̂(t,ω) := lim_ϵ↓ 0ϵ^-1 [ψ̂(t+ϵ, ω_· + [y-ω_·] 1_[t,t+ϵ)(·)) - ψ̂(t,ω) ], y ∈^d.
In particular, we will consider integral representations of solutions to stochastic differential equations, which are straightforward to differentiate using (<ref>).
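For instance, for the strictly non-anticipative functional ψ̂(t,ω) := ∫_0^t ω_s ds one has ∂_t ψ̂(t,ω) = ω_t and _ω^y ψ̂(t,ω) = y - ω_t, so that
∂_t^y ( ∫_0^t ω_s ds ) = y,
an elementary identity that will be used repeatedly in the explicit computations below.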
§ THE LIFTED FUNCTIONAL APPROACH
A typical mean field game system with common noise
can be stated as follows: given any probability measure λ∈_2(^d)
with density ℓ(x), find an ^^0-adapted triple
(u_t(x),v_t(x),m_t(x))_ of random fields on [0,T] ×^d satisfying the forward backward system of
stochastic PDEs
d_t u_t(x) = ( - 1/2 [ a ∂_xx^2 u_t(x) ] -[ σ^0 ∂_x v_t(x) ]
+ H(t,x,∂_x u_t(x)) - f(t,x,μ_t) ) dt + v_t(x) · dW^0_t,
d_t m_t(x) = ( 1/2 [ a ∂_xx^2 m_t(x) ] + div_x [ m_t(x) ∂_p H(t,x,∂_x u_t(x) ) ] )dt - div_x [ m_t(x) σ^0 dW^0_t ],
u_T(x) = g(x,μ_T), m_0(x) = ℓ(x), μ_t(dx) := m_t(x)dx,
where we write a := σσ^⊤ +σ^0 (σ^0)^⊤
and “d_t" emphasizes that the total Itô differential is taken in time.
Intuitively, the forward conservation law in (<ref>)
describes how the mass density m_t(x) of some agents, such as a flock of birds,
evolves in time when subject to a random environment W^0_t,
while the backward HJB equation
determines the value function “u_t(x)” of a typical agent responding optimally
to the random evolution of the mass.
The somewhat mysterious random field v_t(x)
is part of the unknowns and
plays the role of ensuring that u_t(x) is ^^0-adapted; e.g., for a flock of birds buffeted by wind,
a typical bird at time t has only observed
the behavior of the wind (W_s^0)_0 ≤ s ≤ t,
but is not allowed to anticipate the future
behavior of the wind (W_s^0)_t < s ≤ T when optimizing.
A solution to the system (<ref>) can naturally be cast as a fixed
point and admits
the interpretation as characterizing a continuum version of Nash optimality.
Motivated by the literature on the functional Itô formula (see Dupire <cit.> and Cont-Fournié <cit.>) and
path dependent PDE theory (see Chapter 11 of Zhang <cit.> and references therein), we have discovered that
if û(t,x,ω,y), m̂(t,x,ω,y) are lifted functionals
of the ^^0-adapted random fields (u_t(x), m_t(x))_0≤ t ≤ T
from (<ref>), then
for each fixed path ω,
the functions (t,x,y) ↦û(t,x,ω,y), m̂(t,x,ω,y)
are determined by a rough forward conservation law
coupled with a classical HJB equation that is “compensated”
by the operator (<ref>) applied to û(t,x,ω,y).
More precisely,
the lifted functional approach
to mean field games with common noise asserts that
the solution
of (<ref>) can be reduced
to a pair of (strictly non-anticipative) lifted functionals
(û(t,x,ω,y), m̂(t,x,ω,y))
satisfying the system
-∂_t û(t,x,ω,y) - 1/2[ A D^2_(x,y)û(t,x,ω,y)] + H(t,x, ∂_x û(t,x,ω,y)) - f(t,x,m̂(t,·, ω,y)) = _ω^y û(t,x,ω,y)
∂_t m̂(t,x,ω, y) - 1/2 [ A D^2_(x,y)m̂(t,x,ω,y) ] - div_x [ m̂(t,x,ω,y) ∂_p H(t,x,∂_x û(t,x,ω, y)) ] = - _ω^y m̂(t,x,ω,y)
û(T,x,ω,y) = g(x,m̂(T,·,ω,y)), m̂(0,x,ω,y) = ℓ(x-σ^0 y),
where we write D^2_(x,y) for the Hessian matrix in both variables (x,y)
and where
A :=
[ σσ^⊤ +σ^0 (σ^0)^⊤ σ^0; σ^0 I_d ].
In the backward equation of (<ref>), the compensator term “_ω^y û(t,x,ω,y)” serves to enforce the strict non-anticipativity condition in
the path variable.
However, the rather unexpected appearance of “_ω^y” in the forward equation exactly serves to exhibit
duality
between the equations.
To our knowledge,
this duality at the pathwise level
appears to be new and is nonobvious to illustrate otherwise.
To be more precise,
exploiting the duality of the
original system (<ref>)
requires taking an expectation, i.e.,
averaging over the path.
More classically, as indicated above, for each fixed ω∈Ω, the functional m^ω(t,x):= m̂(t,x,ω, ω_t)
is known to satisfy, in a path-by-path sense, the rough conservation law
d_t m^ω(t,x)
= ( 1/2[σσ^⊤∂_xx^2 m^ω(t,x)] + div_x [ m^ω(t,x) ∂_p H(t,x,∂_x û(t,x,ω, ω_t)) ] ) dt
- div_x [m^ω(t,x) σ^0 d ω_t ]
m^ω(0,x) = ℓ(x),
which in turn can be solved by the flow transformation method of
Lions-Souganidis <cit.>. More precisely,
one looks for a solution of the form
m^ω(t,x)=m̂(t,x,ω, ω_t):= r(t,x-σ^0 ω_t,ω),
where r(t,x,ω) solves a classical (though ω-dependent) PDE without a “dω_t” term:
∂_t r(t,x,ω)
= 1/2 [ σσ^⊤∂_xx^2 r(t,x,ω) ] + div_x [r(t,x,ω) ∂_p H(t,x+σ^0ω_t,∂_x û(t,x+σ^0ω_t,ω, ω_t)) ]
r(0,x,ω) = ℓ(x).
As indicated by the notation, r(t,x,ω)
is readily seen to depend only on the strict prior history (ω_s)_0 ≤ s < t
of the fixed path ω, allowing
us to identify m̂(t,x,ω, y) = r(t,x-σ^0y,ω).
However, this classical
perspective does not showcase
the duality with
the backward equation, as the new system (<ref>) exhibits.
We next claim that once a fixed point solution pair (û^*(t,x,ω,y), m̂^*(t,x,ω,y))
is found for the solution loop of
(<ref>),
the triple of random fields defined by
(u^*_t(x),v^*_t(x),m^*_t(x)):=
(û^*(t,x,^0,W^0_t), ∂_y û^*(t,x,^0,W^0_t), m̂^*(t,x,^0,W^0_t))
is easily seen to be a strong solution of the original mean field game system with common noise (<ref>).
Indeed, as long as the lifted functional û(t,x,ω,y) is “nice enough,” the principles behind the so-called functional Itô formula (see Dupire <cit.> and Cont-Fournié <cit.>) suggest we can compute the total differential in time as[Roughly speaking,
the functional Itô formula is just the ordinary Itô formula in the variables (t,y) of a lifted functional û(t,x,^0, W^0_t), i.e., the dependence
in the strict history variable ω can be held infinitesimally fixed in time.]
d_t u^*_t(x) = ( ∂_t û^* + 1/2Δ_y û^* )(t,x,^0, W^0_t) dt + ∂_y û^*(t,x,^0, W^0_t) · dW^0_t
= ( - 1/2[ a ∂_xx^2 u^*_t(x)] - [σ^0 ∂_x v^*_t(x) ] + H(t,x,∂_x u^*_t(x)) - f(t,x,m^*_t) ) dt + v^*_t(x) · dW_t^0,
where we used in the second equality the fact that _ω^ω_tû(t,x,ω,ω_t) = 0;
similarly, since m^*_t(x) has the form m^*_t(x) = r(t,x-σ^0 W^0_t,^0)
where r(t,x,ω) solves
(<ref>), we have
d_t m^*_t(x) = ( ∂_t r + 1/2 [ σ^0 (σ^0)^⊺∂_xx^2 r ] )(t,x-σ^0W_t^0,^0) dt - ∂_x r(t,x-σ^0W_t^0,^0) ·σ^0 dW^0_t
= ( 1/2[a ∂_xx^2 m^*_t(x)] + div_x [ m^*_t(x) ∂_p H(t,x,∂_x u^*_t(x)) ] ) dt - div_x [m^*_t(x) σ^0 dW^0_t ],
where in the last equality we implicitly performed an Itô-Stratonovich conversion.
Thus the triple of random fields
(<ref>) can serve
as a strong solution of (<ref>).
For readers familiar with the notion of the master equation from Cardaliaguet-Delarue-Lasry-Lions <cit.>,
the main compensator term
in the backward equation of (<ref>)
takes on a particularly nice form.
To see this, suppose u_t(x) = U(t,x,m_t) for some nice U: [0,T] ×^d ×_2(^d) → and that we know
m_t(x) = m̂(t,x,^0, W^0_t), where we recall the form m̂(t,x,ω,y) = r(t,x-σ^0 y,ω).
Recall
the basic relationship
∂_μ U(t,x,μ)(v) = ∂_v (δ_μ U)(t,x,μ)(v),
where we write “δ_μ” for the linear functional derivative
and “∂_μ” for the Wasserstein gradient.
Then, recalling v_t(x) = ∂_yû(t,x,^0,W^0_t) in our setting,
we can compute
v_t(x) = ∂_y û(t,x,^0, W^0_t) = ∫_^d (δ_μ U)(t,x,m_t)(v)
· (∂_y m̂)(t,v,^0, W^0_t) dv
= -∫_^d (δ_μ U)(t,x,m_t)(v)
· (σ^0)^⊤(∂_x r)(t,v-σ^0 W^0_t,^0) dv
= ∫_^dσ^0 (∂_μ U)(t,x,m_t)(v) m_t(v) dv,
where the last equality is integration by parts.
Observe this is exactly the formula from Corollary 2.12 of Cardaliaguet-Delarue-Lasry-Lions <cit.> for the process v_t(x).[The factor σ^0 is due to scaling differently than the corresponding system (31) of Cardaliaguet-Delarue-Lasry-Lions <cit.>.]
The punchline of the above is that we have the formula
∂_yû(t,x,ω,y) = ∫_^dσ^0 (∂_μ U)(t,x,m̂(t,·,ω,y))(v) m̂(t,v,ω,y) dv.
Next recall we can identify the compensator of m̂(t,x,ω,y) as
_ω^y m̂(t,x,ω,y) = -div_x [ m̂(t,x,ω,y) ( F(t,x,ω,y) - F(t,x,ω,ω_t) ) ]
where
F(t,x,ω,y) := ∂_x û(t,x+σ^0(ω_t - y),ω,y).
In turn, these items imply the compensator in the backward equation of (<ref>) can
be expressed as
_ω^y û(t,x,ω,y) = ∫_^d (∂_μ U)(t,x,m̂(t,·,ω,y))(v) m̂(t,v,ω,y) ( F(t,v,ω,y) - F(t,v,ω,ω_t) ) dv.
The main issue with this formula is that one in general may not have access to ∂_μ U(t,x,μ)(v).
Adopting a combination of the perspectives of rough path theory and path dependent PDEs, one could introduce an alternative
notion of “pathwise solution” that consists of a pair of merely non-anticipative
functionals (u(t,x,ω), m(t,x,ω)) on [0,T] ×^d ×Ω
such that, for almost every (with respect to Wiener measure) α-Hölder geometric rough path
= (ω, ) (i.e., t ↦ω_t is a fixed realization of ^0 and (s,t) ↦_s,t
is a fixed realization of the iterated Stratonovich integral ∫_s^t W^0_s,r⊗∘ d W^0_r),
the pair of functions (t,x) ↦ u(t,x,ω), m(t,x,ω) satisfies
the rough MFG system[See, e.g., Cosso-Russo <cit.> for the definition of the vertical ∂_ω = ∂_ω^V, which is simply the spatial path dependent derivative found in most any reference from the path dependent PDE literature.]
{ d_t u(t,x,ω) = ( - 1/2[ a ∂_xx^2 u(t,x,ω)] - [σ^0 ∂_x ∂_ω u(t,x,ω) ] + H(t,x , ∂_x u(t,x,ω) ) - f(t, x,m(t,·,ω)) ) dt
- 1/2[∂_ωω^2 u(t,x,ω)] dt + ∂_ω u(t,x,ω) · d_t,
d_t m(t,x,ω) = ( 1/2[σσ^⊺∂_xx^2 m(t,x,ω)] + div_x [ m(t,x,ω) ∂_p H(t,x , ∂_x u(t,x,ω) ) ] ) dt - div_x [ m(t,x,ω) σ^0 d_t ],
u(T,x,ω) = g(x, m(T,·,ω)), m(0,x,ω) = ℓ(x),
.
where, as indicated the bold differential, “d_t” can be understood in the rough path theory sense
(see, e.g., Friz-Victoir <cit.>).
In particular,
the stochastic term “v_t(x) · dW^0_t”
from the backward equation in (<ref>)
corresponds to the two terms “- 1/2[∂_ωω^2 u(t,x,ω)] dt + ∂_ω u(t,x,ω) · d_t”
in (<ref>).
Fortunately, our compensated solutions
of (<ref>) will furnish such an intermediate notion of pathwise solution to
(<ref>)
by calculations parallel to
(<ref>), (<ref>) above, but
based instead on a pathwise (lifted) functional Itô formula of the form
d_t φ(t,ω) = (∂_t φ̂)(t,ω,ω_t) dt + (∂_y φ̂)(t,ω,ω_t) · d_t,
given a suitable lifted functional φ̂(t,ω,y) of φ(t,ω).
This formula follows as a consequence of Keller-Zhang <cit.>, recited as (2.5)
and (2.11)
of Buckdahn-Keller-Ma-Zhang <cit.>.
However, getting to the point of this remark, we otherwise omit this intermediate path-by-path notion since (besides being less straightforward for calculations in our opinion)
it does not exhibit that there is an underlying duality between the two equations at a pathwise level,
as our compensated system (<ref>) does.
Indeed,
eliminating the “d_t”
term would seem to require averaging the paths over Wiener measure, thus leaving the pathwise formulation.
§ PROBLEM FORMULATIONS
Now that we have reviewed the lifted functional approach in the setting of a typical mean field game with common noise,
we step back to review the various settings where we will apply the lifted functional method.
For the sake of clarity, we state these formulations somewhat informally and with straightforward data (in particular, these problems will be solved with more general data below).
More precisely, we illustrate the lifted functional approach for four problems, each of which admits an exact solution when the data fits into the framework of linear-quadratic-Gaussian control theory:
* a stochastic control problem with a path-dependent terminal cost
* a mean field game with common noise
* a mean field game with controlled common noise
* a mean field game with common noise and partial information
Problem 1:
As a warm-up, we start by considering a
stochastic control problem with a path-dependent terminal cost as follows:
given an initial condition x_0 ∈^d,
minimize 𝔼[∫_0^T 1/2|α_t|^2 dt + X_T·∫_0^T W_s^0 ds]
over ^,^0-adapted processes (α_t)_, subject to the dynamical constraint
dX_t = α_t dt + dW_t, X_0 = x_0.
Intuitively, the controller will drive the process away from the anticipated random cost, ∫_0^t W_s^0 ds. Indeed, we find the optimal control to be given as a linear function of W_t^0 and ∫_0^t W_s^0 ds.
The explicit solution to this problem is covered in Section <ref>.
Problem 2:
We consider a linear-quadratic mean field game in the spirit of Section 3.5 of Carmona-Delarue <cit.>:
given an initial law λ∈_2(^d) and a ^^0-adapted flow of probability measures = (μ_t)_,
we first solve, writing μ̅_t = ∫_ x μ_t(dx) for the mean position of players,
minimize 𝔼[∫_0^T 1/2|α_t|^2 dt + 1/2(X_T- s μ̅_T)^2 ]
over ^,^0-adapted processes (α_t)_, subject to the dynamical constraint
dX_t = (b_t X_t+ b̅_t μ̅_t + α_t) dt + σ dW_t + σ_0 dW_t^0, X_0 ∼λ.
We denote by (X_t^α^*)_ the solution of the dynamical constraint with optimal control (α^*_t)_ and second solve the fixed point problem μ_t = (X_t^α^*|^^0_t), t ∈ [0,T], i.e., μ_t will be the conditional law of an optimally controlled
process X_t^α^* given the common noise ^0 = (W^0_t)_.
In this problem, the mean position of players μ̅_t is translated by a Brownian common noise. The solution we find is a linear function of the player's position and the mean position of players.
The explicit solution of this problem for a class of linear-quadratic data is covered in Section <ref>.
Note this problem implicitly involves a term of the form “X_T μ̅_T,”
and in turn we will see μ̅_T will involve “∫_0^T W^0_s ds”. Thus,
this problem features the basic structure of the path dependent cost problem
of Section <ref>, which motivated its inclusion in this paper.
Problem 3:
We consider a similar setting as the previous problem but with a controlled volatility coefficient of the common noise:
first, given an initial law λ∈_2(^d), a parameter (a̅_t)_, and a flow of probability measures = (μ_t)_,
we first solve
minimize 𝔼[∫_0^T 1/2|α_t - a̅_t|^2 dt + 1/2(X_T- s μ̅_T)^2 ]
over ^,^0-adapted processes (α_t)_, subject to the dynamical constraint
dX_t = (b_t X_t + b̅_t μ̅_t) dt + σ dW_t + α_t dW_t^0, X_0 ∼λ.
Second, we solve the fixed point problem μ_t = (X_t^α^*|^^0_t), t ∈ [0,T].
The solution we find is a deterministic time dependent multiple of the parameter a̅_t, similar to examples in the literature (see, e.g., Proposition 5.1 of Ankirchner-Fromm <cit.>).
However, the factor we get reflects parameters not only from the diffusion coefficient,
but also from
the so-called Itô-Wentzell correction term, which involves the control against the unknown process “v_t(x)” that enforces the ^^0-adaptivity constraint in the stochastic backward HJB in (<ref>).
The explicit solution of this problem for a class of linear-quadratic data is covered in Section <ref>.
Problem 4:
Our final problem considers a mean field game with common noise and partial
information: first, given an initial law λ∈_2(^d) and a ^^0-adapted flow of probability measures = (μ_t)_,
we solve
minimize 𝔼[∫_0^T f(t,X_t,μ_t, α_t) dt + g(X_T, μ_T)],
subject to a dynamical constraint
dX_t = b(t,X_t,μ_t, α_t) dt + σ dW_t + σ^0 dW^0_t, X_0 = x_0;
however, there is an additional constraint that one must optimize over controls = (α_t)_ that are progressively measurable with respect to ^^0,, where = (Z_t)_ is the so-called observation process
dZ_t = h(t,X_t, μ_t) dt + dθ̃_t
with = (θ̃_t)_ a Brownian motion with positive definite covariance Θ̃ and independent of = (W_t)_.
Second, one solves the fixed point problem μ_t = (X_t^α^*|^^0_t), t ∈ [0,T].
Finally, we recall the mean field problem with common noise and partial information above can be interpreted as the limit of an N-player dynamical game: given a strategy profile ^N = (^N,i)_i=1^N,
the ith player, 1 ≤ i ≤ N, in the search for Nash optimality, solves the optimal control problem
minimize 𝔼[∫_0^T f(t,X_t^N,i,μ_^N_t,β_t) dt + g(X_T^N,i,μ_^N_T)].
over ^^0,-adapted controls = (β_t)_,
subject to the dynamical constraint
dX_t^N,k =
b(t,X_t^N,i,μ_^N_t ,β_t) dt + σ dW^i_t + σ^0 dW_t^0, k=i,
b(t,X_t^N,k,μ_^N_t,α^N,k_t) dt + σ dW^k_t + σ^0 dW_t^0, k≠ i,
and subject to the observation process
dZ_t^i = h(t,X_t^i, μ_^N_t) dt + dθ̃_t^i,
where μ_^N_t := 1/N∑_j=1^N δ_X_t^N,j is the empirical measure of players.
We emphasize that players have knowledge of the common noise and their individual observation process.
Also, one can reason from this N-player setting that we
expect the N →∞ limit of
the empirical measures μ_^N_t := 1/N∑_j=1^N δ_X_t^N,j should
converge to the conditional law of the state given the common noise ^0 = (W^0_t)_ with respect to ,
thus justifying the formulation made above.
As just reviewed, the partially observed control problem is made difficult by the necessity to consider non-Markovian controls that incorporate the entirety of the history of the observation process.
As such, the problem does not satisfy an ordinary dynamic programming principle. With the compensated HJB equation,
a dynamic programming principle is recovered in some sense.
Despite the mean field coupling,
we illustrate how the solution for a linear-quadratic-Gaussian problem is still solved by the Kalman filter and the separation principle, as classically expected.
See Section <ref>, especially equation (<ref>) and nearby discussion, for more on these concepts and the explicit solution of this problem for a class of linear-quadratic data.
§ A PATH-DEPENDENT COST PROBLEM
As a warm-up, we first consider a simple scenario
where there is no coupling between the
forward and backward equations of (<ref>),
which thus reduces to a classical optimal control problem.
The interest in this example
is that we can observe, in a simple setting, how our method
is consistent with the classical optimal control theory literature.
Accordingly, we first consider the solution to the path dependent cost Problem 1
reviewed in the previous Section <ref>.
In the compensated HJB approach, we will solve for the lifted functional determining
the random value function.
The lifted value function is expected to satisfy a dynamic programming principle, i.e.,
û(t,x,ω,y) = inf_(α_s)_t≤ s≤ T𝔼[∫_t^T 1/2|α_s|^2ds + X_T^α·(∫_0^t ω_s ds + ∫_t^T[y+W^0_s - W_t^0] ds)],
where
dX_s^α = α_s ds+dW_s, X^α_t = x, t ≤ s ≤ T.
Recalling the compensated time derivative ∂_t^y
of (<ref>),
the compensated HJB equation will have the form
- ∂_t^y û(t,x,ω,y) -1/2Δ_x û(t,x,ω,y) -1/2Δ_y û(t,x,ω,y) + 1/2|∇_x û(t,x,ω,y)|^2= 0
û(T,x,ω, y)= x·∫_0^T ω_s ds.
Now, we make the ansatz
û(t,x,ω,y) = a_t x^2 + b_t y^2 + 2 c_t x y + d_t + e_t (∫_0^t ω_s ds)^2 + 2 f_t x ∫_0^tω_s ds + 2 g_t y ∫_0^tω_s ds.
Note the terminal condition û(T,x,ω, y)= x·∫_0^T ω_s ds is satisfied with the parameter terminal conditions
a_T = b_T = c_T = d_T = e_T = g_T = 0, f_T = 1/2.
We then compute
∂_t^y ( ∫_0^t ω_s ds ) = y.
Plugging the ansatz into the compensated HJB equation, we get
0= -a_t' x^2 - b_t' y^2 - 2 c_t' x y - d_t' - e_t' (∫_0^t ω_s ds)^2 - 2 f_t' x ∫_0^tω_s ds - 2 g_t' y ∫_0^tω_s ds
- 2 e_t y ∫_0^tω_s ds -2 f_t x y - 2 g_t y^2 - a_t - b_t
+2 a_t^2 x^2 + 2 c_t^2 y^2 + 2 f_t^2(∫_0^tω_s ds)^2 +4 a_t c_t x y + 4 a_t f_t x ∫_0^tω_s ds + 4 c_t f_t y ∫_0^tω_s ds.
By collecting terms corresponding to x^2, y^2, xy,(∫_0^tω_s ds)^2, x ∫_0^tω_s ds, y ∫_0^tω_s ds, we arrive at the following system of ordinary differential equations:
* |x|^2 : a'_t = 2 a_t^2,
* |y|^2 : b'_t = -2 g_t + 2 c_t^2,
* |x y| : c'_t = -f_t + 2 a_t c_t,
* 1 : d_t' = -a_t - b_t,
* (∫_0^tω_s ds)^2 : e_t' = 2 f_t^2,
* x∫_0^tω_s ds : f_t' = 2 a_t f_t,
* y∫_0^tω_s ds : g_t' = -e_t + 2 c_t f_t.
We first solve a_t=0, thus f_t=1/2 is constant. Now we can see that c_t' = -1/2 so c_t=1/2(T-t), and e_t' = 1/2 so e_t = -1/2(T-t).
We can solve for g_t'=(T-t) as g_t = -1/2(T-t)^2.
Now b_t' = 3/2(T-t)^2 and b_t = -1/2(T-t)^3. We finally have that
d_t' = 1/2(T-t)^3 so d_t=-1/8(T-t)^4.
û(t,x,ω,y) = -1/2 (T-t)^3 y^2 + (T-t) x y - 1/8(T-t)^4
-1/2 (T-t)(∫_0^tω_s ds)^2 + x ∫_0^tω_s ds - (T-t)^2 y ∫_0^tω_s ds,
so the optimal ^^0-adapted feedback control = (α^*_t)_ is given by
α^*_t = -∇_x û(t,x,𝐖^0, W_t^0) =-(T-t)W_t^0 - ∫_0^tW_s^0 ds.
Then the optimal expected value at time zero is u(0,x_0,ω,0) = d_0 =-1/8T^4,
which is notably independent of the initial position X_0=x_0.
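The explicit formulas above are easy to sanity check by simulation. The following is a minimal Monte Carlo sketch (in dimension d=1, with illustrative parameter values and variable names of our own choosing, and assuming numpy is available): it simulates the state under the feedback α^*_t =-(T-t)W_t^0 - ∫_0^tW_s^0 ds and checks that the empirical average of the total cost is close to -T^4/8, independently of the initial position x_0.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M, x0 = 1.0, 500, 50_000, 0.7   # horizon, time steps, sample paths, initial position
dt = T / N

W0 = np.zeros(M)        # common noise W^0_t
I = np.zeros(M)         # running integral int_0^t W^0_s ds
X = np.full(M, x0)      # controlled state
cost = np.zeros(M)
for n in range(N):
    t = n * dt
    alpha = -(T - t) * W0 - I                               # optimal feedback control alpha*_t
    cost += 0.5 * alpha**2 * dt
    X += alpha * dt + np.sqrt(dt) * rng.standard_normal(M)  # idiosyncratic noise dW_t
    I += W0 * dt                                            # left-point rule for int W^0 ds
    W0 += np.sqrt(dt) * rng.standard_normal(M)              # common noise increment dW^0_t
cost += X * I                                               # terminal cost X_T * int_0^T W^0_s ds
print(cost.mean(), -T**4 / 8)                               # the two numbers should roughly agree
```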
§.§ Comparison with the literature
A more classical approach to the path-dependent cost problem might be to make the problem Markovian by introducing the new state
variables Y_t:=W_t^0 and Ξ_t := ∫_0^t Y_s ds.
In these variables, the problem turns into a stochastic control problem with value function v(t,x,y,ξ) solving the degenerate HJB equation
-∂_t v(t,x,y,ξ) -1/2Δ_x v(t,x,y,ξ) -1/2Δ_y v(t,x,y,ξ) - y·∇_ξ v(t,x,y,ξ) + 1/2|∇_x v(t,x,y,ξ)|^2= 0
v(T,x,y,ξ)= x·ξ.
Observe the correspondence between this approach
with the lifted functional approach is
û(t,x,ω,y) = v(t,x,y, ∫_0^tω_s ds).
Then we can note that the compensated time derivative satisfies
∂_t^y û(t,x,ω,y) = ∂_t v(t,x,y, ∫_0^tω_s ds)+ y·∂_ξ v(t,x,y, ∫_0^tω_s ds),
establishing consistency between the two approaches.
We remark, however, that this more classical reasoning does not seem to work in general for
the other more complicated problems we study. Indeed, the desired structure to make the problem Markovian as above cannot be easily determined in advance.
Finally, given the lifted functional approach was motivated
by concepts from the literatures on the functional Itô formula
and path-dependent PDE theory,
we mention that there is a path-dependent PDE that
the functional u(t,x,ω) := û(t,x,ω, ω_t)
will satisfy that one may work with instead to arrive at the same solution.
Again, we refer the reader to Chapter 11 of Zhang <cit.>.
§ MEAN FIELD GAME WITH COMMON NOISE
§.§ The linear-quadratic data for the MFG problem
Let us recall the linear-quadratic data
from Problem 2 in Section <ref>:
writing μ̅:= ∫_^dξ μ(dξ), we set[We write b(t,x,μ,α) for the drift coefficient of the state process, as in
(<ref>).]
b(t,x,μ,α) : = b_t x+b̅_t μ̅ + α,
f(t,x,μ,α) := 1/2 ( |α|^2 + x^⊤ q_t x + (x - s_t μ̅)·q̅_t (x - s_t μ̅) ),
g(x,μ) := 1/2 ( x^⊤ q x + (x - s μ̅)·q̅ (x - s μ̅) ),
where we refer to Section <ref> for the description of these given parameters.
Now we make the ansatz that the solution of (<ref>) has the form
u_t(x) =
1/2 ( x ·Γ_t x + μ̅_t ·Γ^0_t μ̅_t +
2x ·Λ^0_t μ̅_t )
+ Δ_t
so that the optimal feedback function is given by
- ∂_x u_t(x)
= - Γ_t x - Λ^0_t μ̅_t.
Hence, we have
dX_t = ( (b_t - Γ_t) X_t +(b̅_t - Λ^0_t ) μ̅_t ) dt + σ dW_t + σ^0 dW^0_t,
and taking expectations of this equation conditional on ^^0_t
yields
dμ̅_t = ( b_t + b̅_t - Γ_t - Λ^0_t ) μ̅_t dt + σ^0 dW^0_t,
which has an explicit solution
of the form μ̅_t = μ̅(t,^0, W^0_t)
where
μ̅(t,ω, y)
= Φ_t (μ̅_0
+ ∫_0^t Φ_s^-1 (b_s + b̅_s - Γ_s - Λ^0_s) σ^0 ω_s ds )
+ σ^0 y,
where (Φ_t)_ is the solution of the matrix-valued ODE
Φ̇_t = (b_t + b̅_t -Γ_t - Λ^0_t) Φ_t, Φ_0 = 1.
Thus, the ansatz for the lifted value function becomes
û(t,x,ω,y) =
1/2 ( x ·Γ_t x + μ̅(t,ω, y) ·Γ^0_t μ̅(t,ω, y) +
2x ·Λ^0_t μ̅(t,ω, y) )
+ Δ_t.
Now we may begin computing
the terms appearing in the lifted functional
backward equation (<ref>).
As mentioned there, we will find it convenient for explicit calculations
to combine the time derivative and compensator
into the compensated time derivative ∂_t^y := ∂_t + _ω^y
defined in (<ref>).
We first compute
∂_t^y μ̅(t,ω, y)
=
(b_t + b̅_t - Γ_t - Λ^0_t) μ̅(t,ω, y).
∂_y μ̅(t,ω, y)
= σ^0
Then we have
∂_t^y û(t,x,ω,y)
= 1/2 ( x ·Γ̇_t x + 2x ·Λ̇^0_t μ̅(t,ω, y) + μ̅(t,ω, y) ·Γ̇^0_t μ̅(t,ω, y) )
+ Δ̇_t
+ (b_t + b̅_t - Γ_t - Λ^0_t) μ̅(t,ω, y) · ( Γ^0_t μ̅(t,ω, y)
+ Λ^0_t x )
and can further compute
∂_x û(t,x,ω,y) =
Γ_t x
+ Λ^0_t μ̅(t,ω,y), ∂_xx^2 û(t,x,ω,y) =
Γ_t
∂_y û(t,x,ω,y)
= σ^0 ( Γ^0_t μ̅(t,ω,y)
+ Λ^0_t x ), ∂_yy^2 û(t,x,ω,y)
= (σ^0)^⊤Γ^0_t σ^0,
∂_x ∂_y û(t,x,ω,y)
=
(σ^0)^⊤Λ^0_t
Now, the compensated HJB equation will take the form
- ∂_t^y û(t,x,ω,y) -1/2 ( [a ∂_xx^2 û(t,x,ω,y) ] + Δ_y û(t,x,ω,y) ]
+ 2 [σ^0 ∂_x ∂_y û(t,x,ω,y) ]
)
+ 1/2 | ∂_x û(t,x,ω,y) |^2
- ∂_x û(t,x,ω,y) · ( b_t x + b̅_t μ̅(t,ω, y) )
= 1/2 ( x^⊤ q_t x + (x - s_t μ̅(t,ω, y))^⊤q̅_t (x - s_t μ̅(t,ω, y)) ),
with terminal condition
û(T,x,ω,y) = 1/2 ( x^⊤ q x + (x - s μ̅(T,ω, y))·q̅ (x - s μ̅(T,ω, y)) ).
Inputting the above calculations in the compensated equation gives
- 1/2 ( x ·Γ̇_t x + μ̅(t,ω, y) ·Γ̇^0_t μ̅(t,ω, y) +
2x ·Λ̇^0_t μ̅(t,ω, y) )
- Δ̇_t
-(b_t + b̅_t - Γ_t - Λ^0_t) μ̅(t,ω, y) · ( Γ^0_t μ̅(t,ω, y)
+ Λ^0_t x ) - 1/2 ( [a Γ_t ] + [ σ^0 (σ^0)^⊤Γ^0_t]
+ 2 [σ^0 (σ^0)^⊤Λ^0_t]
)
+ 1/2 | Γ_t x + Λ^0_t μ̅(t,ω,y) |^2 - ( Γ_t x + Λ^0_t μ̅(t,ω,y) )· ( b_t x + b̅_t μ̅(t,ω,y) )
= 1/2 ( x^⊤ q_t x + (x - s_t μ̅(t,ω,y))·q̅_t (x - s_t μ̅(t,ω,y)) ).
We now collect terms (symmetrizing for the squared terms) to arrive at the following closed system of Riccati equations:
* |x|^2 : Γ̇_t = Γ_t^⊤Γ_t - Γ_t^⊤ b_t - b_t^⊤Γ_t - ( q_t + q̅_t ) , Γ_T = q+q̅ ,
* x μ̅_t : Λ̇_t^0 = (Λ^0_t)^⊤Λ^0_t -(Λ^0_t)^⊤ (b_t + b̅_t - Γ_t) + Γ_t^⊤Λ^0_t - Γ_t^⊤ b̅_t - b_t^⊤ Λ_t^0 + q̅_t s_t, Λ^0_T = - q̅ s,
* μ̅_t^2 : Γ̇^0_t = -(b_t + b̅_t - Γ_t - Λ^0_t)^⊤Γ^0_t -(Γ^0_t)^⊤(b_t + b̅_t - Γ_t - Λ^0_t)
+ (Λ^0_t)^⊤Λ^0_t - (Λ^0_t)^⊤b̅_t - b̅_t^⊤Λ^0_t - s_t^⊤q̅_t s_t , Γ^0_T = s^⊤q̅ s,
* 1 : Δ̇_t = - 1/2[(σσ^⊤ + σ^0 (σ^0)^⊤) Γ_t ] - 1/2[ σ^0 (σ^0)^⊤Γ^0_t]
- [σ^0 (σ^0)^⊤Λ^0_t], Δ_T = 0.
Notice that the equations for Γ_t, Λ^0_t are quadratic
Riccati equations, while the equation for Γ^0_t
is linear.
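For concreteness, the following is a minimal numerical sketch (scalar case d=1, constant illustrative coefficients; the parameter values and variable names are our own assumptions, and scipy is assumed available) of integrating this Riccati system backward from its terminal conditions, after which the optimal feedback -Γ_t x - Λ^0_t μ̅_t can be evaluated.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative constant coefficients (all assumptions, not tied to any particular model)
b, bbar, s, q, qbar, sigma, sigma0 = 0.1, 0.2, 0.5, 1.0, 1.0, 0.3, 0.4
T = 1.0

def rhs(t, z):
    G, L0, G0, D = z   # Gamma_t, Lambda^0_t, Gamma^0_t, Delta_t
    dG = G**2 - 2.0 * b * G - (q + qbar)
    dL0 = L0**2 - L0 * (b + bbar - G) + G * L0 - G * bbar - b * L0 + qbar * s
    dG0 = -2.0 * (b + bbar - G - L0) * G0 + L0**2 - 2.0 * L0 * bbar - s * qbar * s
    dD = -0.5 * (sigma**2 + sigma0**2) * G - 0.5 * sigma0**2 * G0 - sigma0**2 * L0
    return [dG, dL0, dG0, dD]

terminal = [q + qbar, -qbar * s, s * qbar * s, 0.0]     # values at t = T
sol = solve_ivp(rhs, (T, 0.0), terminal, dense_output=True, rtol=1e-9, atol=1e-12)
Gam0, Lam0 = sol.sol(0.0)[0], sol.sol(0.0)[1]
print("Gamma at t=0:", Gam0, "  Lambda^0 at t=0:", Lam0)  # feedback -Gamma_0 x - Lambda^0_0 mubar_0
```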
§.§ Discussion of the Solvability of the Riccati Equations
Standard ODE theory guarantees that there exists a unique solution to the system of equations for at least a short time. The only barrier to global existence is if the matrices Γ_t or Λ^0_t diverge (since the μ̅^2 equation is linear in Γ_t^0, it does not pose a barrier to global existence). An upper bound for Γ_t, in the sense of positive semidefinite matrices, always holds by a Gronwall argument: Γ_t≤ M_t, where M_t solves the linear ODE
Ṁ_t = -M_t b_t - b_t^⊤ M_t -(q_t+q̅_t), M_T=q+q̅.
A lower bound of Γ_t≥ 0 holds so long as q+q̅ and q_t+q̅_t remain positive semidefinite.
For Λ_t^0, we consider Λ̃_t= Γ_t+Λ_t^0, which solves:
Λ̇̃̇_t = Λ̃_t^⊤ Λ̃_t -b_t^⊤ Λ̃_t - Λ̃_t^⊤ (b_t+b̅_t) - (q_t+q̅_t- q̅_t s_t),
with Λ̃_T = q+q̅- q̅ s. We assume that q_t+q̅_t- q̅_t s_t is symmetric, and b̅_t is a scalar times the identity matrix, so that Λ̃_t remains symmetric.
Similar to the argument for Γ_t, there is a global solution so long as q_t+q̅_t- q̅_t s_t and q+q̅- q̅ s are positive semidefinite. This same result appears in <cit.>, where an example is also given that shows how solutions exist only for a finite time period if the positive semidefinite condition fails for the problem data (that is, q_t+q̅_t-q̅_t s_t≱0).
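To illustrate the last point numerically, here is a tiny sketch (scalar case with b_t = b̅_t = 0 and a hypothetical constant c standing in for q_t+q̅_t-q̅_t s_t) showing that when c < 0 the backward Riccati equation for Λ̃_t can blow up in finite time, so that a solution exists only on a short enough horizon.

```python
from scipy.integrate import solve_ivp

c, T = -1.0, 3.0          # c stands in for q + qbar - qbar*s; c < 0 violates the condition

def rhs(t, y):            # backward Riccati for tilde-Lambda with b = bbar = 0
    return y**2 - c

sol = solve_ivp(rhs, (T, 0.0), [c], rtol=1e-8)   # terminal condition tilde-Lambda_T = c
print(sol.status, sol.t[-1])   # the integrator typically fails near t = T - pi/4, signaling blow-up
```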
§.§ Comparison with the literature
The mean field game system with common noise
can be interpreted as the system of characteristics
for the master equation set on the Wasserstein space _2(^d)
of probability measures with finite second moment.
For the linear-quadratic data of (<ref>),
the master equation has the form
(see display (4.41) of Carmona-Delarue <cit.>):
- ∂_t U(t,x,μ) - 1/2 [(σσ^⊤ + σ^0 (σ^0)^⊤) ∂_xx^2 U(t,x,μ) ] - (b_t x + b̅_t μ̅_t) ·∂_x U(t,x,μ) + 1/2| ∂_x U(t,x,μ) |^2
- ∫_^d [ σ^0 (σ^0)^⊤∂_x∂_μ U(t,x,μ)(v) ] μ(dv) - 1/2∫_^d [(σσ^⊤ + σ^0 (σ^0)^⊤) ∂_v∂_μ U(t,x,μ)(v) ] μ(dv)
- 1/2∫_^d∫_^d [σ^0 (σ^0)^⊤∂_μμ^2 U(t,x,μ)(v,v') ] μ(dv) μ(dv')
- ∫_^d∂_μ U(t,x,μ)(v) · ( b_t v + b̅_t μ̅ -(∂_x U)(t,v,μ) ) μ(dv)= 1/2 x· q_t x + 1/2(x - s_t μ̅)·q̅_t (x - s_t μ̅) ,
(t,x,μ) ∈ [0,T) ×^d ×_2(^d)
U(T,x,μ) = 1/2 x· q x + 1/2 (x - sμ̅)·q̅ (x - sμ̅), (x,μ) ∈^d ×_2(^d)
Here, “∂_μ” is the gradient on the Wasserstein space P_2(),
which can formally be interpreted as “∂_v δ/δμ U(t,x,μ)(v),” with δ/δμ denoting the linear functional (i.e., Fréchet) derivative
in the vector space of all finite signed measures.
As mentioned above and as in display (22) of Cardaliaguet-Delarue-Lasry-Lions <cit.>,
the relationship between a solution (u_t(x), v_t(x), m_t(x))_ of the characteristic equations (<ref>)
and a solution U(t,x,μ) of
(<ref>) should be given by u_t(x) = U(t,x,m_t).
Hence, we expect to have the same ansatz
U(t,x,μ)=1/2 ( x ·Γ_t x + μ̅·Γ^0_t μ̅ +
2x ·Λ^0_t μ̅ )
+ Δ_t
We then compute
∂_t U(t,x,μ)
=
1/2 ( x ·Γ̇_t x + μ̅·Γ̇^0_t μ̅ +
2x ·Λ̇^0_t μ̅ )
+ Δ̇_t
∂_x U(t,x,μ)
=
Γ_t x + Λ^0_t μ̅, ∂_xx^2 U(t,x,μ)
=
Γ_t,
∂_μ U(t,x,μ)(v) =
Γ_t^0 μ̅ + x ·Λ^0_t, ∂_μμ^2 U(t,x,μ)(v) =
Γ_t^0,
∂_x ∂_μ U(t,x,μ)(v) =
Λ^0_t, ∂_v ∂_μ U(t,x,μ)(v) = 0.
Plugging these calculations
in the equation gives
- 1/2 ( x ·Γ̇_t x + μ̅·Γ̇^0_t μ̅ +
2x ·Λ̇^0_t μ̅ )
- Δ̇_t
- 1/2 [(σσ^⊤ + σ^0 (σ^0)^⊤) Γ_t ] - (b_t x + b̅_t μ̅_t) · ( Γ_t x + Λ^0_t μ̅ ) + 1/2| Γ_t x + Λ^0_t μ̅ |^2
- [ σ^0 (σ^0)^⊤Λ^0_t ]
- 1/2 [σ^0 (σ^0)^⊤Γ_t^0 ] - ( Γ_t^0 μ̅ + x ·Λ^0_t ) · ( b_t + b̅_t - Γ_t - Λ^0_t )μ̅
= 1/2 x· q_t x + 1/2 (x - s_t μ̅)·q̅_t (x - s_t μ̅).
We then arrive at the same set of equations as in Section <ref>.
§ MEAN FIELD GAME WITH CONTROLLED COMMON NOISE
Suppose we have a more general state process (X^_t)_ with dynamics of the form
dX_t = b(t,x,μ_t, α_t) dt
+ σ(t,x,μ_t,α_t) dW_t
+ σ^0(t,x,μ_t,α_t) dW^0_t.
Write a(t,x,μ,α):= ( σσ^⊤ +σ^0(σ^0)^⊤ ) (t,x,μ,α)
and define
α^*(t,x,μ, p,X,Q):=
_α{1/2 [a(t,x,μ,α) X ] + [σ^0(t,x,μ,α) Q ] + p · b(t,x,μ,α) + f(t,x,μ,α) }.
Then, given an ^^0-adapted measure flow = (μ_t)_, the stochastic HJB
will have the form: find a pair (u_t(x),v_t(x))_ of ^^0-adapted random fields such that
d_t u_t(x) = - 1/2 [a(t,x,μ_t,α^*(t,x,μ_t, ∂_x u_t(x), ∂_xx^2 u_t(x) ,∂_x v_t(x)) ) ∂_xx^2 u_t(x) ] dt
- [σ^0(t,x,μ_t,α^*(t,x,μ_t, ∂_x u_t(x), ∂_xx^2 u_t(x) ,∂_x v_t(x)) ) ∂_x v_t(x) ] dt
- ∂_x u_t(x) · b( t,x,μ_t,α^*(t,x,μ_t, ∂_x u_t(x), ∂_xx^2 u_t(x) ,∂_x v_t(x)) ) dt
- f( t,x,μ_t,α^*(t,x,μ_t, ∂_x u_t(x), ∂_xx^2 u_t(x) ,∂_x v_t(x)) ) dt + v_t(x) · dW^0_t
Besides being fully nonlinear, this stochastic HJB poses a new difficulty
of the optimizer α^* potentially introducing additional nonlinearities based
on the unknown random field v_t(x).
Fortunately, the lifted functional approach shows how to reduce consideration to a more
classical-looking scenario.
Indeed, the (fully nonlinear) compensated HJB equation involves finding a lifted functional û(t,x,ω,y) satisfying
- ∂_t û(t,x,ω,y) - 1/2 [A(t,x,m̂,α^*(t,x,m̂, ∂_x û, ∂_xx^2 û ,∂_xyû) ) D^2_(x,y)û ]
- ∂_x û· b( t,x,m̂,α^*(t,x,m̂ , ∂_x û, ∂_xx^2 û ,∂_xyû) ) = f( t,x,m̂,α^*(t,x,m̂, ∂_x û, ∂_xx^2 û ,∂_xyû) ) + _ω^y û ,
where D^2_(x,y) is the Hessian in (x,y) and where
A(t,x,μ,α) :=
[ ( σσ^⊤ +σ^0 (σ^0)^⊤)(t,x,μ,α) σ^0(t,x,μ,α); σ^0(t,x,μ,α) I_d ].
§.§ Linear Quadratic data for controlled volatility
For simplicity, we work in dimension d=1, though the manipulations below may be generalized to higher dimensions.
Set the linear-quadratic cost data similarly as before to
f(t,x,μ,α) := 1/2 ( |α -a̅_t|^2 + x^⊤ q_t x + (x - s_t μ̅)^⊤q̅_t(x - s_t μ̅) ),
g(x,μ) := 1/2 ( x^⊤ q x + (x - sμ̅)^⊤q̅ (x - sμ̅) ),
(so the only difference is that we add the given parameter a̅_t).
For the dynamics, we take
b(t,x,μ,α) : = b_t x+b̅_t μ̅, σ(t,x,μ,α):= σ, σ^0(t,x,μ,α):= α.
The optimality condition then becomes
α^*(t,x,μ, p,X,Q):=
_α{1/2α^2 X + α Q + 1/2 |α-a̅_t|^2 }
= a̅_t-Q/1+X,
in the case that X>-1 and a minimizer exists. The compensated HJB becomes
- ∂_t^y û -
1/2 (σ^2 + ( a̅_t-∂_xyû/1+∂_xx^2 û )^2 ) ∂_xx^2 û
- a̅_t-∂_xyû/1+∂_xx^2 û∂_xyû - 1/2Δ_y û
- ∂_x û ( b_t x+b̅_t μ̅ ) = 1/2 ( |a̅_t-∂_xyû/1+∂_xx^2 û -a̅_t |^2 + x^⊤ q_t x + (x - s_t μ̅)^⊤q̅_t(x - s_t μ̅) ).
Now let us suppose we adopt a similar ansatz as before, namely,
û(t,x,ω,y) =
1/2 ( x Γ_t x + μ̅(t,ω, y) Γ^0_t μ̅(t,ω, y) +
2x Λ^0_t μ̅(t,ω, y) )
+ Δ_t,
so that the optimal feedback function has the lifted form
α̂^*(t,ω,y)=
a̅_t-Λ^0 ∂_y μ̅(t,ω,y)/1+Γ_t
But this expression is a bit problematic because the term “∂_y μ̅(t,ω,y)”
will likely involve the control itself.
To resolve this issue, let us search for the optimal control among
deterministic C^1 functions of time = (β_t)_.
Indeed, given such a function, the associated state dynamics will have the form
dX^β_t = ( b_t X^β_t +b̅_t μ̅^β_t) dt
+ σ dW_t
+ β_t dW^0_t.
As before, we can take expectations of
this equation conditional on
^^0_t to get
dμ̅^β_t = (b_t + b̅_t) μ̅^β_t dt + β_t dW^0_t.
And again, as before, the lifted functional of μ̅^β_t = μ̅^β(t,^0,W^0_t) can be solved explicitly as
μ̅^β(t,ω, y)
= Φ_t (μ̅_0
+ ∫_0^t Φ^-1_s [ (b_s + b̅_s) β_s - β̇_s ] ω_s ds )
+ β_t y,
where (Φ_t)_ is the solution of
Φ̇_t = (b_t + b̅_t)Φ_t, Φ_0 = 1.
From this last expression, we can then compute directly
∂_t^y μ̅^β(t,ω,y)= (b_t + b̅_t) μ̅^β(t,ω,y), ∂_y μ̅^β(t,ω,y) = β_t.
Hence, given a flow of measures = (μ_t)_ determined by a deterministic C^1 control = (β_t)_,
the optimal control will satisfy (now removing the dependence on ω,y)
α̂^*_t =
a̅_t-Λ^0 β_t /1+Γ_t
But the mean field game consistency condition suggests we will have β_t = α̂^*_t, resulting in a readily solved equation for α̂^*_t, namely,
α̂^*_t =
a̅_t-Λ^0 α̂^*_t /1+Γ_t, so that α̂^*_t = (1+Γ_t + Λ^0_t)^-1a̅_t.
In particular, the optimal control α̂^*_t is a deterministic function of time
and the lifted function “μ̅(t,ω,y)” appearing in the ansatz for û(t,x,ω,y) may be taken to satisfy:
∂_t^y μ̅(t,ω,y)= (b_t + b̅_t) μ̅(t,ω,y), ∂_y μ̅(t,ω,y) = α̂^*_t = (1+Γ_t + Λ^0_t)^-1a̅_t.
At last, we can plug all these considerations into the compensated HJB to get
- 1/2 ( x Γ̇_t x + μ̅(t,ω, y) Γ̇^0_t μ̅(t,ω, y) +
2x Λ̇^0_t μ̅(t,ω, y) )
- Δ̇_t
- Γ^0_t (b_t + b̅_t) μ̅^2(t,ω, y) -
Λ^0_t (b_t + b̅_t) xμ̅(t,ω, y)
- 1/2 (σ^2 + ( a̅_t/1+Γ_t + Λ^0_t )^2 ) Γ_t
- ( a̅_t/1+Γ_t + Λ^0_t )^2 Λ^0 - 1/2Γ^0_t ( a̅_t/1+Γ_t + Λ^0_t )^2
- ( Γ_t x + Λ^0_t μ̅(t,ω, y) ) ( b_t x+b̅_t μ̅ ) = 1/2 ( |a̅_t/1+Γ_t + Λ^0_t -a̅_t |^2 + x q_t x + (x - s_t μ̅) q̅_t(x - s_t μ̅) ) ,
with terminal condition
û(T,x,ω,y) = 1/2 ( x^⊤ q x + (x - sμ̅(t,ω,y))^⊤ q̅ (x - sμ̅(t,ω,y)) ).
This leads to the following system of ODEs (that can be solved in the order presented):
* |x|^2 : Γ̇_t = -2 b_t Γ_t - ( q_t + q̅_t ) , Γ_T = q+q̅,
* x μ̅_t : Λ̇_t^0 = -Λ^0_t b_t - Γ_t b̅_t - Λ^0_t(b_t + b̅_t) + s_t q̅_t, Λ^0_T = - s q̅,
* μ̅_t^2 : Γ̇^0_t = -2 Γ_t^0(b_t + b̅_t) - 2 Λ^0_t b̅_t - s_t q̅_t s_t , Γ^0_T = s q̅ s,
* 1 : Δ̇_t = - 1/2 ( a̅_t/1+Γ_t + Λ^0_t )^2 ( Γ_t + 2Λ^0_t + Γ_t^0 ) - 1/2 ( σ^2 + a̅_t^2) , Δ_T = 0.
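As a quick illustration, the following sketch (scalar case, constant illustrative coefficients of our own choosing; scipy assumed available) integrates the linear equations for Γ_t and Λ^0_t backward and evaluates the resulting deterministic optimal volatility control α̂^*_t = (1+Γ_t + Λ^0_t)^-1a̅_t on a coarse time grid; the remaining equations for Γ^0_t and Δ_t do not affect the control and are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

b, bbar, s, q, qbar, abar = 0.1, 0.2, 0.5, 1.0, 1.0, 0.8   # illustrative constants
T = 1.0

def rhs(t, z):
    G, L0 = z                                      # Gamma_t, Lambda^0_t
    dG = -2.0 * b * G - (q + qbar)
    dL0 = -L0 * b - G * bbar - L0 * (b + bbar) + s * qbar
    return [dG, dL0]

sol = solve_ivp(rhs, (T, 0.0), [q + qbar, -s * qbar], dense_output=True, rtol=1e-9)
for t in np.linspace(0.0, T, 5):
    G, L0 = sol.sol(t)
    print(f"t={t:.2f}   alpha*_t = {abar / (1.0 + G + L0):.4f}")
```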
§.§ Discussion of Solvability of Riccati Equations
As the system of ODEs for Γ, Λ^0, and Γ^0 is linear, there always exists a unique solution. We require Γ_t>-1 in order for α^* to correspond to the minimum in the Hamiltonian. We then require Γ_t+Λ_t^0≠-1 so that there exists a fixed point. Both of these conditions hold in the case considered in Section <ref>, where we assume that q_t+q̅_t≥0, q+q̅≥ 0 and q_t+(1-s_t)q̅_t≥0, q+(1-s)q̅≥ 0, which implies that Γ_t≥ 0 and Γ_t+Λ_t^0≥ 0.
§.§ Comparison with the literature
As mentioned in the introduction, we do not know many references on mean field games
with control in the volatility
coefficient of the common noise except for the recent theoretical paper of Barasso-Touzi
<cit.>
and sporadic statements throughout Carmona-Delarue <cit.>.
However, we can still compare with an existing
explicitly solvable
model of controlled
volatility in a more
classical stochastic control setting.
For example,
Proposition 5.1 of Ankirchner-Fromm <cit.>
arrives at an optimal control that in our notation
would correspond to “a̅_t/1+Γ_t”.
It is interesting
that we instead arrive at a slightly
modified form “a̅_t/1+Γ_t + Λ^0_t,”
since the control is entangled with the additional unknown process “v_t(x)”, as is clear from the stochastic HJB equation (<ref>).
§ MEAN FIELD GAME WITH COMMON NOISE AND PARTIAL INFORMATION
Recall we formulated the mean field game problem with common noise and partial information
as Problem 4 of Section <ref>, where the observation noise is an independent Brownian motion with covariance Θ̃ under the original probability measure.
We will now work instead with a reference probability measure under which ^0 is still a standard Brownian motion,
but now the observation process itself is an independent Brownian motion with covariance Θ̃.
For given measures μ, η∈(^d) and a function p(x) on ^d,
we define the optimal feedback function under partial information as
α(t,μ,η,p) := _α∈^d{∫_^nη(dξ) [f(t,ξ,μ,α)+ p(ξ) · b(t,ξ,μ,α) ] }.
Next, for given flows = (μ_t)_, η = (η_t)_ of probability measures in (^d)
and of functions = (p_t(x))_ on ^d,
let ^, η, = (X^, η,_t)_ denote the solution to
dX_t = b(t,x,μ_t,α(t,μ_t, η_t,p_t)) dt + σ dW_t + σ^0 dW^0_t.
Lastly, define
M_t^, η, := exp{∫_0^t h(s,X_s^, η,) ·Θ̃^-1 dZ_s -1/2∫_0^t h(s,X_s^, η,) ·Θ̃^-1 h(s,X_s^, η,) ds },
which is a martingale under .
Then define an equivalent probability measure ^, η, by
d ^, η, = M_t^, η, d, = (μ_t)_, η = (η_t)_, = (p_t(x))_.
We may now articulate the mean field game system with common noise and partial information:
Given any probability measure λ∈_2(^d),
find an ^^0,-adapted quintuple
(u_t(x),v_t(x),k_t(x), μ_t(x), η_t(x))_
of random fields on [0,T] ×^d satisfying the following system, consisting of a stochastic HJB equation coupled with a forward Kushner equation:
d_t u_t(x) = ( - 1/2 [ a ∂_xx^2 u_t(x) ] - ∂_x u_t(x) · b(t,x, α̂(t,μ_t,η_t,∂_x u_t),μ_t) - f(t,x,α̂(t,μ_t, η_t,∂_x u_t),μ_t) ) dt
+ ( v_t(x) ·σ^0 dW^0_t - [ σ^0 ∂_x v_t(x) ]dt ) + k_t(x) ·Θ̃^-1 ( dZ_t - h(t,x,μ_t)dt )
d_t η_t(dx) = ( 1/2 [∂_xx^2 ( a η_t(dx) ) ] - div_x [η_t(dx) b(t,x, α̂(t,η_t,u_t),μ_t) ] )dt
- div_x [ η_t(dx) σ^0 dW^0_t ] + η_t(dx) ( h(t,x,μ_t) - h̅(t,μ_t) )^⊤Θ̃^-1 ( dZ_t - h̅(t,μ_t) dt )
u_T(x) = g(x,μ_T), η_0 = λ, μ_t(dx) = ^, η,∂_x u [ η_t(dx)| _t^^0], h̅(t,μ_t):= ∫_^d h(t,ξ,μ_t) η_t(dξ).
The relatively explicit fixed point condition “μ_t(dx) = ^, η,∂_x u [ η_t(dx)| _t^^0]” can be seen as a consequence of the so-called Kallianpur-Striebel formula, which realizes η_t as the conditional law of the state given _t^^0,.
It is considered part of the implicit consistency condition
required of the system (<ref>).
Indeed, compared with the concrete control formulation of Problem 4 in Section <ref>,
we are trading the implicit condition required of the partial information constraint on the controls for the fixed point condition required of the solution loop of the system (<ref>), just as in Bensoussan-Yam <cit.>.
§.§ Zakai-Stratonovich equation
We now assume that h(t,x,μ_t) = h(x), i.e., independent of t and μ_t.
To begin, we first trade the
nonlinear forward Kushner equation of (<ref>)
for its unnormalized counterpart, the so-called Zakai-Stratonovich equation:
d_t q_t(x) = ( 1/2 [ ∂_xx^2 ( σσ^⊤ q_t(x) ) ] - div_x [q_t(x) b(t,x, α(t,μ_t,η_t,u_t),μ_t) ] - q_t(x) 1/2 h(x)^⊤Θ̃^-1 h(x) )dt
- div_x [ q_t(x) σ^0 ∘ dW^0_t ] + q_t(x)h(x)Θ̃^-1∘ dZ_t
q_0(x) ∈ L^1_+(^d), η_t(dx) := q_t(x)/∫_^d q_t(ξ) dξ dx,
where ∘ denotes Stratonovich integration.
The point of the flow transformation method is to remove the noises in the above equation
via a suitable change of variables, thus reducing its solution
to a more classical, albeit random, PDE.
Following Section 3.4.2 of Souganidis <cit.>,
we will look for a solution of the form q_t(x) = S(t)w_t(x),
where S(t) is the solution map of the linear equation
d_t 𝔭_t(x) = - div_x [ 𝔭_t(x) σ^0 ∘ dW^0_t ] + 𝔭_t(x) h(x)Θ̃^-1∘ dZ_t.
More explicitly, this solution map is given by
S(t)f(x) = f(x-σ^0W^0_t) exp ( ∫_0^t h(x+σ^0(W^0_s - W^0_t))Θ̃^-1∘ dZ_s )
and thus
S^-1(t)f(x) = f(x+σ^0W^0_t) exp ( - ∫_0^t h(x+σ^0W^0_s)Θ̃^-1∘ dZ_s )
Now define
F(X,p,u,x,t) := 1/2 [σσ^⊤ X ] - p · K_t(x) + u · ( div_x K_t(x) - 1/2 h(x)^⊤Θ̃^-1 h(x) )
where here we employ the generic notation K_t(x) := b(t,x,μ_t, α(t,μ_t,η_t,∂_x u_t)).
Then Section 3.4.2 of Souganidis <cit.> shows that w_t(x) is a solution of the random PDE
∂_t w_t(x) = S^-1(t) F(∂_xx^2 [S(t)w_t(x)] , ∂_x [S(t)w_t(x)] , S(t)w_t(x), x,t ).
We thus have a functional dependence of the form
w_t(x) = w(t,x,^0,) := w(t,x,(W^0_s)_0≤ s < t,(Z_s)_0≤ s < t);
in particular, this line of reasoning shows how w_t(x) can be expressed as a functional of the paths of the strict prior history of the noises.
We will write out this dependence more explicitly in the system (<ref>) below,
where we will expand out the equation (<ref>).
Now further assume h(x) = Hx. Then the normalized measure takes the form
n_t(x) =
q_t(x)/∫_^d q_t(ξ) dξ
=
w_t(x-σ^0W^0_t) exp (Hx · Z_t ) /∫_^d w_t(ξ-σ^0W^0_t)
exp ( Hξ· Z_t ) dξ,
where it is significant that the stochastic integrals
arising from the solution map S(t) of (<ref>)
have canceled out.
Indeed, now that these stochastic integrals are gone,
we can conclude that n_t(x) admits the lifted functional
representation n̂(t,x,^0,, W^0_t,Z_t),
where
n̂(t,x,ω,γ,y,z) =
w(t,x-σ^0y,ω,γ) exp (Hx · z ) /∫_^dw(t,ξ-σ^0y,ω,γ)
exp ( Hξ· z ) dξ
Altogether, we expect the lifted functional form of the solution quintuple of the system (<ref>) to be
u_t(x) = û(t,x,^0,,W^0_t,Z_t), v_t(x) = (∂_y û)(t,x,^0,,W^0_t,Z_t), k_t(x) = (∂_z û)(t,x,^0,,W^0_t,Z_t),
m_t(x) = m̂(t,x,^0,W^0_t) with m̂(t,x,ω,y) = r(t,x-σ^0 y, ω),
n_t(x) = n̂(t,x,^0,,W^0_t,Z_t) with n̂(t,x,ω,γ,y,z) = w(t,x-σ^0y,ω,γ) exp (Hx · z ) /∫_^dw(t,ξ-σ^0y,ω,γ)
exp ( Hξ· z ) dξ,
where the triple (û(t,x,ω,γ,y,z), r(t,x,ω), w(t,x,ω,γ)) solves the following system of equations:
-∂_tû(t,x,ω,γ,y,z) -1/2 ( [a ∂_xx^2 û(t,x,ω,γ,y,z) ] + Δ_y û(t,x,ω,γ,y,z) )
- [σ^0 ∂_x ∂_y û(t,x,ω,γ,y,z) ] - 1/2[ Θ̃ ∂_zz^2 û(t,x,ω,γ,y,z)] - (∂_zû)(t,x,ω,γ,y,z) · Hx
= f (t,x, m̂(t,·,ω,y), α̂(t,m̂(t,·,ω,y),n̂(t,·,ω,γ,y,z), ∂_x û(t,·,ω,γ,y,z)) ) + _ω^y û(t,x,ω,γ,y,z),
∂_t w(t,x,ω,γ) = 1/2[σσ^⊤∂_xx^2 w(t,x,ω,γ)] + [σσ^⊤γ_t^⊤ H^⊤Θ̃^-1∂_x w(t,x,ω,γ)]
- div_x [ w(t,x,ω,γ) · b(t,x+σ^0 ω_t,m̂,α̂)] - w(t,x,ω,γ)
b(t,x+σ^0 ω_t,m̂,α̂) · H^⊤Θ̃^-1γ_t
+ w(t,x,ω,γ) ( 1/2[σσ^⊤γ_t^⊤ H^⊤Θ̃^-1 H γ_t] - 1/2 (x+σ^0ω_t)^⊤ H^⊤Θ̃^-1 H (x+σ^0ω_t) ),
∂_t r(t,x,ω)
= 1/2 [ σσ^⊤∂_xx^2 r(t,x,ω) ] + div_x [r(t,x,ω) b̅(t,x+σ^0ω_t,ω,ω_t) ],
u(t,x,ω,γ,y,z) = g(x, m̂(t,·,ω,y)), w(0,x,ω,γ)=r(0,x,ω) = ℓ(x).
Here, the “α̂” is given by
α̂(t,m̂(t,·,ω,y),n̂(t,·,ω,γ,y,z), ∂_x û(t,·,ω,γ,y,z))
:= _α∈^d{∫_^nn̂(t,ξ,ω,γ,y,z) [f(t,ξ,α,m̂(t,·,ω,y))+ ∂_x û(t,ξ,ω,γ,y,z) · b(t,ξ,α,m̂(t,·,ω,y)) ] dξ}.
and
b(t,x,m̂,α̂) :=b(t,x,m̂(t,·,ω,y),α̂(t,x,m̂(t,·,ω,y),n̂(t,·,ω,γ,ω_t,γ_t), ∂_x û(t,·,ω,γ,ω_t,γ_t))).
b̅(t,x,ω,y):=
^m̂,n̂,∂_x û [
b(t,x,m̂(t,·,ω,y),α̂(t,x,m̂(t,·,ω,y),n̂(t,·,ω,,ω_t,Z_t), ∂_x û(t,·,ω,,ω_t,Z_t))) |_t^^0 ]
This last definition
indicates that, in contrast to (<ref>),
the system (<ref>) is not quite path-by-path
in the sense that it ostensibly
requires an average to determine m̂(t,x,ω,y).
Also, despite how involved this last expression might seem,
it is straightforward to compute the conditional drift
b̅(t,x,ω,y) in our linear-quadratic setting.
§.§ Linear-quadratic MFG with common noise and partial information
In the case of partial information, we can proceed
very similarly as in the case of full information in Section <ref>,
but with a few important modifications.
We now make the ansatz
u_t(x) =
1/2 ( x ·Σ_t x + μ̅_t ·Σ^0_t μ̅_t + η̅_t ·Σ^1_t η̅_t +
2x ·Λ^0_t μ̅_t + 2 x ·Λ^1_t η̅_t )
+ Δ_t
so that
∂_x u_t(x)
= Σ_t x + Λ^0_t μ̅_t + Λ^1_t η̅_t
and thus the control feedback of (<ref>) is given by
α̂(t,η_t,∂_x u_t) := - ∫_^d∂_x u_t(ξ) η_t(d ξ)
= - (Σ_t + Λ^1_t) η̅_t - Λ^0_t μ̅_t
Thus, we have
dX_t = (b_tX_t - (Σ_t+Λ^1_t)η̅_t + (b̅_t - Λ^0_t) μ̅_t ) dt + σ dW_t + σ^0 dW^0_t.
Recalling the definition (<ref>) of ^,η,∂_x u,
we then take the ^,η,∂_x u-conditional expectation given
^^0_t to get
dμ̅_t = (b_t + b̅_t -Σ_t - Λ^0_t -Λ^1_t) μ̅_t dt + σ^0 dW^0_t.
Letting L̂_t := (b_t + b̅_t -Σ_t - Λ^0_t -Λ^1_t),
the lifted functional of μ̅_t has the form
μ̅(t,ω, y)
= Φ_t (μ̅_0
+ ∫_0^t Φ_s^-1 L̂_s σ^0 ω_s ds )
+ σ^0 y,
where (Φ_t)_ is the solution of
Φ̇_t = L̂_t Φ_t, Φ_0 = 1.
The equation for η̅_t needs to be
derived by computing the first moment
directly from the forward Kushner equation in (<ref>), which gives
d_t η̅_t
= ( (b_t - (Σ_t + Λ^1_t) - Π_t H^⊤Θ̃^-1H ) η̅_t + (b̅_t - Λ^0_t) μ̅_t ) dt
+ σ^0 dW^0_t
+Π_t H^⊤Θ̃^-1 dZ_t
where we define the variance Π_t := ∫_^dξ^2 η_t(dξ) - η̅_t^2.
If the initial condition is Gaussian, then this quantity is deterministic and classically satisfies
Π̇_t = σσ^⊤ + σ^0 (σ^0)^⊤
+ b_t^⊤Π_t + Π_t b_t - Π_t H^⊤ Θ̃_t^-1 H Π_t.
This procedure of estimating the state with η̅_t = 𝔼[X_t|ℱ_t^𝐖^0,𝐙] (classically without the presence of the common noise)
is commonly known as the Kalman filter in a discrete time context or as the Kalman-Bucy filter in a continuous time context (see the seminal work
Kalman-Bucy <cit.>).
So relying on this strong consequence of the Gaussian assumption,
we can solve the resulting linear equation for η̅_t explicitly.
More precisely, let
L_t:= b_t - (Σ_t + Λ^1_t) -Π_t H^⊤ Θ̃^-1 H
and consider the solution (Ψ_t)_ of
Ψ̇_t = L_t Ψ_t, Ψ_0 = 1.
Then
η̅(t,ω, γ, y,z)
= Ψ_t ( μ̅_0 + ∫_0^t Ψ_s^-1 (b̅_s - Λ^0_s) μ̅(s,ω, ω_s) ds + ∫_0^t Ψ_s^-1 L_s σ^0 ω_s ds )
+ Ψ_t ( ∫_0^t Ψ_s^-1 ( L_s Π_s - Π̇_s ) H^⊤Θ̃^-1γ_s ds ) + σ^0 y + Π_t H^⊤Θ̃^-1 z
where Π̇_t is given by (<ref>).
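Since Π_t is deterministic under the Gaussian assumption, it can be precomputed once and for all. The following is a minimal sketch (scalar case, purely illustrative constants of our own choosing) of integrating the forward Riccati equation for Π_t, i.e., the Kalman-Bucy covariance equation, with scipy.

```python
from scipy.integrate import solve_ivp

b, sigma, sigma0, H, Theta, Pi0, T = 0.1, 0.3, 0.4, 1.0, 0.5, 0.2, 1.0   # illustrative constants

def rhs(t, Pi):   # dot(Pi) = sigma^2 + sigma0^2 + 2 b Pi - Pi H Theta^{-1} H Pi  (scalar case)
    return sigma**2 + sigma0**2 + 2.0 * b * Pi - Pi * H * (1.0 / Theta) * H * Pi

sol = solve_ivp(rhs, (0.0, T), [Pi0], dense_output=True, rtol=1e-9)
print("Pi at t = T:", float(sol.sol(T)[0]))
```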
The lifted value function is given by
û(t,x,ω,γ,y,z) =
1/2 x·Σ_t x + x·Λ^0_t μ̅(t,ω,y)+1/2 μ̅(t,ω,y) ·Σ^0_t μ̅(t,ω,y)
+ 1/2 η̅(t,ω,γ,y,z) ·Σ^1_t η̅(t,ω,γ,y,z) + x·Λ^1_t η̅(t,ω,γ,y,z) + Δ_t.
Note the compensated time derivative ∂_t^y,z := ∂_t + ^y,z_ω,γ of (<ref>) will now involve both path variables ω and γ.
Then the compensated HJB equation will take the form
- ∂_t^y,zû(t,x,ω,γ,y,z) -1/2 ( [a ∂_xx^2 û(t,x,ω,γ,y,z) ] + Δ_y û(t,x,ω,γ,y,z) )
- [σ^0 ∂_x ∂_y û(t,x,ω,γ,y,z) ] - 1/2[ Θ̃ ∂_zz^2 û(t,x,ω,γ,y,z)] - ∂_zû(t,x,ω,γ,y,z) · Hx
- ∂_x û(t,x,ω,γ,y,z) · ( b_t x + b̅_t μ̅(t,ω, y) + K̂(t,ω,γ,y,z) )
= 1/2 ( x· q_t x + (x - s_t μ̅(t,ω, y))·q̅_t (x - s_t μ̅(t,ω, y)) ) + 1/2 | K̂(t,ω,γ,y,z) |^2,
where
K̂(t,ω,γ,y,z) := -(Σ_t + Λ^1_t) η̅(t,ω,γ,y,z) - Λ^0_t μ̅(t,ω,y)
with terminal condition
û(T,x,ω,γ,y,z) = 1/2 ( x· q x + (x - sμ̅(T,ω, y))·q̅ (x - sμ̅(T,ω, y)) ).
Now we may begin computing
the terms appearing in the lifted functional
backward equation (<ref>).
We first compute
∂_t^y,zμ̅(t,ω, y)
=
(b_t + b̅_t - (Σ_t + Λ^1_t) - Λ^0_t) μ̅(t,ω, y)
∂_t^y,zη̅(t,ω,γ,y,z) = L_t η̅(t,ω,γ,y,z)
+ (b̅_t - Λ^0_t) μ̅(t,ω, y)
∂_y μ̅(t,ω, y) = ∂_y η̅(t,ω,γ,y,z)
= σ^0, ∂_z η̅(t,ω,γ,y,z)
= Π_t H^⊤Θ̃^-1
We then compute
∂_t^y,zû(t,x,ω,γ,y,z)
= 1/2 ( x ·Σ̇_t x + 2x ·Λ̇^0_t μ̅(t,ω, y) + μ̅(t,ω, y) ·Σ̇^0_t μ̅(t,ω, y) )
+ 1/2η̅(t,ω,γ,y,z)^⊤Σ̇^1_t η̅(t,ω,γ,y,z) + x·Λ̇^1_t η̅(t,ω,γ,y,z) + Δ̇_t
+ (b_t + b̅_t -(Σ_t + Λ^1_t) - Λ^0_t)μ̅(t,ω, y) · (
Λ^0_t x + Σ^0_t μ̅(t,ω, y) )
+ (Σ^1_t^⊤ η̅(t,ω,γ,y,z) + Λ^1_t^⊤ x)· ( L_t η̅(t,ω,γ,y,z) + (b̅_t - Λ^0_t) μ̅(t,ω, y) )
and can further compute
∂_x û(t,x,ω,γ,y,z) =
Σ_t x
+ Λ^0_t μ̅(t,ω,y)+ Λ^1_t η̅(t,ω,γ,y,z), ∂_xx^2 û(t,x,ω,y) =
Σ_t
∂_y û(t,x,ω,γ,y,z)
= (σ^0)^⊤ ( Σ^0_t μ̅(t,ω,y) + Σ^1_t η̅(t,ω,γ,y,z)
+ (Λ^0_t + Λ^1_t)x ),
∂_yy^2 û(t,x,ω,γ,y,z)
= (σ^0)^⊤ (Σ^0_t+ Σ^1_t) σ^0,
∂_z û(t,x,ω,γ,y,z) = (Π_t H^⊤Θ̃^-1)^⊤ (Σ^1_t^⊤ η̅(t,ω,γ,y,z) + Λ^1_t ^⊤ x) ,
∂_zz^2 û(t,x,ω,γ,y,z) = (Π_t H^⊤Θ̃^-1)^⊤Σ^1_t Π_t H^⊤Θ̃^-1
∂_x ∂_y û(t,x,ω,γ,y,z)
=
(σ^0)^⊤ (Λ^0_t+Λ^1_t).
Inputting these calculations in the compensated equation gives
1/2 ( x ·Σ̇_t x + 2x ·Λ̇^0_t μ̅(t,ω, y) + μ̅(t,ω, y) ·Σ̇^0_t μ̅(t,ω, y) )
+ 1/2η̅(t,ω,γ,y,z)·Σ̇^1_t η̅(t,ω,γ,y,z) + x·Λ̇^1_t η̅(t,ω,γ,y,z) + Δ̇_t
+ (b_t + b̅_t -Σ_t -Λ_t^1- Λ^0_t)μ̅(t,ω, y) · (
Λ^0_t x + Σ^0_t μ̅(t,ω, y) )
+ (Σ^1_t^⊤ η̅(t,ω,γ,y,z) + Λ^1_t^⊤ x) · ( L_t η̅(t,ω,γ,y,z) + (b̅_t - Λ^0_t) μ̅(t,ω, y) )
+1/2[(σσ^⊤ + σ^0 (σ^0)^⊤) Σ_t ] + 1/2[σ^0 (σ^0)^⊤ (Σ^0_t+ Σ^1_t)] + [σ^0 (σ^0)^⊤ (Λ^0_t+Λ^1_t) ] + 1/2[ Π_t H^⊤ H Π_t^⊤Θ̃^-1Σ^1_t ]
+ (Σ^1_t^⊤ η̅(t,ω,γ,y,z) + Λ^1_t^⊤ x )·Π_t H^⊤Θ̃^-1 H x
+ ( Σ_t x
+ Λ^0_t μ̅(t,ω,y)+ Λ^1_t η̅(t,ω,γ,y,z) ) · ( b_t x + (b̅_t- Λ^0_t) μ̅(t,ω, y) - (Σ_t + Λ^1_t) η̅(t,ω,γ,y,z) )
= 1/2 ( x· q_t x + (x - s_t μ̅(t,ω, y))·q̅_t (x - s_t μ̅(t,ω, y)) ) + 1/2 | (Σ_t + Λ^1_t)η̅(t,ω,γ,y,z) + Λ^0_t μ̅(t,ω,y) |^2 = 0.
We now collect terms (symmetrizing for the squared terms) to arrive at the following closed system of Riccati equations (note we anticipate the coefficient of “μ̅η̅” is 0):
* |x|^2 : Σ̇_t = - Σ_t^⊤ b_t - b_t^⊤Σ_t - ( q_t + q̅_t ) + Λ^1_t Π_t H^⊤ Θ̃^-1 H+ H^⊤ Θ̃^-1 H Π_t Λ_t^1^⊤ , Σ_T = q+q̅
* x μ̅ : Λ̇_t^0 = -Λ^0_t^⊤ (b_t + b̅_t -Σ_t-Λ_t^1-Λ^0_t ) -Λ^1_t (b̅_t - Λ^0_t)-Σ_t^⊤ (b̅_t - Λ^0_t)- b_t^⊤Λ^0_t + q̅_t s_t ,
Λ^0_T = - q̅ s
* μ̅^2 : Σ̇^0_t = -( b_t + b̅_t-Σ_t -Λ_t^1-Λ^0_t )^⊤Σ^0_t-Σ_t^0^⊤ ( b_t + b̅_t-Σ_t-Λ_t^1 -Λ^0_t ) - Λ^0_t^⊤ ( b̅_t-Λ^0_t)
-( b̅_t-Λ^0_t)^⊤Λ^0_t-Λ_t^0^⊤ Λ_t^0- s_t^⊤q̅_t s_t, Σ^0_t = s^⊤q̅ s
* 1 : Δ̇_t = - 1/2[(σσ^⊤ + σ^0 (σ^0)^⊤) Σ_t ] - 1/2[ σ^0 (σ^0)^⊤ (Σ^0_t + Σ^1_t) ]
- [σ^0 (σ^0)^⊤ (Λ^0_t + Λ^1_t)]
- 1/2[ Π_t H^⊤ H Π_t^⊤Θ̃^-1Σ^1_t ] , Δ_T = 0
* η̅^2 : Σ̇^1_t = - ( L_t^⊤Σ^1_t + (Σ^1_t)^⊤ L_t ) + ( (Σ_t + Λ^1_t)^⊤Λ^1_t + ( Λ^1_t)^⊤ (Σ_t + Λ^1_t) ) - (Σ_t + Λ^1_t)^⊤ (Σ_t + Λ^1_t),
Σ^1_T = 0
* x η̅ : Λ̇^1_t = - Λ^1_t L_t - H^⊤ Θ̃^-1 H Π_t^⊤ Σ^1_t^⊤ + Σ_t^⊤ (Σ_t + Λ^1_t) - b_t^⊤Λ^1_t, Λ^1_T = 0,
* μ̅η̅ : 0=-Σ^1_t (b̅_t - Λ^0_t) - Λ^1_t (b̅_t - Λ^0_t) + Σ_t Λ^0_t - Σ_t Λ^0_t + (Σ^1_t + Λ^1_t) (b̅_t - Λ^0_t).
§.§ Discussion of Solvability of the Riccati Equations
Notice that the equations for Σ_t, Λ^0_t are quadratic
Riccati equations, while the equation for Σ^0_t
is linear.
Now observe that
L_t+ Π_t H^⊤Θ̃^-1 H = b_t - (Σ_t + Λ^1_t).
Hence, if we add the Σ_t^1 and Λ^1_t^⊤
equations together, we see that Υ_t := Σ_t^1 + Λ^1_t^⊤ solves
Υ̇_t = - L_t^⊤Υ_t - Σ^1_t^⊤(b_t - (Σ_t + Λ^1_t)) + (Σ_t + Λ^1_t)^⊤ Σ_t - Λ^1_t^⊤ b_t
+ ( (Σ_t + Λ^1_t)^⊤Λ^1_t + ( Λ^1_t)^⊤ (Σ_t + Λ^1_t) ) - (Σ_t + Λ^1_t)^⊤ (Σ_t + Λ^1_t)
= - L_t^⊤Υ_t + Υ_t (Σ_t + Λ^1_t) -Υ_t b_t.
Since the terminal condition is Υ_T = Σ^1_T + Λ^1_T^⊤ = 0, we have Υ_t ≡ 0
and thus Σ^1_t = - (Λ^1_t)^⊤ = -Λ_t^1.
Note this implies that the coefficient
for the μ̅η̅ term disappears,
thus reducing the problem to the case of the first six
equations.
To solve these equations, we can look at Γ_t := Σ_t - Σ^1_t
and sum together the first and sixth equation
along with Λ^1_t = -Σ^1_t and the definition of L_t to get
Γ̇_t = Γ_t^⊤Γ_t - b_t^⊤Γ_t - Γ_t^⊤ b_t - (q_t+q̅_t), Γ_T = q+q̅.
which is the same quadratic Riccati equation as
for “Γ_t” in the case of full information
in Section <ref>.
Similarly, the equation for Λ_t^0 can be rewritten as:
Λ̇_t^0 = -Λ^0_t^⊤ (b_t + b̅_t -Σ_t-Λ_t^1-Λ^0_t ) -Λ^1_t (b̅_t - Λ^0_t)-Σ_t^⊤ (b̅_t - Λ^0_t)- b_t^⊤Λ^0_t + q̅_t s_t
= (Λ^0_t)^⊤Λ^0_t -(Λ^0_t)^⊤ (b_t + b̅_t - Γ_t) + Γ_t^⊤Λ^0_t - Γ_t^⊤ b̅_t - b_t^⊤ Λ_t^0 + q̅_t s_t,
which is the same quadratic Riccati equation as
for “Λ^0_t” in the case of full information
in Section <ref>.
Now, following the approach in Section <ref>, we can consider Λ̃_t = Γ_t + Λ_t^0, which satisfies:
Λ̇̃̇_t = Λ̃_t^⊤Λ̃_t - b_t^⊤ Λ̃_t - Λ̃_t^⊤ (b_t+b̅_t) - (q_t+q̅_t-q̅_t s_t).
Once again, under the conditions that q_t+q̅_t-q̅_t s_t, q+q̅-q̅ s are symmetric and positive semidefinite and that b̅_t is a scalar times the identity, we see that Λ̃_t is also symmetric and positive semidefinite and thus a unique global solution exists.
Finally and most importantly,
the above manipulations embody the separation principle:
to go from the optimal feedback function (<ref>) in the case of full information
in Section <ref> to the case of partial information here,
one just needs to replace the state with the best guess of the state given
the common noise ^0 and the
partial observation .
This is exactly what we have just established.
[Separation Principle]
The optimal feedback control for a mean field game with common noise and a partial information constraint in the linear-quadratic framework with Gaussian initial condition has the linear feedback form
α̂(t,η_t,∂_x u_t) = - ∫_^d∂_x u_t(ξ) η_t(d ξ) = -Γ_t η̅_t - Λ^0_t μ̅_t,
where the coefficients Γ_t and Λ^0_t
satisfy the same equations as for the optimal feedback function
α^*(t,x) := -Γ_t x - Λ^0_t μ̅_t
in the case of full information.[We remind the reader that we also checked the consistency of these equations with the literature at the end of Section <ref>.]
Thus, the optimal control is determined “separately” from the partial observation in that
the latter only enters in the former through the conditional expectation η̅_t = 𝔼[X_t|ℱ_t^𝐖^0,𝐙], which solves the so-called Kalman filtering problem (again, see the seminal work <cit.> of Kalman-Bucy).
§.§.§ Comparison with the literature
The recent article <cit.> of Bensoussan-Yam has demonstrated a close connection between the partial observation control problem and mean field theory; more precisely,
they use a master equation approach for the linear quadratic partial information problem without mean field interactions. This approach allows them to prove a separation principle, i.e., that the optimal control is a linear feedback of the expected state given the observation process, but without requiring the standard simplifying assumption that the initial distribution is Gaussian (a significant assumption that our calculations of Section <ref> notably rely on).
It is noted that the complications from non-Gaussian initial conditions only arise in the Kalman filter equations to determine the distribution conditioned on the observations, whereas the fact one arrives at a linear feedback control (<ref>) should not change.
In the context of our paper, the Kalman filter corresponds to the mean flow (η̅_t)_
of the solution η = (η_t)_
to the forward Kushner equation in (<ref>).
In the Gaussian case with linear-quadratic data,
when computing the covariance Π_t from the Kushner equation,
a term involving the third moment naturally arises, but this can
be expressed in terms of the second moment,
thus leading to the deterministic Riccati equation (<ref>) for the covariance Π_t.
It is not yet clear to the authors whether the approach of <cit.> could be adapted
to the mean field game problem with partial information to generalize the solution outside of the case of Gaussian initial conditions.
§ DERIVATION, DIFFICULTIES, AND SOME CALCULATIONS
This optional appendix
first sketches how we derived the lifted functional approach. We then turn to some difficulties the reader might want to keep in mind
when pursuing this perspective.
Finally, we close with some enlightening
calculations involving the compensator based on the Fréchet derivative.
§.§.§ Derivation of the lifted functional approach
For the sake of simplicity, we take σ = σ^0 = I_d
and a quadratic Hamiltonian H(t,x,p) := 1/2|p|^2.
We then perform the change of variables x ↦ x-y
so that we can reduce the form of the lifted functional system (<ref>) to
finding a pair (u(t,x,ω,y), r(t,x,ω))
satisfying
-∂_t u(t,x,ω,y) - 1/2 ( Δ_x + Δ_y ) u(t,x,ω,y) + 1/2 | ∂_x u(t,x,ω,y)|^2 = f(t,x,y,r(t,·, ω)) + _ω^y u(t,x,ω,y),
∂_t r(t,x,ω)
= 1/2Δ_x r(t,x,ω) + div_x [r(t,x,ω)∂_x u(t,x,ω,ω_t) ],
u(T,x,ω,y) = g(x,y,r(T,·,ω)), r(0,x,ω) = ℓ(x),
where
f(t,x,y,ρ):= f(t,x+y,ρ(· - y)), g(x,y,ρ):= g(x+y,ρ(· - y)).
To make explicit the connection, once we find a solution to (<ref>),
we immediately recover a solution to the original system (<ref>)
by setting
û(t,x,ω,y) := u(t,x-y,ω,y), m̂(t,x,ω,y):= r(t,x-y,ω).
Now let = (B_t)_ be a d-dimensional Brownian motion independent
of .
Also, given
a path ω∈Ω,
we write
W^t,ω,y_s := ω_s 1_[0,t)(s) + [y + W_s - W_t ] 1_[t,T](s).
We consider the candidate solution of the compensated backward HJB of (<ref>) given by the BSDE representation
u(t,x,ω,y) := Y^t,x,ω,y_t, (t,x,ω,y) ∈ [0,T] ×^d
×Ω×^d,
where for each (t,x,ω,y) ∈ [0,T) ×^d ×Ω×^d, the
triple (Y^t,x,ω,y_s, (Z^t,x,ω,y_s, Γ_s^t,x,ω,y))_t ≤ s≤ T
satisfies the “lifted” BSDE (see Peng <cit.>)
Y^t,x,ω,y_s = g(B^t,x_T, W^t,y_T, r(T,·, ^t,ω, y ) )
+ ∫_s^T f(θ,B^t,x_θ, W^t,y_θ, r(θ, ·, ^t,ω, y ) ) dθ
- 1/2∫_s^T |Z^t,x,ω,y_θ|^2 dθ - ∫_s^T [Z^t,x,ω,y_θ· dB_θ + Γ_θ^t,x,ω,y· dW_θ ].
We now sketch how to go
from the candidate solution u(t,x,ω,y) := Y^t,x,ω,y_t
as in (<ref>) to the form of compensated backward HJB
of the system (<ref>).
First, it is readily seen that the concatenated path W^t,ω,y_· still satisfies the flow property:
W^s,W^t,ω,y_·,W^t,ω,y_s_r = W^t,ω,y_r, for t ≤ s ≤ r ≤ T.
This in turn will ensure we have
a corresponding flow property at the level of the BSDE:
u(t+ϵ,B^t,x_t+ϵ,W^t,ω,y,W^t,y_t+ϵ) = Y^t,x,ω,y_t+ϵ.
Arguing as in Theorem 3.2 of Pardoux-Peng <cit.>, this leads us to consider
the decomposition
u(t+ϵ,x,ω,y) - u(t,x,ω,y) = [u(t+ϵ,B^t,x_t+ϵ,W^t,ω,y,W^t,y_t+ϵ) - u(t,x,ω,y) ] (1st difference)
+ [u(t+ϵ,x,ω,y) - u(t+ϵ,B^t,x_t+ϵ,ω, W^t,y_t+ϵ)) ] (2nd difference)
+ [u(t+ϵ,B^t,x_t+ϵ,ω, W^t,y_t+ϵ)) - u(t+ϵ,B^t,x_t+ϵ,W^t,ω,y, W^t,y_t+ϵ) ] (3rd difference)
The first difference in (<ref>) can be expressed in terms of the BSDE by the flow property (<ref>), and thus upon dividing by ϵ>0, taking expectations, and letting ϵ→ 0,
it will contribute the term “1/2| ∂_x u(t,x,ω)|^2 - f(t,x,y,r(t,·, ω))”.
Next, by an application of the completely classical Itô formula, the second
difference in (<ref>) will contribute “-1/2(Δ_x + Δ_y)u(t,x,ω,y)”.
Finally, for the third difference of (<ref>), write X(ϵ):= B^t,x_t+ϵ, Y(ϵ):= W^t,y_t+ϵ, and
H^t,ω,y_s(ϵ) := [y-ω_s+W_s - W_t]1_[t,t+ϵ)(s).
Then the third difference of (<ref>) can be written as
u(t+ϵ,X(ϵ),ω, Y(ϵ)) - u(t+ϵ,X(ϵ),ω+H^t,ω,y(ϵ), Y(ϵ))
where X(ϵ),Y(ϵ) → x,y as ϵ→ 0. Hence, up to stochastic arguments that
will not contribute given suitable joint regularity, we identify the compensator (<ref>) as the limit of the final difference in (<ref>):
lim_ϵ→ 0ϵ^-1 [ u(t+ϵ,X(ϵ),ω+H^t,ω,y(ϵ), Y(ϵ)) - u(t+ϵ,X(ϵ),ω, Y(ϵ)) ] = _ω^y u(t,x,ω,y).
§.§.§ Some difficulties with the lifted functional approach
There are a few issues to deal with that the reader should keep in mind when adopting this perspective:
* The property of being a lifted functional is not a closed condition; for example, consider
ψ̂_ϵ(t,ω,y):= ϵ^-1∫_t-ϵ^t g(ω_s) ds + h(y)
Then as ϵ↓ 0, we have ψ̂_ϵ(t,ω,y) → g(ω_t) + h(y), which no longer separates the present value from the strict prior history.
* The compensated HJB method is not valid for functional data that is too sensitive to a jump nearby a fixed time. For example, consider the path-dependent heat equation (see Cosso-Russo <cit.> for the definition of the vertical ∂_ω^V and horizontal ∂_t^H path dependent derivatives):
-∂_t^H u(t,ω) - 1/2∂_ωω^V u(t,ω) = 0
u(T,ω) = G(ω).
This equation admits the candidate[See Chapter 11 of Zhang <cit.> or Cosso-Russo <cit.> (and references therein) for details on realizing this expression as a viscosity solution of a path dependent PDE.] solution u(t,ω) = [ G(^t,ω)], where
W^t,ω_s := ω_s 1_[0,t)(s) + [ω_t + W_s - W_t] 1_[t,T](s).
Now suppose the terminal condition G(ω) admits the lifted functional form G(ω) = Ĝ(ω,ω_t).
Then we would like to say that û(t,ω,y) := G(^t,ω,y) is a solution of
the compensated heat equation
-∂_t û(t,ω,y) - 1/2∂_yyû(t,ω,y) = _ω^y û(t,ω,y)
û(T,ω,y) = Ĝ(ω,y).
But this is not always true. Indeed, the choice G(ω) = Ĝ(ω,ω_T) = sup_0≤ s < T |ω_s| ∨ |ω_T| provides a counterexample.
Although one can show û(t,ω,y) := G(^t,ω,y) satisfies a certain classical heat equation (see Section 3.2 of Cosso-Russo <cit.> or Example 11.1.2(iii) of Zhang <cit.>),
the uniform metric is very sensitive to a jump nearby a fixed time, so one cannot
compute the compensator _ω^y û(t,ω,y). Thus, û(t,ω,y) cannot be realized as a solution of a compensated heat equation.
However, Laplace's principle allows us to approximate the uniform metric as
sup_0≤ s ≤ T |ω_s| = lim_N →∞ N^-1log∫_0^T e^N|ω_s|ds,
where each approximating terminal datum G_N(ω):= N^-1log∫_0^T e^N|ω_s|ds is not too sensitive to jumps.
One can show that the candidates û_N(t,ω,y) := G_N(W^t,ω,y) are C^2 solutions to compensated heat equations that converge to the viscosity solution û(t,ω,y) := sup_0≤ s ≤ T | W^t,ω,y_s| of the path-dependent heat equation with terminal data G(ω) = sup_0≤ s ≤ T |ω_s| (see the numerical sketch after this list).
* To expand on the previous point, the compensator _ω^y of (<ref>) seems to require leaving the framework of continuous paths.
In fact, equivalent formulas based on the Fréchet derivative for the compensator
_ω^y of even basic functionals naturally involve evaluating on paths that are either left or right continuous (or even neither! See the expression (<ref>) below).
Hence, there are at least a few reasons that one
may want to avoid working on the Skorokhod space of right continuous paths with left limits, in contrast to much of the literature on functional Itô formula and path dependent PDEs (though there are notable exceptions, like Cosso-Russo <cit.> and Zhang <cit.>).
Fortunately, one can adapt and extend the seminorm topology of Section 2.2 from Cosso-Russo <cit.> to our setting, which
formalizes the notion of a path dependent functional being “not too sensitive to a possible jump nearby any given fixed time t.”
Fix t ∈ [0,T]. Then for each fixed M>0,
consider the space _t,M([0,T];^d)
of paths bounded by M and continuous on [0,T] except for
possibly a jump at time t.
Endow _t,M([0,T];^d) with the topology
associated to the metric[Note here we use a more standard looking metric since the restriction to bounded paths allows us to avoid the arguably more abstract Frechet-type metric construction “∑_k=1^∞ 2^-k[ω-η]_t,k/1+[ω-η]_t,k,” which does not appear as good for checking estimates. ]
d_t(ω,η):= ∑_k=1^∞ 2^-k [ω-η]_t,k,
induced by an increasing countable family of seminorms of the form
[ω]_t,k := sup_0≤ s ≤ t-2^-k |ω_s| + |ω_t| + sup_t+2^-k≤ s ≤ T |ω_s|.
Then finally,
consider the space _t([0,T];^d):= ∪_M>0_t,M([0,T];^d) endowed
with the smallest topology such that all the inclusions _t,M([0,T];^d) ↪_t([0,T];^d) are continuous.[We remark that this “inductive topology” on _t([0,T];^d) is not metrizable.]
More concretely, η^N converges to η in _t([0,T];^d)
if there is an M>0 such that ‖η^N ‖_∞≤ M for all N, and for all k ≥ 1, [η^N - η]_t,k→ 0 as N →∞; in particular, sequences cannot form arbitrarily large jumps near the given time t, but are allowed to form a double jump at time t in the limit (which occurs naturally in (<ref>) below).
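To illustrate the Laplace approximation from the second bullet point numerically (a minimal sketch, not part of the original argument; the time step and the sampled Brownian path are our own choices), one can check on a discretized path that G_N(ω) approaches the running supremum as N grows:

import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 4000
dt = T / n
# a sampled one-dimensional Brownian path on [0, T]
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

sup_norm = np.max(np.abs(w))
for N in (1, 10, 100, 1000):
    # G_N(w) = N^{-1} log int_0^T exp(N |w_s|) ds, computed via a log-sum-exp for stability
    a = N * np.abs(w)
    G_N = (a.max() + np.log(np.sum(np.exp(a - a.max())) * dt)) / N
    print(N, round(G_N, 4), round(sup_norm, 4))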
In summary,
to rigorize the definition of compensator (<ref>),
we can restrict to strictly non-anticipative functionals ψ(t, ω) of continuous paths ω∈Ω that are not too sensitive to a possible formation of a jump nearby any given time s ∈ [0,t].
Despite ψ(t, ω) only being defined on continuous paths Ω,
such functionals admit a unique continuous extension
to each _s,M([0,T];^d) for any M>0, and thus to _s([0,T];^d), for any s ∈ [0,t].
This stronger continuity assumption for functionals ψ(t,ω) of continuous paths can also be shown to be compatible with the general Arzela-Ascoli criterion (Theorem 47.1 of Munkres <cit.>), which should be
convenient for a possible fixed point argument
for the main lifted functional mean field game system (<ref>).
§.§ Some compensator calculations with the Fréchet derivative
Suppose G(t,ω) is an ^d-valued non-anticipative functional
on [0,T] ×Ω, so for each t ∈ [0,T],
ω↦ G(t,ω) can be thought of as a function on C_0([0,t];^d).
Fix t ∈ [0,T].
We denote the Fréchet derivative of ω↦ G(t,ω)
by D_ω G(t,ω), which is an ^d-valued signed Radon
measure on [0,t], so for any η∈ C_0([0,t];^d),
⟨ D_ω G(t,ω), η⟩ = ∫_0^t η_s D_ω G(t,ω)(ds).
We write D_ω^t G(t,ω) := D_ω G(t,ω)({t}) δ_{t }
and D^⊥_ω G(t,ω) := D_ω G(t,ω) - D_ω^t G(t,ω)
to get the Lebesgue decomposition
D_ω G(t,ω) = D^⊥_ω G(t,ω) + D_ω^t G(t,ω)
Now if ω↦ G(t,ω) is continuous with respect
to the seminorm topology determined by d_t of (<ref>)
so it admits a unique extension to _t([0,T];^d),
then we can define its lifting by
Ĝ(t,ω,y):= G(t,ω+[y-ω_t] 1_{t}).
If y ↦Ĝ(t,ω,y) is differentiable,
then D_ω^t G(t,ω) = ∂_y Ĝ(t,ω,ω_t) δ_{t }.
If D^⊥_ω G(t,ω) is absolutely continuous
with respect to Lebesgue measure,
then we write its density as δ_ω G(t,ω)(r), r ∈ [0,t].
Supposing ω↦δ_ω G(t,ω)(r)
is also continuous with respect
to the seminorm topology determined by d_t of (<ref>),
we also write δ_ωĜ(t,ω,y)(r) = δ_ω G(t,ω+[y-ω_t] 1_{t})(r).
Putting everything together, we have
D_ω G(t,ω)(ds) = δ_ωĜ(t,ω,ω_t)(s) ds
+ ∂_y Ĝ(t,ω,ω_t) δ_{t }(ds).
Finally,
suppose G(t,ω) is strictly non-anticipative
and that for any 0 ≤ t ≤ s ≤ T,
both ω↦ G(s,ω) and ω↦δ_ω G(s,ω)(s)
are continuous with respect to d_t.
Then we can formally compute, for every t ∈ [0,T) and s ∈ [t,T],
the compensator (<ref>) of Ĝ(s,ω,y) as
_ω^y G(s,W^t,ω,y) = ∫_0^1 δ_ω G(s,W^t+,ω,y + θ [y-ω_t] 1_{t} )(t) · [y-ω_t] dθ
= ∫_0^1 δ_ω G(s,(1-θ) W^t+,ω,y + θ W^t,ω,y )(t) · [y-ω_t] dθ,
where W^t, ω, y_s was
defined in (<ref>) while its left-continuous version W^t+,ω,y
is defined as
W^t+,ω,y_s := ω_s 1_[0,t](s) + [y + W_s - W_t ] 1_(t,T](s).
As a prototype example, consider G(s,ω) = ∫_0^s F̂(r,ω,ω_r) dr,
where F̂(t,ω,y) is a lifted functional on [0,T] ×Ω×^d.
Then one can compute for 0 ≤ℓ≤ s,
δ_ω G(s,ω)(ℓ) = ∫_ℓ^s δ_ωF̂(r,ω,ω_r)(ℓ) dr
+ (∂_y F̂)(ℓ,ω,ω_ℓ).
For example, if F̂(t,ω,ω_t) := ∫_0^t h(ω_r)dr + g(ω_t),
then by combining the formula (<ref>) with the calculation
(<ref>), the compensator takes on the form
_ω^y G(s,W^t,ω,y) = (s-t) [h(y) - h(ω_t)] + g(y) - g(ω_t).
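As a sanity check of this closed form (a small numerical sketch; the specific choices h(x) = x^2, g(x) = sin x and the midpoint quadrature are ours, not from the text), one can evaluate the θ-integral in (<ref>) directly, using the density δ_ω G(s,ω)(t) = (s-t) h'(ω_t) + g'(ω_t) that follows from (<ref>) for this choice of F̂, and the fact that the interpolated path takes the value (1-θ)ω_t + θ y at time t:

import numpy as np

def compensator_numeric(h_prime, g_prime, s, t, omega_t, y, n=20001):
    # theta-integral from the Frechet-derivative formula for the compensator:
    # the interpolated path has value (1 - theta) * omega_t + theta * y at time t,
    # and delta_omega G(s, .)(t) = (s - t) h'(.) + g'(.) for this prototype G
    theta = (np.arange(n) + 0.5) / n          # midpoint rule on [0, 1]
    c = (1.0 - theta) * omega_t + theta * y
    integrand = ((s - t) * h_prime(c) + g_prime(c)) * (y - omega_t)
    return integrand.mean()

def compensator_closed(h, g, s, t, omega_t, y):
    # closed form: (s - t) [h(y) - h(omega_t)] + g(y) - g(omega_t)
    return (s - t) * (h(y) - h(omega_t)) + g(y) - g(omega_t)

h, h_prime = (lambda x: x**2), (lambda x: 2.0 * x)
g, g_prime = np.sin, np.cos
s, t, omega_t, y = 0.9, 0.3, -0.4, 1.2
print(compensator_numeric(h_prime, g_prime, s, t, omega_t, y))
print(compensator_closed(h, g, s, t, omega_t, y))

Both evaluations agree to the quadrature accuracy, as expected from the calculation above.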
§.§ Acknowledgments
The first author would like to thank many people: Daniel Lacker, for helping identify a critical error in an early reference, which, in order to fix, led to the discovery of the need for the compensator;
Andrea Cosso and Francesco Russo, for many helpful correspondences; Nizar Touzi, for pointing out useful references;
Nikiforos Mimikos-Stamatopoulos, for regular discussions of technical concepts; and finally and most importantly,
Takis Souganidis, who helped guide
the lifted functional perspective from its inception as well as provide many critical suggestions for this paper.
Institute of Particle and Nuclear Physics,
Henan Normal University, Xinxiang 453007, China
Institute of Particle and Nuclear Physics,
Henan Normal University, Xinxiang 453007, China
Institute of Particle and Nuclear Physics,
Henan Normal University, Xinxiang 453007, China
Institute of Particle and Nuclear Physics,
Henan Normal University, Xinxiang 453007, China
Institute of Particle and Nuclear Physics,
Henan Normal University, Xinxiang 453007, China
Institute of Physics,
Henan Academy of Sciences, Zhengzhou 455004, China
Institute of Particle and Nuclear Physics,
Henan Normal University, Xinxiang 453007, China
Inspired by the brilliant prospects of the ongoing B meson
experiments, the hadronic charmless B → SS decays
are studied by considering the next-to-leading order (NLO) contributions
with the QCD factorization approach, where S denotes the scalar
mesons K_0^∗(1430) and a_0(1450).
Branching ratios and CP violating asymmetries are estimated with
the updated values of hadronic parameters obtained from a
covariant light-front quark model, for two scenarios where the
scalar mesons are the 1^3P_0 and 2^3P_0 states.
It is found that the NLO contributions are very important for the B → SS decays.
For the B → a_0(1450)K_0^∗(1430) and
B_s → K_0^∗(1430)K_0^∗(1430) decays,
branching ratios can reach up to the order of O(10^-5)
when the scalar mesons are assumed to be the 1P states,
so these modes should be investigated first in future experiments.
Study of the nonleptonic charmless B → SS decays
with the QCD factorization approach
Lili Chen, Mengfei Zhao, Liting Wang, Yueyang Kang, Qin Chang, Junfeng Sun
July 31, 2023
======================================================================================
§ INTRODUCTION
According to the traditional quark model, the ^3P_0 states of the
quark-antiquark system, i.e., the P-wave spin-triplet states with
total angular momentum zero, have the quantum numbers J^P = 0^+
and are called scalar mesons.
The scalar mesons mostly appear as hadronic resonances
and have large decay widths.
Several resonances and decay channels can exist within
a short mass interval.
The overlaps between resonances and background make it
considerably difficult to resolve the scalar mesons.
In addition, di-boson combinations can also carry
the quantum numbers J^P = 0^+.
In contrast to the ground-state pseudoscalar and vector mesons,
the identification of the scalar mesons is a long-standing puzzle.
Understanding the internal structure of the scalar mesons is one of
the most interesting topics in hadron physics.
Generally, the scalar mesons have been identified as the ordinary
quark-antiquark qq̅ states, tetraquark qq̅qq̅
states, meson-meson molecular states or even those supplemented
with a scalar glueball.
There are many candidates with J^PC = 0^++ below 2 GeV,
which cannot be accommodated in one SU(3) flavor nonet satisfactorily.
From the mass spectrum of those scalar mesons and their
chromatic as well as electromagnetic decays, a prospective
picture (scenario 2, hereafter abbreviated as S2)
suggests that the isovector a_0(1450),
isodoublet K^∗_0(1430), isoscalar
f_0(1710) and f_0(1370) above 1 GeV can be assigned to
be a conventional SU(3) qq̅ scalar nonet with the
spectroscopy symbol of 1^3P_0 <cit.>,
while the scalar mesons a_0(980), K_0^∗(700)
(or κ), f_0(980) and f_0(500) (or σ)
below 1 GeV form the unconventional qq̅qq̅ exotic
nonet <cit.>.
Of course, the above assignments are tentative.
In alternative schemes, the scalar mesons with mass
below 1 GeV are interpreted as the lowest lying qq̅
states, while the scalars a_0(1450), K^∗_0(1430),
f_0(1710) and f_0(1370) are regarded as the radial excited
states with the spectroscopy symbol of 2^3P_0
(scenario 1, namely S1).
It is widely known that the B mesons have rich decay modes.
The light scalar mesons can be produced in the B meson decays.
The B meson hadronic decays involving the final scalar mesons
provide another efficient way to investigate the features and
the possible inner structures of the scalar mesons.
Experimentally, some of the B → SP, SV, SS, SX
decays (where the symbols of S, P, V and X denote the
light scalar mesons, pseudoscalar mesons, vector mesons and
other particles, respectively), such as the B →
K^∗_0(1430)^+π^-,
K^∗_0(1430)^+ω,
K^∗_0(1430)^0K^∗_0(1430)^0,
K^∗_0(1430)^0π^+γ decays,
have been measured by Belle, BaBar and LHCb groups <cit.>.
With the running of the high-luminosity Belle-II and LHCb experiments
and the coming CEPC, FCC-ee and HL-LHC experiments,
more and more data on B meson decays will become available.
More and more B → SP, SV, SS, SX
decays can then be discovered and investigated with ever higher
measurement precision, which lays a solid
experimental foundation for carefully studying the scalar
mesons and distinguishing between theoretical models.
Phenomenologically, many of the B → SP, SV, SS, SX decays
have been studied extensively with various theoretical models.
For example, the study of the B → SP, SV decays with
the QCD factorization (QCDF) approach <cit.>,
the B → SP, SV, SS decays with
the perturbative QCD (PQCD) approach <cit.>,
the semileptonic B → SX decays with the
sum rules and other approaches <cit.>, and so on.
It is easy to imagine that various scenarios,
such as S1 and S2, will inevitably give different theoretical
predictions on the B meson decays.
It is noteworthy that some studies have shown that
branching ratios for the charmless B → SS decays
with the PQCD approach can be very large, for example,
B(B_s→K^∗_0(1430)K^∗_0(1430))
∼ O(10^-4) <cit.>,
B(B→K^∗_0(1430)σ) ∼ O(10^-4)
<cit.>,
B(B_s→σσ) ∼ O(10^-4),
B(B_s→σf_0(980)) ∼ O(10^-4),
B(B_s→f_0(980)f_0(980)) ∼ O(10^-4),
B(B→σσ) ∼ O(10^-5)
<cit.>,
B(B→a_0(980)a_0(980)) ∼ O(10^-5)
<cit.>.
And the more striking phenomena are that branching ratios for
the pure annihilation B → SS decays, which might be very
small by intuition, are very large with the PQCD approach, for example,
B(B_s→a_0(980)a_0(980)) ∼ O(10^-5),
B(B_s→a_0(1450)a_0(1450)) ∼ O(10^-5),
B(B_d→κ^+κ^-) ∼ O(10^-6),
B(B_d→K^∗_0(1430)^+K^∗_0(1430)^-)
∼ O(10^-6) <cit.>.
So the study of the B → SS decays is very promising and
tempting both theoretically and experimentally.
In order to deepen our understanding on the properties of
the light scalar mesons and provide the ongoing and coming
experimental analysis with additional theoretical references,
in this paper, we will study the nonleptonic charmless
B → SS decays with the QCDF approach,
by considering scenarios S1 and S2 for the scalar mesons,
where S = K_0^∗(1430) and a_0(1450).
This paper is organized as follows.
In Section <ref>, the theoretical framework
is briefly reviewed, and the next-to-leading order
effective coefficients for the B → SS decays
and the weak annihilation amplitudes are given with
the QCDF approach.
In Section <ref>, the values of the
nonperturbative input parameters are fixed.
The numerical results and our comments are
presented in Section <ref>.
Finally, we conclude with a summary in Section <ref>.
The decay amplitudes are displayed in the Appendix.
§ THEORETICAL FRAMEWORK
§.§ The effective Hamiltonian
The low-energy effective Hamiltonian for the charmless
nonleptonic B → SS decays is written as
<cit.>,
H_ eff = G_F/√(2)∑_q=d,s{ V_ub V_uq^∗[ C_1(μ)O_1(μ)+ C_2(μ)O_2(μ) ]
-
V_tb V_tq^∗[
∑_i=3^10 C_i(μ)O_i(μ)
+ C_7γ(μ)O_7γ(μ)
+ C_8g(μ) O_8g(μ) ] }
+ h.c.,
where the Fermi constant G_F and the
Cabibbo-Kobayashi-Maskawa (CKM) matrix elements V_ij
have been well determined experimentally <cit.>.
The Wilson coefficients C_i, which summarize the
short-distance physical contributions, are in principle computable
with the perturbative theory order by order at the scale of
μ = m_W, and can be evaluated to the energy scale
of the B meson decays μ ∼ m_b with the
renormalization group equation (RGE) <cit.>,
where m_W and m_b are the mass of the gauge
boson W of the weak interactions and the heavy b
quark mass, respectively.
The remaining theoretical work is to calculate the hadronic
matrix elements (HMEs),
⟨S_1S_2|O_i|B⟩,
where the local four-quark effective operators O_i
are sandwiched between the initial B meson and
the final scalar mesons.
In order to generate the essential strong phases for CP
violations in the hadronic B meson decays,
and cancel the unphysical μ-dependence of
decay amplitude A =
⟨S_1S_2| H_ eff|B⟩
originating from the Wilson coefficients,
the high order radiative corrections to HMEs are necessary
and should be taken into consideration.
However, the perturbative contributions embedded in HMEs
become entangled with the nonperturbative contributions,
which makes the theoretical calculations extremely complicated.
How to properly and reasonably evaluate HMEs of the hadronic
B meson decays has been an academic focus.
§.§ The QCDF decay amplitudes
The QCDF approach <cit.> is one of
many QCD-inspired phenomenological remedies to deal with HMEs.
Based on the power counting rules in the heavy quark limits
and an expansion series in the strong coupling α_s
assisted by the collinear approximation, the long- and
short-distance contributions are factorized.
The nonperturbative contributions in HMEs are either power
suppressed by 1/m_b
or incorporated into the hadronic transition form factors
and mesonic distribution amplitudes (DAs).
Up to the leading power corrections of order 1/m_b,
the QCDF factorization formula for HMEs concerned is
written as <cit.>,
⟨S_1S_2|O_i(μ)|B⟩ = ∑_j F_j^B→S_1 f_S_2∫dy T_ij^I(y) ϕ_S_2(y)
+ (S_1↔S_2)
+
f_B f_S_1 f_S_2∫ dx dy dz T_i^II(x,y,z) ϕ_S_1(x) ϕ_S_2(y) ϕ_B(z)
,
where x, y and z are the longitudinal momentum fractions
of the valence quarks.
The form factors F_j^B→S,
the decay constants f_B and f_S,
the mesonic light cone DAs ϕ_B and ϕ_S,
all of them are the nonperturbative parameters.
These parameters are regarded to be universal
and process-independent, and can be obtained from the
experimental data, lattice QCD simulation, QCD sum rules,
or by comparison with other exclusive processes.
T^I and T^II are the hard-scattering
functions describing the local interactions among quarks
and gluons at the B meson decay scale.
They are, in principle, perturbatively calculable to
all orders in α_s at the leading power
order of 1/m_b.
At the leading order (LO) α_s^0,
T^I = 1 and T^II = 0.
The convolution integrals of T^I and ϕ_S
result in the decay constant of the emission scalar mesons.
One can return from the QCDF formula Eq.(<ref>)
to the naive factorization (NF) approximations
<cit.>, i.e.,
the four-quark HMEs can be written as the product of
two diquark HMEs, and the diquark HMEs can be replaced
by HMEs of the corresponding hadronic currents and then
further parameterized by hadronic transition form factors
and decay constants.
Beyond the order α_s^0,
the radiative corrections to HMEs make T^I,II
no longer trivial, and some information about the
CP-violating strong phases and μ-dependence
of HMEs can be retrieved naturally.
With the QCDF approach, the amplitudes for the concerned
B → SS decays can be generally written as,
A = ⟨S_1S_2| H_ eff|B⟩ = G_F/√(2)∑_iλ_i∑_j=1^10 a_j⟨S_1S_2|O_j|B⟩_ NF,
where the parameter λ_i is the product of
the CKM elements; the coefficient a_j including the
nonfactorizable contributions beyond the leading order
of α_s is the combinations of
the Wilson coefficients; HMEs
⟨S_1S_2|O_j|B⟩_ NF
are defined and evaluated with the NF approximation.
§.§ The QCDF coefficients
To simplify the decay amplitude expressions,
we will use the notations in Refs. <cit.>
and write the QCDF coefficients as follows.
α_1(S_1 S_2) = a_1(S_1 S_2)
,
α_2(S_1 S_2) = a_2(S_1 S_2)
,
α_3^p(S_1 S_2) =
a_3^p(S_1 S_2)
+ a_5^p(S_1 S_2)
,
α_4^p(S_1 S_2) =
a_4^p(S_1 S_2)
+ γ̅_χ^S_2
a_6^p(S_1 S_2)
,
α_3,EW^p(S_1 S_2) =
a_9^p(S_1 S_2)
+ a_7^p(S_1 S_2)
,
α_4,EW^p(S_1 S_2) =
a_10^p(S_1 S_2)
+ γ̅_χ^S_2
a_8^p(S_1 S_2)
,
where S_1 denotes the recoiled scalar meson
which absorbs the light spectator quark of the
initial B mesons,
and S_2 denotes the emitted scalar meson.
The ratio γ̅_χ^S
is defined as
γ̅_χ^S(μ) = γ_χ^S(μ) μ̅_S^-1(μ) = 2 m_S/m_b(μ),
γ_χ^S(μ) = 2 m_S^2/m_b(μ)
[ m_1(μ)-m_2(μ) ] ,
μ̅_S(μ) = m_S/m_1(μ)-m_2(μ),
where m_S is the mass of the emission scalar meson,
and the μ-dependent m_i is the MS-bar running quark mass, which can be
evaluated with the RGE.
m_1 and m_2
correspond to the two valence quarks in a
scalar meson.
Up to the next-to-leading order (NLO) in the coupling
α_s, the general form of the QCDF
coefficients a_i^p is expressed as,
a_i^p(S_1 S_2) = ( C_i + C_i±1/N_c) N_i(S_2)
+ P_i^p(S_2)
+ C_i±1/N_c C_F α_s/4π[ V_i(S_2) + 4π^2/N_c H_i(S_1 S_2) ]
,
where the superscript p is to be omitted for
i = 1 and 2, and
the upper (lower) signs apply when i is odd (even).
C_i is the Wilson coefficients,
the color factor C_F = (N_c^2-1)/(2 N_c)
and the color number N_c = 3.
Due to the relations between the scalar and vector decay
constants for the scalar meson (see ),
the factor N_i(S_2) is
N_i(S_2) = {[ 1 for i = 6,8;; μ̅_S^-1 others. ]
In Eq.(<ref>), the terms proportional to N_i(S_2)
are the LO contributions.
It is obvious that except for the coefficients of a_6,8,
the LO contributions are proportional to the mass difference
Δm = m_1 - m_2.
For the scalar mesons consisting of light quarks,
it is well known that the mass difference Δm
is usually very small.
It is therefore easy to see that the LO contributions are suppressed
by the chiral factors, and that the NLO contributions would be
necessary and important for the B → SS decays.
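To get a feel for the size of this suppression (a rough numerical sketch; the light-quark running masses used below are only indicative values, not inputs quoted in this paper), one can estimate N_i(S_2) = μ̅_S^-1 = (m_1 - m_2)/m_S for the emitted a_0(1450) and K_0^∗(1430) mesons:

# rough estimate of the chiral suppression factor N_i(S_2) = (m_1 - m_2) / m_S
# the quark masses below are indicative running-mass values (assumed here)
m_u, m_d, m_s = 0.0022, 0.0047, 0.095   # GeV
m_a0, m_K0st = 1.45, 1.43               # GeV

N_a0 = abs(m_d - m_u) / m_a0            # a_0(1450): (u, d) valence quarks
N_K0st = abs(m_s - m_d) / m_K0st        # K_0^*(1430): (s, d) valence quarks
print(f"N_i(a_0)   ~ {N_a0:.4f}")       # ~ 0.002, consistent with the value quoted later in the text
print(f"N_i(K_0^*) ~ {N_K0st:.3f}")     # ~ 0.06, still well below 1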
The terms proportional to α_s are the NLO
contributions, including the vertex corrections
V_i(S_2), penguin contributions P_i^p(S_2),
and hard spectator scattering amplitudes
H_i(S_1 S_2).
When the emission S_2 meson can be decoupled from
the B-S_1 system,
corresponding to the first line in Eq.(<ref>),
V_i(S_2) and P_i^p(S_2) are written as
the convolution integrals of hard scattering kernels
T^I(y) and mesonic DAs ϕ_S_2(y).
When the initial B meson is entangled with the final
states by the hard spectator scattering interactions,
H_i(S_1 S_2) are written as the convolution
integrals of hard scattering kernels T^II and
all participating mesonic DAs, corresponding to the
second line in Eq.(<ref>).
For the B → SS decays, the explicit expressions
of V_i(S_2), P_i^p(S_2) and
H_i(S_1 S_2) have been shown in our previous
paper <cit.> by using the replacements
of the Gegenbauer moments a_i^M_j → b_i^S_j,
the chiral factor γ_χ^M_i
→ γ̅_χ^S_i,
and DAs ϕ_M_i → ϕ_S_i.
For example, by integrating out the momentum fraction,
H_i(S_1 S_2) can be expressed as the functions
of the Gegenbauer moments embedded in the mesonic DAs.
H_i(S_1 S_2) =
{[ 0, for i = 6,8;; - B_S_1 S_2/ A_S_1 S_2 m_B/λ_B[ 9 ∑_m=0^3 b_m^S_1∑_j=0^3(-1)^j b_j^S_2
-3 γ̅_χ^S_1 X_H∑_k=0^3 b_k^S_2], for i = 5,7;; B_S_1 S_2/ A_S_1 S_2 m_B/λ_B[ 9 ∑_m=0^3 b_m^S_1∑_j=0^3 b_j^S_2
-3 γ̅_χ^S_1 X_H∑_k=0^3(-1)^k b_k^S_2], others ].
with the common factors are
A_S_1 S_2 =
i G_F/√(2) U_0^B S_1(m_S_2^2) f̅_S_2 ( m_B^2-m_S_1^2)
,
B_S_1 S_2 =
i G_F/√(2) f_B f̅_S_1 f̅_S_2,
m_B/λ_B = ∫_0^1 dz ϕ_B(z) / z ,
X_H = ∫_0^1 dx / 1-x ,
where U_0^B S_1 is the B → S_1 transition form factor,
f_B is the decay constant for the B meson,
f̅_S_i is the scalar decay constant
for the scalar mesons,
the quantity λ_B is used to
parameterize our ignorance about the B mesonic DAs,
and the phenomenological parameter X_H is
introduced to regularize the end point singularities.
In addition, according to many practical applications of
the QCDF approach to two-body hadronic B decays,
such as Refs. <cit.>, it was shown that the weak annihilation (WA)
contributions are important and worthy of consideration,
although they are formally power suppressed relative
to the LO contributions based on the QCDF power counting
rules in the heavy quark limits.
The QCDF coefficients of the WA amplitudes for
the B → SS decays have the same expression
as those in Eq.(55) of Ref. <cit.>,
i.e.,
β_i^p =
- B_S_1 S_2/ A_S_1 S_2
b_i^p,
b_1 = C_F/ N_c^2 C_1 A_1^i
, b_2 = C_F/ N_c^2 C_2 A_1^i,
b_3^p = C_F/ N_c^2 [ C_3 A_1^i
+ C_5 ( A_3^i + A_3^f )
+ N_c C_6 A_3^f]
,
b_4^p = C_F/ N_c^2 [ C_4 A_1^i
+ C_6 A_2^i]
,
b_3,EW^p = C_F/ N_c^2 [ C_9 A_1^i
+ C_7 ( A_3^i + A_3^f )
+ N_c C_8 A_3^f]
,
b_4,EW^p = C_F/ N_c^2 [ C_10 A_1^i
+ C_8 A_2^i]
,
and the building blocks are respectively written as the
functions of the Gegenbauer moments.
A_1^i ≈
2 π α_s{
9 [ b_0^S_1(
b_0^S_2 ( X_A - 4 + π^2/ 3 )
+ b_2^S_2 ( 6 X_A - 107/3 + 2 π^2 )
+ b_1^S_2 ( 3 X_A + 4 - π^2 )
+ b_3^S_2 ( 10 X_A + 23/18 - 10/3 π^2 )
)
- b_1^S_1 (
b_0^S_2 ( X_A + 29 - 3 π^2 )
+ b_2^S_2 ( 6 X_A +754 -78 π^2 )
+ b_1^S_2 ( 3 X_A - 213 + 21 π^2 )
+ b_3^S_2 ( 10 X_A - 12625/6 +210 π^2 )
)
+ b_2^S_1 (
b_0^S_2 ( X_A - 119 + 12 π^2 )
+ b_2^S_2 ( 6 X_A-9609+972 π^2 )
+ b_1^S_2 ( 3 X_A +1534-156 π^2 )
+ b_3^S_2 ( 10 X_A+118933/3-4020 π^2 )
)
- b_3^S_1 (
b_0^S_2 ( X_A+2956/9-100/3 π^2 )
+ b_2^S_2 ( 6 X_A+198332/3-6700 π^2 )
+ b_1^S_2 ( 3 X_A-20743/3+700 π^2 )
+ b_3^S_2 ( 10 X_A-3585910/9+121100/3 π^2 )
) ]
- γ̅_χ^S_1 γ̅_χ^S_2 X_A^2},
A_2^i ≈
2 π α_s{
9 [ b_0^S_2(
b_0^S_1 ( X_A - 4 + π^2/ 3 )
+ b_2^S_1 ( 6 X_A - 107/3 + 2 π^2 )
- b_1^S_1 ( 3 X_A + 4 - π^2 )
- b_3^S_1 ( 10 X_A + 23/18 - 10/3 π^2 )
)
+ b_1^S_2 (
b_0^S_1 ( X_A + 29 - 3 π^2 )
+ b_2^S_1 ( 6 X_A +754 -78 π^2 )
- b_1^S_1 ( 3 X_A - 213 + 21 π^2 )
- b_3^S_1 ( 10 X_A - 12625/6 +210 π^2 )
)
+ b_2^S_2 (
b_0^S_1 ( X_A - 119 + 12 π^2 )
+ b_2^S_1 ( 6 X_A-9609+972 π^2 )
- b_1^S_1 ( 3 X_A +1534-156 π^2 )
- b_3^S_1 ( 10 X_A+118933/3-4020 π^2 )
)
+ b_3^S_2 (
b_0^S_1 ( X_A+2956/9-100/3 π^2 )
+ b_2^S_1 ( 6 X_A+198332/3-6700 π^2 )
- b_1^S_1 ( 3 X_A-20743/3+700 π^2 )
- b_3^S_1 ( 10 X_A-3585910/9+121100/3 π^2 )
) ]
- γ̅_χ^S_1 γ̅_χ^S_2 X_A^2},
A_3^i ≈
-6 π α_s{γ̅_χ^S_1 [
b_0^S_2 ( X_A^2-2 X_A+π^2/3 )
+6 b_2^S_2 ( X_A^2-16/3 X_A+15/2+π^2/3 )
+3 b_1^S_2 ( X_A^2-4 X_A+4+π^2/3 )
+10 b_3^S_2 ( X_A^2-13/9 X_A+191/18+π^2/3 )
]
+γ̅_χ^S_2 [
b_0^S_1 ( X_A^2-2 X_A+π^2/3 )
+6 b_2^S_1 ( X_A^2-16/3 X_A+15/2+π^2/3 )
-3 b_1^S_1 ( X_A^2-4 X_A+4+π^2/3 )
-10 b_3^S_1 ( X_A^2-13/9 X_A+191/18+π^2/3 )
] },
A_1^f = A_2^f = 0
,
A_3^f ≈
-6 π α_s X_A {γ̅_χ^S_1 [
b_0^S_2 ( 2 X_A-1 )
+ b_2^S_2 ( 12 X_A-31 )
+ b_1^S_2 ( 6 X_A+11 )
+ b_3^S_2 ( 20 X_A-187/3 )
]
-γ̅_χ^S_2 [
b_0^S_1 ( 2 X_A-1 )
+ b_2^S_1 ( 12 X_A-31 )
- b_1^S_1 ( 6 X_A+11 )
- b_3^S_1 ( 20 X_A-187/3 )
] },
where X_A has a similar definition and role to the
parameter X_H in Eq.(<ref>)
to regularize the end point divergence appearing in the
weak annihilation topologies.
With the QCDF approach, X_H and X_A are usually
parameterized as
X_H = ln( m_B/Λ_h)
(1+ρ_H e^i ϕ_H)
,
X_A = ln( m_B/Λ_h)
(1+ρ_A e^i ϕ_A)
,
with Λ_h = 0.5 GeV <cit.>,
and ρ_H,A and ϕ_H,A are the
undetermined parameters.
Theoretically, X_H and X_A are respectively related
to the contributions from hard spectator scattering and
weak annihilation, so their physical implications are
different in nature.
What's more, these parameters should depend on the specific
process and hadrons, because they actually originate from the
convolution integrals of hard scattering functions and
hadronic DAs.
In the practical application of the QCDF approach,
X_H and X_A are usually and approximately
regarded as the universal quantities to reduce the number
of phenomenological model parameters.
Here, we will consider two special cases. One case (C1) is to
use as few parameters as possible, for example,
ρ_H = ρ_A = 1 and
ϕ_H = ϕ_A = -55^∘
<cit.>.
The other case (C2) is that the factorizable and nonfactorizable
WA contributions are treated independently,
and two quantities X_A^f and X_A^i are
introduced to replace X_A.
A global fit on the B → PP decays
with an approximation X_H ≈ X_A^i
gives (ρ_A^i, ϕ_A^i) =
(2.98, -105^∘) and
(ρ_A^f, ϕ_A^f) =
(1.18, -40^∘) <cit.>.
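For orientation (a small numerical sketch; the value m_B ≈ 5.28 GeV is the standard B-meson mass and is our own input here, while Λ_h = 0.5 GeV and the (ρ, ϕ) values are those quoted above), the complex end-point parameters corresponding to the two cases read:

import cmath, math

m_B, Lambda_h = 5.28, 0.5                     # GeV
prefactor = math.log(m_B / Lambda_h)          # ln(m_B / Lambda_h) ~ 2.36

def X(rho, phi_deg):
    # X = ln(m_B / Lambda_h) * (1 + rho * exp(i phi))
    return prefactor * (1.0 + rho * cmath.exp(1j * math.radians(phi_deg)))

X_A_C1 = X(1.0, -55.0)                        # C1: universal X_H = X_A
X_A_i = X(2.98, -105.0)                       # C2: nonfactorizable WA, with X_H ~ X_A^i
X_A_f = X(1.18, -40.0)                        # C2: factorizable WA
for name, val in [("C1  X_A  ", X_A_C1), ("C2  X_A^i", X_A_i), ("C2  X_A^f", X_A_f)]:
    print(f"{name} = {val.real:.2f} {val.imag:+.2f}i, |X| = {abs(val):.2f}")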
§ INPUT PARAMETERS
There are many input parameters in the numerical calculations.
These parameters generally fall into two categories.
One has been well determined experimentally or theoretically
and listed explicitly in Ref. <cit.>,
such as the Fermi coupling constant G_F, Wilson coefficients,
the CKM elements, and hadron mass as well.
Their central values in Ref. <cit.>
will be regarded as the default inputs unless otherwise specified.
The other is the nonperturbative parameters, such as
the decay constants, mesonic transition form factors,
and hadronic DAs, which produce the main theoretical errors.
The choice of these parameters requires certain caution.
§.§ The CKM elements
The Wolfenstein parameterization is traditionally and commonly
used for the unitary CKM matrix,
due to the obvious power series in the
Wolfenstein parameter λ
among the CKM elements.
The values of the four Wolfenstein parameters
are <cit.>,
A = 0.790^+0.017_-0.012, λ = 0.22650 ± 0.00048, ρ̅ = 0.141^+0.016_-0.017, η̅ = 0.357 ± 0.011.
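As a side remark (a small sketch using the leading-power Wolfenstein expansion; the approximate relations V_us ≈ λ, V_ts ≈ -Aλ^2, |V_ub| ≈ Aλ^3 (ρ̅^2+η̅^2)^1/2 are standard approximations assumed here), these numbers make explicit the CKM hierarchy invoked later, |V_ub V_us^∗| = O(λ^4) versus |V_tb V_ts^∗| = O(λ^2), which is why the b → s penguin amplitudes dominate the P-class decays:

import math

A, lam, rho_bar, eta_bar = 0.790, 0.22650, 0.141, 0.357

# leading-power Wolfenstein estimates (assumed approximations, not exact values)
V_us = lam
V_ub = A * lam**3 * math.hypot(rho_bar, eta_bar)
V_ts = A * lam**2
V_tb = 1.0

tree = V_ub * V_us            # |V_ub V_us*| ~ O(lambda^4)
penguin = V_tb * V_ts         # |V_tb V_ts*| ~ O(lambda^2)
print(f"|V_ub V_us*| ~ {tree:.1e}")       # ~ 8e-4
print(f"|V_tb V_ts*| ~ {penguin:.1e}")    # ~ 4e-2
print(f"ratio        ~ {tree / penguin:.3f}")   # ~ 0.02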
§.§ The decay constants
The lattice QCD results of the isospin averages of the
B meson decay constants are <cit.>,
f_B_u,d = 190.0±1.3 MeV,
f_B_s = 230.3±1.3 MeV.
There are two kinds of definitions of the decay
constants for the scalar mesons, i.e.,
⟨ S(p) |q̅_1 γ^μ q_2 | 0 ⟩ = f_S p^μ,
⟨ S(p) |q̅_1 q_2 | 0 ⟩ = m_S f̅_S(μ)
.
The scale-dependent scalar decay constant f̅_S(μ)
and the vector decay constant f_S are related by the
equation of motion,
f_S = f̅_S(μ) μ̅_S^-1(μ)
.
Clearly, the vector decay constant f_S is
proportional to the running mass difference,
Δm, between the two valence
quarks resided in the scalar mesons.
f_S for the light scalars should be seriously
suppressed by the small Δm,
especially for the electrically neutral scalar
mesons owing to charge conjugation invariance
or conservation of vector current.
For example, f_S will vanish for the a_0^0 meson.
At the same time the scalar decay constants f̅_S
remain finite.
Here a preferable solution scheme is to use
the scalar decay constants f̅_S.
This is one of the main reasons for the factors in Eq.(<ref>).
In addition, a scalar meson and its antiparticle have
the same scalar decay constant, f̅_S =
f̅_S̅.
It then follows from Eq.(<ref>) and Eq.(<ref>) that the vector
decay constants satisfy f_S = -f_S̅,
which results in f_S = 0 for the a_0^0 meson.
Experimentally, these decay constants can be extracted from
the purely leptonic decays of the scalar mesons.
It is widely known that the scalar mesons usually appear as
resonances and decay dominantly through the strong interactions,
so the occurrence probability of the leptonic decays of the scalar
mesons should in principle be very small.
The leptonic decays of the scalar mesons have not been observed so far.
Experimental data on the decay constants of the scalar
mesons are therefore still unavailable.
The theoretical values of the scalar decay constants
f̅_S corresponding to the S1 and S2 scenarios
are listed in Table <ref>.
It is clearly seen that for the S2 scenario,
the central values of the decay constants f̅_S
obtained with the covariant light-front quark model (CLFQM)
in this paper are generally in agreement with those from
the QCD sum rules <cit.>
and light-cone sum rules <cit.>
within an error range.
Of course, the errors arising from the Gaussian parameter
β responsible for the mesonic wave functions are still
very large, due to the inadequate data and our insufficient
understanding of the scalar mesons at the moment, especially for
f̅_K_0^∗(1430) in the S2 scenario.
What's more, the values of f̅_S in the S2 scenario
are about twice as large as those in the S1 scenario,
which will inevitably lead to an obvious hierarchy between
the branching ratios obtained in these two scenarios,
because the decay amplitudes are directly proportional to
the decay constants.
A significant difference between branching ratios might therefore be used
to distinguish whether these scalar mesons are the 1P
or 2P states.
§.§ Hadronic transition form factors
The form factors of B → S transitions are
defined as <cit.>,
⟨ S(k) |q̅ γ_μ γ_5 b |B(p)⟩ =
-i [ ( P_μ
- m_B^2-m_S^2/ q^2 q_μ) U_1(q^2)
+ m_B^2-m_S^2/ q^2 q_μ U_0(q^2) ]
,
where P_μ = p_μ + k_μ and
q_μ = p_μ - k_μ.
U_0(q^2) and U_1(q^2) respectively
denote longitudinal and transverse form factors.
To regulate the singularity at the pole
q^2 = 0, the relation U_0(0) = U_1(0)
is required.
The values of U_0,1(0) can be obtained by fitting
the q^2 dependence of the form factors with
the 3-parameter formula
<cit.>,
U_i(q^2) = U_i(0) / [ 1 - a ( q^2/m_B^2 ) + b ( q^2/m_B^2 )^2 ].
The form factors obtained from CLFQM are listed in
Table <ref>. It is clearly seen that
(1)
the central values of U_0,1(0) in this work are very
close to those of Ref. <cit.>.
They are slightly larger
(smaller) than those given in Ref. <cit.>
for the S2 (S1) scenario.
The differences come mainly from the quark running mass
and the Gaussian parameter β as well.
(2)
For the S2 scenario, the SU(3) flavor symmetry among
the central values of U_0,1(0) seems to be held well.
(3)
For the B → K_0^∗(1430), a_0(1450)
transition form factors, the differences between the S1 and S2
scenarios given in Ref. <cit.> are less
obvious than those obtained in this work.
Here the ratio of U_0,1^B→K_0^∗,a_0(0)
between the S1 and S2 scenarios is approximately 2/3,
which will result in the ratio of branching ratio
proportional to the square of the form factors
is approximately 1/2.
The bigger the difference of the ratio, the easier the measurement becomes,
and the more helpful it is to distinguish whether these scalar
mesons are the 1P or 2P states from the semileptonic
B → Sℓν decays in the future experiments,
and to check the different theoretical predictions.
§.§ Mesonic light cone DAs
The definition of mesonic light cone DAs is
<cit.>,
⟨ S(p) | q_2β(z_2)
q_1α(z_1) |0 ⟩
= 1/4 f̅_S ∫_0^1 dx
e^ i (xp·z_2+x̅·z_1) {p̸ Φ_S(x)+m_S [
Φ_S^s(x)-σ_μν p^μ z^νΦ_S^σ(x)/6]
}_αβ,
where the arguments x̅ = 1 - x
and z = z_2 - z_1.
Φ_S is the twist-2 light cone DAs.
The twist-3 light cone DAs Φ_S^s,σ are
related by the equations of motion <cit.>,
ξ Φ_S^s(x)
+1/6 d Φ_S^σ(x) /d x = 0
,
where ξ = x - x̅ = 2 x - 1.
The twist-2 DAs are written as
<cit.>
Φ_S(x, μ) = 6 x x̅ { b_0^S+
∑_n=1^∞ b_n^S(μ)
C_n^3/2(ξ) },
where the Gegenbauer moments b_i^S, corresponding to the
expansion coefficients of Gegenbauer polynomials
C_i^3/2(ξ), are hadronic parameters.
The asymptotic forms of the twist-3 DAs are
respectively written as <cit.>,
Φ_S^s(x, μ) = 1
,
Φ_S^σ(x, μ) = 6 x x̅.
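To illustrate how the Gegenbauer moments shape the twist-2 DA (a minimal sketch; the sample moments below are placeholders chosen only for illustration and are not the fitted CLFQM values of Table <ref>), one can evaluate Φ_S(x) directly from the expansion and check the antisymmetry expected for an a_0-like state with only odd moments:

import numpy as np
from scipy.special import eval_gegenbauer

def phi_S(x, b):
    # twist-2 DA: Phi_S(x) = 6 x (1 - x) * [ b_0 + sum_n b_n C_n^{3/2}(2x - 1) ]
    xi = 2.0 * x - 1.0
    series = sum(b_n * eval_gegenbauer(n, 1.5, xi) for n, b_n in enumerate(b))
    return 6.0 * x * (1.0 - x) * series

x = np.linspace(0.0, 1.0, 5)
# placeholder moments for an a_0-like state: even moments (including b_0) set to zero
b_placeholder = [0.0, -0.57, 0.0, -0.42]
print(phi_S(x, b_placeholder))
# antisymmetry check: Phi_S(x) = -Phi_S(1 - x) when only odd moments are kept
print(np.allclose(phi_S(x, b_placeholder), -phi_S(1.0 - x, b_placeholder)))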
The Gegenbauer moments b_n^S in the twist-2 DAs
Φ_S are listed in Table <ref>.
Our comments are
(1)
for either the S1 or S2 scenario, the orbital angular
momentum between the two components of
the scalar mesons is L = 1.
Using the parity of Gegenbauer polynomials and the isospin symmetry,
the wave function should in principle be antisymmetric,
(-1)^L, under the exchange of the longitudinal momentum
fractions of the two valence quarks x
↔ x̅, i.e.,
the Gegenbauer moments b_n^S with even n should be zero.
This feature is clearly demonstrated for the a_0(1450)
mesons in Table <ref> and Fig. <ref> (a).
(2)
For the K_0^∗(1430) mesons, the flavor SU(3)
symmetry breaking effects should be given due consideration.
DAs for the K_0^∗(1430) mesons should be asymmetric
under x ↔ x̅, i.e.,
the Gegenbauer moments b_n^S with both even and odd n
are nonzero.
This property is properly illustrated by our results
in Table <ref> and Fig. <ref> (b).
(3)
According to the definition of hadronic matrix elements given
by Eq.(3.18) and Eq.(3.20) in Ref. <cit.>,
the Gegenbauer moments b_1^S in DAs and the decay constant
f̅_S are directly interrelated.
The positive values of b_1^S correspond to the
negative values of f̅_S listed in Table <ref>
for the S1 scenario, and vice versa for the S2 scenario.
In this sense, the positive and negative sign for b_1^S
from CLFQM in this work and QCD sum rules
in Ref. <cit.> are self-consistent.
§ NUMERICAL RESULTS AND DISCUSSIONS
In the rest frame of the B meson, the CP-averaged branching
ratio is defined as,
B = τ_B/16π p_ cm/m_B^2 {| A(B→f)|^2+
| A(B→f)|^2},
where τ_B is the B meson lifetime,
p_ cm is the common momentum of final states.
The direct CP asymmetry is defined as,
A_CP = Γ(B→f)-Γ(B→f)/Γ(B→f)+Γ(B→f).
When the final states are common to the neutral
B_d,s^0 and B_d,s^0 decays,
the CP violating asymmetry is defined as,
A_CP =
A_CP^ mix sin(x Δm t)
-A_CP^ dir cos(x Δm t)
,
A_CP^ mix = 2 Im(λ_f) / 1+|λ_f|^2,
A_CP^ dir = 1-|λ_f|^2/ 1+|λ_f|^2,
λ_f = {[ V_tb^∗ V_td/ V_tb V_td^∗ A(B_d^0→f) / A(B_d^0→f) , for the B_d^0-B_d^0 system,; ; V_tb^∗ V_ts/ V_tb V_ts^∗ A(B_s^0→f) / A(B_s^0→f) , for the B_s^0-B_s^0 system. ].
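As a bookkeeping illustration of these definitions (a minimal sketch; the amplitude values, lifetime and masses below are placeholders, and the two-body momentum formula with the Källén function is a standard kinematic input assumed here, not given in the text), the CP-averaged branching ratio, implemented following Eq.(<ref>) exactly as written, and the direct CP asymmetry follow from a pair of conjugate amplitudes as:

import math

def p_cm(m_B, m1, m2):
    # two-body momentum in the B rest frame (standard Kallen formula, assumed)
    lam = (m_B**2 - (m1 + m2)**2) * (m_B**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * m_B)

def cp_averaged_br(A, Abar, tau_B, m_B, m1, m2):
    # B = tau_B / (16 pi) * p_cm / m_B^2 * (|A|^2 + |Abar|^2), with amplitudes in GeV
    hbar = 6.582e-25                    # GeV * s, converts tau_B to GeV^-1
    prefactor = (tau_B / hbar) / (16.0 * math.pi) * p_cm(m_B, m1, m2) / m_B**2
    return prefactor * (abs(A)**2 + abs(Abar)**2)

def direct_acp(A, Abar):
    # one common sign convention; the orientation is fixed by Eq.(<ref>) in the text
    return (abs(Abar)**2 - abs(A)**2) / (abs(Abar)**2 + abs(A)**2)

# placeholder numbers, purely illustrative (not taken from the tables below)
A, Abar = 1.0e-8 + 4.0e-9j, 1.2e-8 + 3.0e-9j        # GeV
tau_B, m_B, m_S = 1.52e-12, 5.28, 1.43              # s, GeV, GeV
print(f"B    ~ {cp_averaged_br(A, Abar, tau_B, m_B, m_S, m_S):.2e}")   # O(10^-6) here
print(f"A_CP ~ {direct_acp(A, Abar):+.2f}")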
The CP-averaged branching ratios (in units of 10^-6)
for the B → SS decays,
where a_0 ≡ a_0(1450)
and K_0^∗ ≡ K_0^∗(1430).
The numbers in the “NF” columns correspond to the LO results.
The numbers in the “C1” and “C2” columns are results including
the NLO contributions. “C1” corresponds to the case
(ρ_A, ϕ_A) = (1, -55^∘)
and X_H = X_A,
and “C2” corresponds to
(ρ_A^i, ϕ_A^i) = (2.98, -105^∘),
(ρ_A^f, ϕ_A^f) = (1.18, -40^∘) and
X_H ≈ X_A^i.
The first uncertainties come mainly from the CKM elements.
The second uncertainties come from the hadronic parameters including
the decay constant, form factors, and the Gegenbauer moments in DAs.
decay modes | class | S1: NF | S1: C1 | S1: C2 | S2: NF | S2: C1 | S2: C2
B^- → a_0^- a_0^0
T
0.00^+0.00+0.00_-0.00-0.00
0.01^+0.00+0.01_-0.00-0.01
0.03^+0.00+0.02_-0.00-0.01
0.00^+0.00+0.00_-0.00-0.00
0.04^+0.00+0.03_-0.00-0.02
0.10^+0.01+0.06_-0.01-0.05
B̅^0 → a_0^+ a_0^-
T
0.04^+0.00+0.00_-0.00-0.00
0.19^+0.01+0.12_-0.01-0.09
0.37^+0.03+0.12_-0.02-0.11
0.31^+0.02+0.09_-0.01-0.08
0.55^+0.03+0.17_-0.02-0.15
0.63^+0.05+0.25_-0.04-0.21
B̅_s^0 → K_0^∗+ a_0^-
T
0.07^+0.00+0.01_-0.00-0.01
0.14^+0.01+0.09_-0.01-0.07
0.14^+0.01+0.04_-0.01-0.04
0.31^+0.02+0.09_-0.01-0.08
0.51^+0.03+0.23_-0.02-0.19
0.49^+0.03+0.30_-0.03-0.22
B^0 → a_0^0 a_0^0
C
0.04^+0.00+0.00_-0.00-0.00
0.27^+0.02+0.27_-0.01-0.16
0.63^+0.05+0.31_-0.04-0.25
0.32^+0.02+0.10_-0.01-0.08
0.83^+0.04+0.38_-0.04-0.30
1.17^+0.09+0.73_-0.08-0.53
B_s^0 → K_0^∗0 a_0^0
C
0.03^+0.00+0.01_-0.00-0.01
0.09^+0.01+0.06_-0.01-0.05
0.12^+0.01+0.04_-0.01-0.03
0.16^+0.01+0.05_-0.01-0.04
0.36^+0.02+0.18_-0.02-0.15
0.51^+0.04+0.33_-0.03-0.26
B^- → a_0^0 K_0^∗-
P
0.44^+0.02+0.05_-0.01-0.05
0.98^+0.05+0.91_-0.04-0.66
0.82^+0.04+0.26_-0.03-0.22
5.50^+0.24+4.33_-0.17-3.26
6.65^+0.30+5.26_-0.24-3.97
5.07^+0.23+4.03_-0.16-3.03
B^- → a_0^-K_0^∗0
P
0.93^+0.04+0.10_-0.03-0.10
2.05^+0.09+1.88_-0.07-1.36
1.64^+0.07+0.49_-0.05-0.42
11.69^+0.53+9.20_-0.40+6.93
14.13^+0.64+11.16_-0.50+ 8.42
10.58^+0.47+ 8.37_-0.33-6.29
B^0 → a_0^+ K_0^∗-
P
0.82^+0.04+0.09_-0.03-0.09
1.87^+0.09+1.73_-0.07-1.26
1.70^+0.08+0.56_-0.05-0.47
10.20^+0.45+8.04_-0.32-6.05
12.40^+0.56+9.81_-0.44-7.40
10.42^+0.46+8.29_-0.33-6.23
B^0 → a_0^0K_0^∗0
P
0.43^+0.02+0.05_-0.01-0.05
0.97^+0.04+0.75_-0.03-0.50
0.86^+0.04+0.27_-0.03-0.23
5.42^+0.24+4.27_-0.17-3.22
6.56^+0.30+5.18_-0.23-3.91
5.44^+0.24+4.30_-0.17-3.23
B_s^0 → K_0^∗0K_0^∗0
P
1.37^+0.06+0.25_-0.04-0.23
1.88^+0.09+1.70_-0.07-1.23
0.54^+0.02+0.15_-0.02-0.12
10.91^+0.48+8.60_-0.34-6.48
14.99^+0.68+12.14_-0.54- 9.18
5.94^+0.26+5.81_-0.19-3.77
B_s^0 → K_0^∗+ K_0^∗-
P
1.29^+0.06+0.24_-0.04-0.22
1.73^+0.08+1.57_-0.06-1.14
0.61^+0.03+0.17_-0.02-0.13
10.27^+0.45+8.10_-0.32-6.10
13.98^+0.64+11.39_-0.50- 8.61
6.65^+0.30+6.65_-0.21-4.39
B^- → K_0^∗0 K_0^∗-
P
0.03^+0.00+0.00_-0.00-0.00
0.07^+0.00+0.07_-0.00-0.05
0.07^+0.00+0.03_-0.00-0.03
0.38^+0.02+0.30_-0.02-0.23
0.30^+0.02+0.24_-0.01-0.18
0.25^+0.01+0.22_-0.01-0.16
B^0 → K_0^∗0K_0^∗0
P
0.03^+0.00+0.00_-0.00-0.00
0.06^+0.00+0.06_-0.00-0.04
0.05^+0.00+0.02_-0.00 -0.01
0.36^+0.02+0.28_-0.02-0.21
0.29^+0.02+0.23_-0.01-0.17
0.11^+0.01+0.12_-0.01-0.07
B^0 → K_0^∗+ K_0^∗-
A
0.02^+0.00+0.02_-0.00-0.01 0.11^+0.01+0.14_-0.01-0.11
0.10^+0.01+0.11_-0.01-0.09 0.56^+0.04+0.62_-0.04-0.46
B_s^0 → a_0^0 a_0^0
A
0.05^+0.00+0.06_+0.00-0.04 0.38^+0.02+0.51_-0.01-0.29
0.12^+0.01+0.07_-0.01-0.05 0.66^+0.03+0.42_-0.03-0.29
B_s^0 → a_0^+ a_0^-
A
0.10^+0.00+0.07_-0.00-0.06 0.77^+0.04+0.60_-0.03-0.48
0.23^+0.01+0.09_-0.01-0.08 1.32^+0.06+0.53_-0.05-0.46
The numerical results on the CP-averaged branching ratios and
CP asymmetries for the B → SS decays are listed in
Table <ref> and <ref>.
Here we use the symbols T for the color favored tree processes,
C for the color-suppressed tree processes, P for the penguin
dominated processes, and A for the pure annihilation processes.
Our comments are as follows.
(1)
As we have discussed earlier, the LO contributions are suppressed
by the factor N_i(S_2) in Eq.(<ref>), then the NLO
contributions will be very important for the B → SS
decays.
It is clearly shown in Table <ref> that for both
the S1 and S2 scenarios,
the NLO contributions to branching ratios are generally
significant, even changing the LO results
(the numbers in the “NF” columns) by factors of a few
for some processes, such as the C-class B decays, where
the NLO contributions are proportional to the large
Wilson coefficient C_1.
(2)
The hard spectator scattering amplitudes
H_i(S_1 S_2) belong to the nonfactorizable
NLO contributions in Eq.(<ref>) with the QCDF approach.
So, the NLO contributions should be sensitive to the
parameter X_H.
And the parameter X_H is closely related to the
WA parameter X_A in this work.
In Table <ref>,
the differences of branching ratios between the C1
and C2 cases are still obvious, for example, for the
B → a_0a_0 and B_s →
K_0^∗K_0^∗ decays, and
the A-class decays as well.
Additionally, the parameters X_H,A are always accompanied
by the Gegenbauer moments in Eq.(<ref>)
and Eqs.(<ref>—<ref>).
The smaller uncertainties of the Gegenbauer moments bring branching
ratios with the smaller theoretical uncertainties, compared
with those in Refs. <cit.>.
(3)
As it is well known that the T-class B decays are
induced by the external W boson emission interactions with
the factorization approach, and their amplitudes are
proportional to the large Wilson coefficient C_1 or α_1.
These processes should theoretically have a relatively
large branching ratio.
It might be a little curious in Table <ref> that branching
ratios for the T-class B → SS decays are very small,
∼ O(10^-7).
Some are even less than the branching ratios of the purely
WA decays.
One of the main reasons is that the LO contributions of the
T-class decay amplitudes are seriously suppressed by the factor
N_i(S_2) in Eq.(<ref>), with N_i(a_0)
∼ 0.002, and their NLO contributions are suppressed
by both the factor α_s/N_c and the small coefficient
C_2.
(4)
It is obvious in Table <ref> that
for both the C1 and C2 cases, branching ratios of the S2
scenario are larger than the corresponding ones of the
S1 scenario, because the decay
amplitudes are proportional to the product of the decay
constants of the scalar mesons and form factors, and the
numerical values of both the decay constants of the scalar mesons
(see Table <ref>) and form factors (see Table <ref>)
of the S2 scenario are larger than the corresponding ones
of the S1 scenario.
Specifically, the P-class B → a_0K_0^∗
and B_s → K_0^∗K_0^∗
decays where the penguin contributions are largely enhanced by
the CKM elements V_tbV_ts^∗ ∼ O(λ^2)
relative to the possible tree contributions associated with
V_ubV_us^∗ ∼ O(λ^4),
their branching ratios can reach up to
even O(10^-5) for the S2 scenario.
These flagship decay modes should get priority in the future experimental
research program for searching for the B → SS decays.
(5)
Experimentally, more than ten years ago,
a hint of the B^0 →
K_0^∗0K_0^∗0 decays
with a significance of 0.8 σ
has been reported by the Belle Collaboration
with the K^+K^-π^+π^- final states
<cit.>, with a branching ratio B =
(3.21^+2.89+2.31_-2.85-2.32)×10^-6 and an
upper limit at the 90% confidence level of B
< 8.4×10^-6.
Our results are marginally consistent with data when
considering the large experimental errors.
Theoretically, besides the small Wilson coefficients and
the CKM elements V_tbV_td^∗
∼ O(λ^3),
the relatively smaller branching ratios
might arise from the Gegenbauer moments, which result in
a flatter shape of the scalar mesonic DAs,
further lead to a milder overlap among the
participating mesonic DAs, and finally give
more modest decay amplitudes.
Experimentally, it is entirely necessary and desirable to
improve the accuracy of measurements and investigate more
and more B → SS decays in the future in
order to verify various theoretical models and explore the
properties the scalar mesons.
(6)
The weak annihilation amplitudes are thought to be power suppressed
with the QCDF approach <cit.>.
The purely WA B decays should in principle have
very small branching ratios.
The evidences have been demonstrated in the B →
K^±K^∗∓ and B_s →
ππ decays theoretically
<cit.>
and experimentally <cit.>.
Similar phenomena and patterns also appear in
Table <ref> for the A-class B → SS
decays, with branching ratios of order O(10^-7).
The impressive and amazing thing is that in some cases,
branching ratios of the A-class B → SS decays
with appropriate parameters can catch up with or
overtake those of the T-class decays, which is very unlike
the hadronic B → PP, PV decays.
These typical characteristics for the B → SS decays
are closely related with the properties of the scalar mesons,
such as the decay constants, DAs and so on.
Additionally, branching ratios of the A-class
B → SS decays are very sensitive to the
parameter X_A, with both the S1 and S2 scenarios.
It is clear that with the topologically dependent parameters,
i.e., X_A^i ≠ X_A^f for the C2 case,
the corresponding branching ratios are relatively larger,
due to the larger value of ρ_A^i.
Our understanding of the WA contributions to the nonleptonic
B decays with the QCDF approach is not comprehensive enough.
Albeit very challenging, the experimental measurements on the
A-class B → SS decays are interesting and helpful
to explore the underlying dynamical mechanism and the higher
power corrections to HMEs.
(7)
In Table <ref>,
the theoretical uncertainties are very large, especially
those from the hadronic parameters.
Usually, ratios of branching ratios are defined to reduce
theoretical uncertainties on the one hand, and on the other hand
to check some potential symmetries or conserved quantities,
for example, the observables R_K,D that test the universality of
the electroweak couplings to all charged leptons.
Here, we give some ratios of branching ratios with the
universal parameter X_A for the C1 case, for example,
R_1 = B( B^-→ a_0^-K_0^∗0 ) / 2 B( B^-→ a_0^0 K_0^∗- ) ≈ 1.04^+0.00+0.16_-0.00-0.14 (S1), 1.06^+0.00+0.11_-0.00-0.11 (S2);
R_2 = B( B^0→ a_0^+ K_0^∗- ) / 2 B( B^0→ a_0^0K_0^∗0 ) ≈ 0.96^+0.00+0.15_-0.00-0.14 (S1), 0.94^+0.00+0.11_-0.00-0.10 (S2);
R_3 = B( B_s^0→ K_0^∗0K_0^∗0 ) / B( B_s^0→ K_0^∗+ K_0^∗- ) ≈ 1.08^+0.00+0.02_-0.01-0.02 (S1), 1.07^+0.00+0.01_-0.00-0.01 (S2).
All these ratios are expected to be R_1,2,3 = 1 by
applying the SU(3) flavor symmetry.
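These ratios can also be read off directly from the central values in Table <ref> (a trivial arithmetic sketch using the S1, C1 column; small differences with respect to the values quoted above come from the rounding of the tabulated central values):

# central values (S1 scenario, case C1) from the branching-ratio table, in units of 10^-6
br = {
    "B- -> a0- K0*0":  2.05,
    "B- -> a00 K0*-":  0.98,
    "B0 -> a0+ K0*-":  1.87,
    "B0 -> a00 K0*0":  0.97,
    "Bs -> K0*0 K0*0": 1.88,
    "Bs -> K0*+ K0*-": 1.73,
}
R1 = br["B- -> a0- K0*0"] / (2.0 * br["B- -> a00 K0*-"])
R2 = br["B0 -> a0+ K0*-"] / (2.0 * br["B0 -> a00 K0*0"])
R3 = br["Bs -> K0*0 K0*0"] / br["Bs -> K0*+ K0*-"]
print(f"R1 = {R1:.2f}, R2 = {R2:.2f}, R3 = {R3:.2f}")   # ~ 1.05, 0.96, 1.09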
(8)
It is clear in Table <ref> that the CP violating
asymmetries depend on the parameter X_A which contains
the strong phases.
It is known that with the QCDF approach, the strong phases
necessary for the direct CP violation arise from the NLO
contributions, which are the order α_s or
Λ_ QCD/m_b and suppressed compared
with the LO contributions.
However, as noted earlier, the LO contributions are seriously
suppressed by the factor N_i(S_2) in Eq.(<ref>),
which will indirectly result in the larger strong phases from
the NLO contributions.
These effects will have more influences on the direct CP
violating asymmetries for the T- and C-class B → SS
decays than the P-class ones, because of the larger Wilson
coefficients for the T- and C-class decays.
Larger direct CP asymmetries for the B_d →
a_0a_0 and B_s → a_0K_0^∗ decays
are expected for both the S1 and S2 scenarios.
And by comparison, the absolute values of the direct CP violating
asymmetries in the T- and C-class B → SS decays are
generally larger than those in the corresponding B → PP,
PV decays <cit.>.
In addition, in analogy with the so-called πK puzzle,
the difference between the direct CP asymmetries for the
B^- → a_0^-K_0^∗0 and
B^0 → a_0^+ K_0^∗- decays
is estimated to be,
ΔA_CP =
A_CP(B^-→ a_0^-K_0^∗0)
-A_CP(B^0→ a_0^+ K_0^∗-)
=
(5.95^+0.18+1.69_-0.18-1.36)% (S1), (5.48^+0.17+1.21_-0.17-0.90)% (S2),
with the universal parameter X_A = X_H for the C1 case,
and
ΔA_CP =
(3.71^+0.11+2.46_-0.11-1.95)% (S1), (5.05^+0.15+1.66_-0.15-1.40)% (S2),
with the topologically dependent parameters X_A^i and
X_A^f for the C2 case.
Unfortunately, no data are available on the CP asymmetries
for the B → SS decays at the moment.
§ SUMMARY
To meet the coming high precision measurements on the B meson
decays based on the huge amount of data,
and provide a ready and helpful reference in
clarifying the open questions related to the scalar mesons,
the hadronic charmless B → SS decays are studied
with the QCDF approach, where the symbol S denotes the
scalar mesons K_0^∗(1430) and a_0(1450).
It is found that the LO contributions are proportional to
the mass difference of the two valence quarks embedded in
the scalar mesons, and are thereby seriously suppressed.
This causes two consequences.
(1)
The branching ratios for the B → a_0a_0
and B_s → a_0K_0^∗ decays belonging
to the T- and C-class are very small, about the order
of O(10^-7).
(2)
The NLO contributions become necessary and predominant
for the B → SS decays.
With the updated values of hadronic parameters obtained from
CLFQM, including the transition form factors, the decay
constants and Gegenbauer moments in mesonic DAs for the
two scenarios where the scalar mesons in question are
the 1P and 2P triplet states,
the CP-averaged branching ratios and CP violating
asymmetries are given with the universal end-point parameters
X_A and the topology-dependent parameters X_A^i and
X_A^f.
The numerical results show that
(1)
theoretical uncertainties of both the branching ratios and the
direct CP asymmetries come mainly from hadronic parameters.
(2)
Branching ratios for the B_s →
K_0^∗K_0^∗ decays and
the purely weak annihilation decays B_s → a_0a_0
and B_d → K_0^∗+K_0^∗-,
and the direct CP asymmetries for the B_d →
a_0a_0 decays are very sensitive to the parameter X_A.
(3)
For the B → a_0K_0^∗ and
B_s → K_0^∗K_0^∗ decays,
branching ratios for the S2 scenario are about one order of
magnitude larger than those for the S1 scenario,
and can reach up to the order of O(10^-5).
These decays should first be searched and investigated
experimentally.
(4)
Theoretical uncertainties come mainly from the hadronic
parameters.
More focus and effort are needed to improve the theoretical
calculation precision.
Some ratios of branching ratios are given based on
the SU(3) flavor symmetry.
In addition, there is too little available data to draw any
conclusions on whether the scalar mesons K_0^∗(1430)
and a_0(1450) are the 1P or 2P states.
We hope that more and more B → SS decays can be
measured with higher and higher precision at the
high-luminosity colliders in the future.
§ ACKNOWLEDGEMENTS
This work is supported by the National Natural Science
Foundation of China (Grant Nos. 12275067, 12275068, 12135006, 12105078),
Natural Science Foundation of Henan Province (Grant No. 222300420479), and
Excellent Youth Foundation of Henan Province (Grant No. 212300410010).
§ THE DECAY AMPLITUDES FOR THE B → SS DECAYS
Here, the following shorthand is used to simplify the decay amplitudes:
λ_q (⋯) = ∑_p=u,c V_pb V^∗_pq (⋯)
√(2) A( B^-→ a_0^- a^0_0 )
= λ_d {
A_ a_0^- a^0_0 [
δ_u^p α_2 - α_4^p
+ 3/2 α_3,EW^p
+ 1/2 α_4,EW^p
-δ_u^p β_2
-β_3^p -β_3,EW^p]
+ A_ a_0^0 a_0^- [
δ_u^p α_1 + α_4^p
+α_4,EW^p + δ_u^p β_2
+β_3^p + β_3,EW^p] },
A( B^-→ a_0^-K_0^∗0 )
= λ_s A_ a_0 K_0^∗ [
α_4^p -1/2 α_4,EW^p
+δ_u^p β_2 +β_3^p
+β_3,EW^p]
,
√(2) A( B^-→ a_0^0 K_0^∗- )
= λ_s {
A_ a_0 K_0^∗ [
δ_u^p α_1+ α_4^p
+α_4,EW^p + δ_u^p β_2
+β_3^p +β_3,EW^p]
+ A_ K_0^∗ a_0 [
δ_u^p α_2
+3/2 α_4,EW^p] },
A( B^-→ K_0^∗- K_0^∗0 )
= λ_d
A_ K_0^∗- K_0^∗0 [
α_4^p -1/2 α_4,EW^p
+δ_u^p β_2
+β_3^p +β_3,EW^p]
,
A( B^0→ a_0^+ a_0^- )
= λ_d {
A_ a_0^+ a_0^- [
δ_u^p α_1+α_4^p
+α_4,EW^p+β_3^p +β_4^p
- 1/2 β_3,EW^p
- 1/2 β_4,EW^p]
+ A_ a_0^- a_0^+ [
δ_u^p β_1
+β_4^p+β_4,EW^p] },
A( B^0→ a_0^0 a_0^0 )
= -λ_d
A_ a_0 a_0 [
δ_u^p α_2 - α_4^p
+ 3/2 α_3^p
+ 1/2 α_4,EW^p
-δ_u^p β_1-β_3^p
-2 β_4^p+ 1/2 β_3,EW^p
- 1/2 β_4,EW^p]
,
A( B^0→ a_0^+ K_0^∗- )
= λ_s
A_ a_0 K_0^∗ [
δ_u^p α_1 + α_4^p
+α_4,EW^p+ β_3^p
- 1/2 β_3,EW^p]
,
A( B^0→ a_0^0K_0^∗0 )
= λ_s {
A_ a_0 K̅_0^∗ [
-α_4^p + 1/2 α_4,EW^p
-β_3^p + 1/2 β_3,EW^p]
+ A_K̅_0^∗ a_0 [
δ_u^p α_2
+ 3/2 α_3,EW^p] },
A( B^0→ K_0^∗+ K_0^∗- )
= λ_d {
A_ K_0^∗- K_0^∗+ [
δ_u^p β_1 + β_4^p
+β_4,EW^p]
+ B_ K_0^∗+ K_0^∗- [
b_4^p- 1/2 b_4,EW^p] },
A( B^0→ K_0^∗0K_0^∗0 )
= λ_d {
A_K_0^∗ K_0^∗ [
α_4^p - 1/2 α_4,EW^p
+β_3^p+β_4^p
- 1/2 β_3,EW^p
- 1/2 β_4,EW^p]
+ B_ K_0^∗ K_0^∗ [
b_4^p- 1/2 b_4,EW^p] },
A( B_s^0→ a_0^+ a_0^- )
= λ_s {
B_ a_0^+ a_0^- [
b_4^p- 1/2 b_4,EW^p]
+ B_ a_0^- a_0^+ [
δ_u^p b_1
+ b_4^p + b_4,EW^p] },
A( B_s^0→ a_0^0 a_0^0 )
= λ_s B_ a_0 a_0 [
δ_u^p b_1 + 2 b_4^p
+ 1/2 b_4,EW^p]
,
A( B_s^0→ K_0^∗+ a_0^- )
= λ_d
A_ K_0^∗ a_0 [
δ_u^p α_1 + α_4^p
+α_4,EW^p + β_3^p
- 1/2 β_3,EW^p]
,
√(2) A( B_s^0→ K_0^∗0 a_0^0 )
= λ_d
A_ K_0^∗ a_0 [
δ_u^p α_2 - α_4^p
+ 3/2 α_3,EW^p
+ 1/2 α_4,EW^p
-β_3^p
+ 1/2 β_3,EW^p]
,
A( B_s^0→ K_0^∗0K_0^∗0 )
= λ_s {
A_ K_0^∗ K̅_0^∗ [
α_4^p - 1/2 α_4,EW^p
+β_3^p+β_4^p
- 1/2 β_3,EW^p
- 1/2 β_4,EW^p
+ B_K̅_0^∗ K_0^∗ [
b_4^p - 1/2 b_4,EW^p] },
A( B_s^0→ K_0^∗+ K_0^∗- )
= λ_s {
A_ K_0^∗ K̅_0^∗ [
δ_u^p α_1
+α_4^p + α_4,EW^p
+β_3^p+β_4^p
- 1/2 β_3,EW^p
- 1/2 β_4,EW^p
+ B_K̅_0^∗ K_0^∗ [
δ_u^p b_1
+ b_4^p + b_4,EW^p] }.
| http://arxiv.org/abs/2306.04581v2 | 20230607163352 | Divide and Repair: Using Options to Improve Performance of Imitation Learning Against Adversarial Demonstrations | ["Prithviraj Dasgupta"] | cs.LG | ["cs.LG", "cs.AI", "cs.CR", "I.2.3"] |
Divide and Repair: Using Options to Improve Performance of Imitation Learning Against Adversarial Demonstrations
Prithviraj Dasgupta
Distributed Intelligent Systems Section
Information Technology Division
Naval Research Laboratory, Washington, D. C., USA
E-mail: [email protected]
June 2023
=========================================================================================================================================================================================
Abstract
We consider the problem of learning to perform a task from demonstrations given by teachers or experts, when some of the experts' demonstrations might be adversarial and demonstrate an incorrect way to perform the task. We propose a novel technique that can identify parts of demonstrated trajectories that have not been significantly modified by the adversary and utilize them for learning, using temporally extended policies or options. We first define a trajectory divergence measure based on the spatial and temporal features of demonstrated trajectories to detect and discard parts of the trajectories that have been significantly modified by an adversarial expert and could degrade the learner's performance if used for learning. We then use an options-based algorithm that partitions trajectories and learns only from the parts of trajectories that have been determined to be admissible. We provide theoretical results showing that repairing partial trajectories improves the sample efficiency of the demonstrations without degrading the learner's performance. We then evaluate the proposed algorithm for learning to play an Atari-like computer-based game called LunarLander in the presence of different types and degrees of adversarial attacks on demonstrated trajectories. Our experimental results show that our technique can identify adversarially modified parts of the demonstrated trajectories and successfully prevent the learning performance from degrading due to adversarial demonstrations.
§ INTRODUCTION
Learning from demonstrations is a widely-used form of machine learning where a teacher or expert provides demonstrations of how to perform the learning task to speed up the learning process <cit.> in the context of reinforcement learning <cit.>. It has been used in many successful applications of machine learning algorithms including autonomous driving <cit.>, robotic manipulation <cit.>, and human-robot interaction <cit.>. Conventionally, the experts demonstrating the task are assumed to be benign and show the correct way of performing the task. However, as machine learning-based autonomous systems become more pervasive, they are exposed to demonstrations from a variety of sources. Some of these demonstrations might be from adversarial experts that give incorrect demonstrations with the intention of making the autonomous system behave in incorrect and unintended ways. To address this problem, researchers have developed techniques for learning reliably in the presence of adversarial expert demonstrations <cit.>. The main idea in most of these techniques is to use an eligibility metric, such as a confidence measure, on trajectories or temporal sequences of state-action pairs representing expert demonstrations, followed by accepting or rejecting the trajectories based on that metric. These techniques work with full or end-to-end (initial state to final state) trajectories, that is, the eligibility metric is calculated for the full trajectory, and, if found ineligible, the full trajectory is discarded. In this paper, we posit that even though the full trajectory might cause the learning task to fail, there could be parts of the trajectory that were benign, possibly show a new way of performing a part of the task, and could benefit the learning process. This insight is based on the observation that many adversarial attacks on machine learning algorithms are composed by modifying the input (e.g., training data examples for supervised learning <cit.> or demonstrated trajectories for reinforcement learning <cit.>) only at certain, strategic features or locations, instead of all across the input. To address the problem of adversarial learning from demonstrated trajectories while retaining usable parts of the trajectories, we propose a novel technique using temporally extended policies or options <cit.>. Our technique consists of two steps: first, we develop a divergence measure that can indicate the degree of deviation in expert demonstrations with respect to a small set of demonstrations that are guaranteed to be benign. We then use options to partition demonstrated trajectories and use the divergence measure to selectively accept or discard parts of demonstrated trajectories. We have provided theoretical analyses to show that our proposed technique of accepting only non-adversarial portions of trajectories for learning can prevent degrading of the learner's performance. We have also validated the technique using different types and degrees of attacks made by an adversary while learning to play an Atari-like game called LunarLander using a form of learning from demonstrations called imitation learning. Our results show that our proposed technique can be used to identify and learn only from acceptable parts of demonstrated trajectories to improve the rewards from imitation learning in the presence of adversarial demonstrations. 
To the best of our knowledge, our work is one of the first attempts at integrating a trajectory divergence measure with options to address the problem of adversarial learning from demonstrations.
The rest of this paper is structured as follows: in the next section, we provide an overview of relevant literature in adversarial reinforcement learning focusing on imitation learning. We then introduce the mathematical framework for the problem, measures for characterizing demonstrations given in the form of trajectories, and our option-based algorithm for partitioning and using acceptable parts of demonstrated trajectories for imitation learning. Sections <ref> and <ref> provide the theoretical and experimental evaluation results of our proposed techniques, respectively, and, finally, we conclude. A preliminary version of this research is in <cit.>. In this paper, we have thoroughly rewritten the paper, formalized the mathematical framework, proposed new algorithms, and added new theoretical and experimental results.
§ RELATED WORK
Adversarial learning has gained prominence over the past decade as an essential means to guarantee desired behavior of machine learning-based systems deployed in the real world. Here we discuss relevant literature on adversarial reinforcement learning (RL); a comprehensive survey of adversarial supervised learning is in <cit.>.
Early researchers considered adversarial RL in the context of an RL agent learning suitable actions to play a competitive game like keep-away soccer against a player called an adversary <cit.>, where the adversary's intent was to defeat the RL agent, albeit via fair play instead of using malicious tactics such as incorrect actions to misguide the RL agent. Subsequently, researchers proposed techniques where the expert demonstrator modifies the trajectory it demonstrates either indirectly or directly. In the former direction, researchers have considered including a risk term representing the demonstrator's possible deviations from optimal trajectories inside the Q-value function used by an RL agent to determine its policy <cit.>. In the latter direction, Mandlekar et al. <cit.> proposed a technique where the demonstrator directly modifies a valid trajectory using a perturbation technique like fast gradient sign method (FGSM) <cit.> to create adversarial trajectories that are then demonstrated to the RL agent. The RL agent trains with both clean and adversarial demonstrations so that the learned policy can perform effectively even in the presence of adversarial demonstrations. Our work in this paper is complementary to this research and investigates options as a means to improve the rewards received by an RL agent in the presence of adversarial trajectory demonstrations.
Recently, authors <cit.>, <cit.> have also investigated adversarial RL as a competitive zero-sum game where an adversarial demonstrator and an RL agent interact with each other but the learning objectives of the demonstrator and the RL agent are known to be directly contradictory to each other. Experimental results with simulated demonstrations of body movements on robotic figures showed that the demonstrator could successfully use its calculated policies to determine actions that misguided the RL agent to learn incorrect actions and lose stability instead of learning its intended task like walking or kicking a ball. In contrast to these scenarios where the demonstrator explicitly reveals its motive to make the RL agent fail by selecting incorrect states and actions, our research considers a more practical scenario where the demonstrator tries to stealthily modify some demonstrations that could make the RL agent fail, without revealing the demonstrator's adversarial motives to the RL agent.
Recently, similar to some of the findings in our paper, authors <cit.> have demonstrated that adversarial attacks on reinforcement learning result in reduced rewards to the learning agent in simulated and physical robotic mobility tasks.
Another direction on adversarial RL integrated the techniques of inverse reinforcement learning <cit.> and generative adversarial networks <cit.> in the generative adversarial imitation learning (GAIL) framework <cit.> where trajectories are generated both by an expert using the expert policy, and, by a generator using the policy being learned. A discriminator evaluates the source of these trajectories and the learned policy is deemed as converged when the discriminator is unable to distinguish whether the trajectory was generated by the expert versus the generator. GAIL has also been extended to state-only observations with minimum demonstrations using sparse action guided regularization <cit.>, and, to generative adversarial imitation from observation (GAIfO) that uses a cost function that depends on state observations only <cit.>. The main difference between our work and GAIL is that, whereas in GAIL the adversary or generator's objective is to update the policy being learned for faster convergence to an optimal policy, our work considers that the adversary's objective is to demonstrate incorrect trajectories to misguide the learner that is learning the policy.
Our research is closely related to techniques for imitation learning with imperfect expert demonstrations. In these techniques, a rank or confidence score for each trajectory is provided either as input or via learning. This score is then used to update the trajectory's rewards and selectively include the trajectory in the training during learning. In <cit.> trajectories are associated with a score or rank that is provided as input or self-generated and used to revise the rewards of sub-optimal trajectories using inverse reinforcement learning. In <cit.>, authors proposed techniques called 2IWIL and ICGAIL that use semi-supervised inverse reinforcement learning techniques to calculate a confidence score for unlabeled trajectories while using a small set of confidence score-labeled trajectories. An inverse dynamics function is learned in <cit.> to calculate a transformed trajectory from each expert trajectory followed by using a distance measure between the expert and transformed trajectory to determine a feasibility score for the expert trajectory. In <cit.>, anomaly detection between trajectories to boost or penalize the rewards associated with a trajectory is proposed. Independent of imitation learning, trajectory classification and trajectory anomaly detection techniques <cit.> have been proposed in literature to determine if the path followed by a vehicle to travel between two locations conforms to the usual set of travel routes between those locations. In this paper, instead of determining confidences or recalculating rewards for demonstrated trajectories, we first partition demonstrated trajectories and then repair or discard trajectory parts based on a metric calculated from spatial and temporal features of trajectories. Some of the aforementioned techniques such as confidence measures, feasibility scores and trajectory anomaly metrics could also be used in conjunction with our technique to make the decision to repair or discard demonstrated trajectories.
Options or hierarchically abstract policies have been proposed as a framework to improve the planning quality and computation time of policies <cit.>. Recently, the option-critic architecture <cit.> has generalized the problem of determining options for a task using two components called the option and critic that work in tandem with each other. The option component evaluates the options based on current parameters while the critic component updates the parameters of the policy underlying options by calculating value and objective functions. In <cit.>, authors have proposed methods to automatically calculate options from data without using human-specified parameters and option-related information. Our work in this paper applies the framework of options to address issues in adversarial reinforcement learning.
§ IMITATION LEARNING WITH ADVERSARIAL EXPERTS
Preliminaries. We formalize the reinforcement learning framework using a Markov Decision Process (MDP) given by (S, A, T, R, γ) where S denotes the set of states and A denotes the set of actions for the learning agent, T denotes a state to action transition function specifying the forward dynamics model of the environment, where T(s, a, s') is the probability of the agent reaching state s' when it takes action a at state s, R: S × A → denotes a reward function that gives a reward received by the agent by taking action a at state s, and γ is a discount factor. A policy π: S → [0,1]^|A| is a state to action mapping that prescribes a probability distribution P(A) over the action set. The objective of the RL algorithm is to determine an optimal policy that maximizes the expected rewards, that is, π^* = max_π𝔼(∑_t=0^∞γ^t R(s_t, a_t)). Let P^* = P(s|π^*) = P(s, a^*=π^*) denote the probability distribution of state-action pairs while following the optimal policy π^*. In imitation learning, human experts provide demonstrations in the form of state-action sequences called trajectories that represent the policy. The i-th trajectory is denoted by τ_i^π = (s_i,k,a_i,k)_k=0^H, where π is the policy used to generate the actions in τ_i and H denotes an episode's horizon or the average length of a trajectory. For the sake of legibility, in the rest of the paper, we use a_i,k = π(s_i,k) as a shorthand for a_i,k = max_a π(s_i,k).[Usually the expert demonstrates actions, a_i,k, only and the states are given by the agent's forward dynamics model T(s_i,k, a_i,k, s'_i,k)] Let π_θ denote the policy learned using imitation learning where θ is a policy parameter (e.g., a set of weights in a policy network). The objective of imitation learning is to determine the optimal policy by finding an optimal policy parameter θ^* that minimizes the expected loss between the actions from the optimal policy provided via the expert demonstrations and the actions as per the learned policy π_θ, that is, θ^* = min_θ𝔼_(s_i,k, a^*_i,k) ∼ P^* L(a^*_i,k, π_θ(s_i,k)). It is assumed that the expert performs its actions following the optimal policy, so a^*_i,k = π^*(s_i,k), and, consequently, the state-actions pairs in the expert trajectories conform to P^*, that is, ∀ i, k, (s_i,k, a^*_i,k) ∼ P^*. The value of policy π is given by V^π = 𝔼[∑_i=0^H R(s_i, a_i): a_i = π(s_i)].
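To make the imitation-learning objective above concrete, the following is a minimal behavior-cloning sketch in PyTorch (the experiments reported later use the TensorFlow/stable-baselines stack, so the library choice, network width, learning rate, and random stand-in data here are purely illustrative assumptions, not our actual configuration):

```python
# Minimal behavior-cloning sketch: fit pi_theta to expert (state, action) pairs by
# minimizing a cross-entropy surrogate for L(a*, pi_theta(s)).
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, state_dim=8, n_actions=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))       # logits over the discrete actions

    def forward(self, s):
        return self.net(s)

def behavior_cloning(states, actions, epochs=100, lr=1e-3):
    """states: (N, state_dim) float tensor; actions: (N,) long tensor of expert actions."""
    policy = PolicyNet(state_dim=states.shape[1])
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(states), actions)
        loss.backward()
        opt.step()
    return policy

# Random stand-in data; replace with expert demonstrations (s_{i,k}, a*_{i,k}).
demo_s = torch.randn(256, 8)
demo_a = torch.randint(0, 4, (256,))
pi_theta = behavior_cloning(demo_s, demo_a)
```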
For our problem setting, we consider a mix of benign and adversarial experts. Benign experts provide clean trajectories to the learner that follow the optimal policy and demonstrate the correct way to perform the task. We denote 𝕋_clean as the clean trajectory set, π_clean as the policy learned via imitation learning from clean trajectories and τ_clean as a trajectory generated while using policy π_clean. An adversarial expert, on the other hand, demonstrates adversarial trajectories that are constructed by modifying clean trajectories, and, consequently, do not conform to the optimal or clean policy. The adversarial trajectory set, adversarial policy and an adversarial trajectory are denoted by 𝕋_adv, π_adv and, τ_adv respectively. By definition of π_adv not being an optimal policy, it yields lower value than π_clean, that is, V^π_adv/V^π_clean < 1.
For our problem, we denote a trajectory set as 𝕋 = (𝕋_clean∪𝕋_adv, η, {γ_i}), where η∈ [0, 1] denotes the fraction of trajectories that have been modified and γ_i∈[0, 1] denotes the fraction within the i-th trajectory that has been modified. Mathematically, η = |𝕋_adv|/|𝕋_clean∪𝕋_adv| and γ_i = |{k : (s_i,k, a_i,k) ∈τ_i ∧ a_i,k≠π_clean(s_i,k)}| / |τ_i|. The values of 𝕋_clean∪𝕋_adv, η and γ_i are known to the adversarial expert while the learning agent only knows 𝕋_clean∪𝕋_adv.
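For concreteness, a short sketch of how these perturbation-strength parameters can be computed (from the adversary's side, which knows the partition into clean and modified trajectories); the list-of-(state, action)-pairs trajectory representation and the pi_clean callable are illustrative assumptions:

```python
# Perturbation-strength parameters, computed from the adversary's point of view
# (the adversary knows which trajectories it modified; the learner does not).
# A trajectory is a list of (state, action) pairs; pi_clean maps a state to an action.

def gamma_i(trajectory, pi_clean):
    """Fraction of steps in one trajectory whose action deviates from pi_clean."""
    deviating = sum(1 for s, a in trajectory if a != pi_clean(s))
    return deviating / len(trajectory)

def eta(trajs_clean, trajs_adv):
    """Fraction of the full demonstration set that has been modified."""
    return len(trajs_adv) / (len(trajs_clean) + len(trajs_adv))
```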
We have divided our proposed technique into two parts. First, we describe the options framework to learn policies for sub-tasks from partial trajectories. Then, we develop a trajectory divergence measure between a demonstrated trajectory and known or clean trajectories that can be used to decide whether to accept or reject the demonstrated trajectory parts.
§.§ Policy Repair Using Options
We propose an options-based framework for policy repair where, instead of learning a policy over the entire state-action space, the state-action space is partitioned into subsets and a policy is learned for each part. Without loss of generality, we assume that the partition is done temporally: a trajectory τ is partitioned into M equal parts, and the i-th part (i=0, ..., M-1) is denoted by τ_i. Intuitively, this partition corresponds to dividing the end-to-end or full-horizon task into subtasks. The main idea in options is to learn a policy for each sub-task. Formally, an option for the i-th part is defined as ω_i = (I_i, π_i, β_i), where I_i is the set of initiation or start states for sub-task i, π_i is the optimal policy for solving sub-task i, and β_i is the set of termination or end states for sub-task i. As before, π_θ_i^* is learned via imitation learning and given by θ_i^* = min_θ_i𝔼_s, a^* ∼ P_i^* L(a^*, π_θ_i(s)), where P_i^*(s) = P(s|π_i^*).
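As a lightweight illustration of this structure, an option can be represented by a simple container such as the sketch below; the concrete field types are assumptions, and a real implementation may store the initiation and termination sets differently:

```python
# A lightweight container for an option omega_i = (I_i, pi_i, beta_i).
from dataclasses import dataclass
from typing import Callable, List, Tuple

State = Tuple[float, ...]

@dataclass
class Option:
    initiation: List[State]            # I_i: states where the sub-task may start
    policy: Callable[[State], int]     # pi_i: sub-task policy learned by imitation
    termination: List[State]           # beta_i: states where the sub-task ends
```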
§.§.§ Using Trajectory Divergence to Accept/Reject Trajectories
A core aspect of our options-based policy repair technique is to be able to determine the divergence between an unknown (whether it is benign or adversarial) demonstrated trajectory and a clean trajectory, that is, one that is guaranteed to be non-adversarial. This divergence measure can then be used to decide whether to accept or reject the demonstrated trajectory. However, a straightforward approach of making the trajectory accept/reject decision based on a single metric-based divergence measure might not work. For instance, an adversarial expert might demonstrate trajectories that have low divergence with clean trajectories, but inject a few incorrect moves or actions at key states in the trajectories that result in the agent either failing to do the task or doing it sub-optimally.
Again, a demonstrated trajectory might represent a previously unseen but correct and possibly improved way of doing the task. This trajectory would have a higher divergence measure with known, clean trajectories and if the accept/reject decision is based on the divergence measure only, it would end up getting an incorrect, reject decision. To address these challenges, we propose a divergence measure that combines two commonly used trajectory divergence measures with a supervised learning-based classification technique, as described below.
Occupancy Measure (OC). The first trajectory divergence measure we use is the occupancy measure <cit.>. It represents the number of times state-action pairs along a given expert trajectory are visited while using the (clean) policy. The occupancy measure of a demonstrated trajectory τ = ((s_0, a_0) (s_1, a_1), …, (s_|τ|, a_|τ|)) with respect to a clean trajectory τ_clean generated while following the clean policy π_clean, is given by :
OC_τ = ∑_(s_i, a_i) ∈τ_cleanπ^*(a_i|s_i) ∑_t=0^|τ|γ^t p(s_t=s_i | π_clean),
where γ∈ [0, 1] is a discount factor. Clearly, OC_τ has higher values when the demonstrated trajectory, τ, is closer or similar to the clean trajectory, τ_clean. The minimum value of OC_τ = 0 happens when there is no overlap between the state-action pairs of the two trajectories. The occupancy measure is a suitable metric for making the accept/reject decision of a demonstrated trajectory if it overlaps with many state-action pairs of clean trajectories. However, a limitation of using it as the only decision variable is that if the demonstrated trajectory is non-adversarial and similar to a clean trajectory, but overlaps with very few or no state-action pairs in it, the occupancy measure would be close to or equal to zero and give an incorrect decision of rejecting the trajectory.
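Since the state-visitation probabilities p(s_t = s_i | π_clean) are not directly available in practice, an implementation can approximate OC_τ empirically from the two trajectories themselves. The sketch below shows one such approximation; the state-matching tolerance and discount value are assumptions, and this is a stand-in for, not an exact evaluation of, the expression above:

```python
# Empirical stand-in for OC_tau: discounted count of steps of the demonstrated
# trajectory whose (state, action) pair also occurs (up to a state tolerance)
# somewhere in the clean trajectory.
import numpy as np

def occupancy_overlap(traj, traj_clean, gamma=0.99, tol=1e-3):
    clean = [(np.asarray(s, float), a) for s, a in traj_clean]
    oc = 0.0
    for t, (s, a) in enumerate(traj):
        s = np.asarray(s, float)
        if any(a == a_c and np.linalg.norm(s - s_c) < tol for s_c, a_c in clean):
            oc += gamma ** t
    return oc
```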
Fréchet Distance (FD). Our second trajectory divergence measure is the Fréchet distance <cit.>. It gives the distance between two polylines while considering the spatial and temporal ordering of the points on them. Mathematically, the Fréchet distance between an expert trajectory τ and a clean trajectory τ_clean is given by:
FD_τ = min_α, βmax_t ∈ [0,1] d(τ(α(t)), τ_clean(β(t))),
where, d(·) gives the Euclidean distance or L2 norm between two trajectory points on τ and τ_clean respectively. α, β are functions that take an argument t ∈ [0,1] and return an index into τ and τ_clean respectively, with α(0) = β(0) = 0, and α(1) = |τ|, β(1) = |τ_clean|. The Fréchet distance calculation iterates over different functions for α and β, determines the maximum distance between ordered pairs of points on τ and τ_clean for each α and β combination iterated over, and, finally, returns the minimum of these maximum distances. When both expert and clean trajectories are identical, the Fréchet distance has its smallest value, 0. As the two trajectories get further apart, the Fréchet distance increases. For the last example from the previous paragraph, using the Fréchet distance rectifies the incorrect decision given by occupancy measure as the Fréchet distance for a demonstrated trajectory with high similarity but little or no overlap in state-action pairs with a clean trajectory would have a low value and yield a correct decision to accept the trajectory.
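The continuous Fréchet distance is expensive to compute exactly; in practice, the discrete Fréchet distance over the sampled trajectory points is a standard surrogate. The following dynamic-programming sketch (in the style of Eiter and Mannila) illustrates it; restricting the comparison to the state components of the trajectories is an assumption:

```python
# Discrete Frechet distance between two trajectories of states, computed with the
# standard dynamic program over couplings of the ordered points.
import numpy as np

def discrete_frechet(P, Q):
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    ca = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            d = np.linalg.norm(P[i] - Q[j])
            if i == 0 and j == 0:
                ca[i, j] = d
            elif i == 0:
                ca[i, j] = max(ca[0, j - 1], d)
            elif j == 0:
                ca[i, j] = max(ca[i - 1, 0], d)
            else:
                ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d)
    return ca[-1, -1]
```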
To make an accept/reject decision of a trajectory based on its occupancy measure and Fréchet distance values, we train a classifier, χ: OC × FD →{ Accept, Reject} via supervised learning. The classifier's training set contains the OC and FD values sampled from different clean and adversarial trajectories, along with a label, λ_τ for each trajectory sample, given by:
λ_τ = Accept, if R(τ) ≥ (1-ϵ_p) R_max; Reject, otherwise.
Handling Benign Divergent Trajectories. The classifier χ suffices to admit trajectories based on the similarity of their spatio-temporal features to known, benign trajectories. However, a demonstrated trajectory that shows a novel way to perform the task and is suitable for learning from, might have a high divergence measure and, consequently, get rejected by the classifier. To address these false positives, we augment the classifier's prediction with a special condition that reverses only the reject decisions on a trajectory if the ratio of the returns (sum of rewards) between the demonstrated and clean trajectories is above a fraction 1-ϵ_p. The advantage of using the return ratio only is that it can be calculated quickly using the agent's reward function and demonstrated trajectory data, without requiring access to the agent's policy or value functions that require complex, time-consuming calculations.
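Putting the two divergence measures, the classifier, and the return-ratio override together, the accept/reject decision for a (partial) demonstrated trajectory can be sketched as follows. The sketch reuses the occupancy_overlap and discrete_frechet sketches above, assumes the classifier encodes Accept as 1, and treats the value of eps_p and the availability of the agent's reward function as assumptions:

```python
# Accept/reject decision for a (partial) demonstrated trajectory: accept if the
# OC/FD classifier predicts Accept (encoded as 1), otherwise fall back on the
# return-ratio override against the clean-trajectory return.
def accept_trajectory(traj_part, clean_part, classifier, reward_fn,
                      clean_return, eps_p=0.05, gamma=0.99):
    oc = occupancy_overlap(traj_part, clean_part, gamma=gamma)
    fd = discrete_frechet([s for s, _ in traj_part], [s for s, _ in clean_part])
    if classifier.predict([[oc, fd]])[0] == 1:
        return True
    demo_return = sum(reward_fn(s, a) for s, a in traj_part)
    return demo_return >= (1.0 - eps_p) * clean_return
```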
Algorithm <ref> gives the pseudo-code for repairing trajectories with our options-based framework using the above divergence measures and the trajectory accept/reject decision classifier. Given a set of guaranteed clean trajectories, T_clean, and a set of demonstrated trajectories, we first split trajectories from each set into M parts (line 3). For each part, we determine if it can be accepted into the training set for the imitation learning algorithm using the classifier's 'Accept' prediction or the return ratio criterion (lines 5-7). If acceptable, the demonstrated trajectories are included with the clean trajectories for training the policy π^*_i for sub-task i via imitation learning (lines 8-9). The initiation and termination states for option i are also recorded along with policy π^*_i within option ω_i. An important requirement for using options is option chaining, which determines when to terminate option ω_i and how to select the next option ω_i+1, so that an end-to-end policy can be formed in the state-action space of the problem. While chaining is done at policy execution time <cit.>, we create a dictionary D_chain: S → S while creating the set of options to speed up execution. D_chain is constructed in Lines 15-18 of Algorithm <ref> by recording, for each state s_i ∈β_i, the state s_j ∈ I_i+1 that is closest to it in terms of L2 norm distance.
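A sketch of this D_chain construction is shown below, assuming the Option container from the earlier sketch; storing state keys as tuples is an implementation assumption:

```python
# Precompute D_chain: map each termination state of option i to the nearest (L2)
# initiation state of option i+1, so that switching options is a lookup at run time.
import numpy as np

def build_chain_dict(options):
    d_chain = {}
    for opt, nxt in zip(options[:-1], options[1:]):
        init_next = np.asarray(nxt.initiation, float)
        for s_end in opt.termination:
            dists = np.linalg.norm(init_next - np.asarray(s_end, float), axis=1)
            d_chain[tuple(s_end)] = tuple(nxt.initiation[int(np.argmin(dists))])
    return d_chain
```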
§.§.§ Option Chaining
Algorithm <ref> shows the option chaining at run-time to enable executing successive policies for sub-tasks using options. As shown in Lines 8-12 of Algorithm <ref>, to determine if policy π_i in option ω_i is about to terminate, a state s_cur that is reached by the agent while executing π_i is checked for proximity within an L2 norm distance of ϵ_chain from any state in the termination set β_i. If any such states exist in β_i, the closest such state to s_cur, s_end, is selected (line 10) and s_cur is updated to a state in the initiation set of the next option ω_i+1 given by D_chain(s_end) (line 11). The current option is also updated to the option for the next sub-task (line 12).
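A corresponding run-time sketch of this chaining check is given below; the value of eps_chain and the tuple-keyed D_chain are assumptions carried over from the previous sketches:

```python
# Run-time chaining check: if the current state is within eps_chain (L2) of a
# termination state of the current option, jump to the chained initiation state
# of the next option and switch to its policy.
import numpy as np

def maybe_switch_option(s_cur, opt_idx, options, d_chain, eps_chain=0.1):
    opt = options[opt_idx]
    term = np.asarray(opt.termination, float)
    dists = np.linalg.norm(term - np.asarray(s_cur, float), axis=1)
    if opt_idx + 1 < len(options) and dists.min() < eps_chain:
        s_end = opt.termination[int(np.argmin(dists))]
        return d_chain[tuple(s_end)], opt_idx + 1
    return s_cur, opt_idx
```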
§ THEORETICAL ANALYSIS
In this section, we formalize our trajectory repair technique described in Section <ref>. First, we show that using trajectory divergence measure alone gives a weak condition for the accept/reject decision for demonstrated trajectories. We then show that augmenting this decision with a rewards-based rule (following Algorithm <ref>, Line 7) guarantees that accept/reject decisions are consistent with benign and adversarial trajectories. Finally, we show that the above results remain valid for part trajectories so that they can be applied to our options-based, trajectory repair technique.
Definition 1. Dominated Policy. Given two policies π and π', we say π is dominated by π' if V^π/V^π' < 1-ϵ_p, where ϵ_p is a constant. We denote this in shorthand as π≺π'.
Definition 2. Divergent Trajectories. Let τ^π and τ^π' represent two trajectories that are sampled from two policies π and π'. We say τ^π and τ^π' are divergent if D(τ^π, τ^π') > δ, where D is a divergence measure between τ^π and τ^π' and δ is a constant.
Definition 3. Local Policy Repair Function. Given state s and two policies π and π' with π≺π', a local policy repair function is a transformation f_rep: S × [0, 1]^|A|→ [0, 1]^|A|, such that D̃(π_s || f_rep(s, π'_s)) < ϵ_div, where D̃ is a distance measure between two probability distributions.[Note that f_rep(s, π'_s) transforms π' to a new policy, say π”]
Definition 4. ϵ-repair set. Given an initial policy π and a target policy π^tar, the ϵ-repair set for π, π^tar, denoted ρ_π→π^tar, is a set of states such that the policy π' obtained by applying f_rep(s, π) to every s ∈ρ_π→π^tar satisfies V^π'/V^π^tar≥ 1-ϵ_p.
Theorem 1. If π≺π', then trajectories τ^π and τ^π' sampled from π and π' respectively are divergent.
Proof. (By contradiction.)
Let us suppose π≺π', but trajectories τ^π and τ^π' are not divergent, that is, D(τ^π, τ^π') ≤δ. Without loss of generality, we assume δ = 0. This implies that the divergence measure between τ^π and τ^π' is zero, and, consequently, ∀ i, s_i^τ^π = s_i^τ^π'. Now, from the definition of a dominated policy in Definition 1, it follows that V^π≠ V^π'. Recall, V^π= 𝔼[∑_i=0^H R(s_i, a_i): a_i = π(s_i)], and, so, there must be at least one time-step, i, at which, R(s_i^π, a_i^π) ≠ R(s_i^π', a_i^π'). This implies, either s_i^π≠ s_i^π', or, s_i^π = s_i^π', but a_i^π≠ a_i^π'. The latter case implies different actions are taken at state s_i by policies π and π', which leads to different next states s_i+1^π≠ s_i+1^π', reached by policies π and π'. In both cases, there are at least two states on trajectories generated from π and π' that are distinct from each other, that is, s_i^τ^π≠ s_i^τ^π', for at least some i. This contradicts our assumption, ∀ i, s_i^τ^π = s_i^τ^π'. Hence proved. □
However, we note that converse of Theorem 1 is not valid - when D(τ^π, τ^π') > δ, it is not guaranteed that policy π' will dominate π. We give an informal proof sketch: if D(τ^π, τ^π') > δ, there must be at least one i where s_i^τ^π≠ s_i^τ^π'. We cannot make any guarantees about the relative rewards at these states while using policies π' and π. If R(s_i^π, a_i^π) > R(s_j^π', a_j^π'), s_i ≠ s_j, we could get V^π > V^π', which would imply that π' is dominated by π. On the other hand, if R(s_i^π, a_i^π) < R(s_j^π', a_j^π'), π' dominates π. This means that trajectory divergence is a necessary, but not a sufficient condition for policy dominance. This necessitates an additional condition to select states from S to construct the ϵ-repair set. For this, we propose the following rule:
Rule 1. For a state s_i^π to be added to ϵ-repair set, ρ_π→π', R(s_i^π,π_s_i^π)/R(s_i^π', π_s_i^π') < 1-ϵ_p.
The above rule states that a state can be added to the ϵ-repair set if the reward at that state by selecting an action using policy π is lower than selecting an action using policy π'. Based on this rule, we have the following theorem about the convergence of trajectories based on their divergence measure.
Let M_max denote the maximum number of states in S where R(s_i^π,π_s_i^π) < R(s_i^π', π_s_i^π').
Theorem 2. If Rule 1 is applied M times to build the ϵ-repair set ρ_π→π', then as M → M_max, V^π/V^π'→ 1 and D(τ^π, τ^π') → 0.
Proof.[For legibility, we give the proof for ϵ_p = 0, it can be extended easily to ϵ_p = 0^+.] Recall that V^π= 𝔼[∑ R(s_i^π, π_s_i^π)] and V^π'= 𝔼[∑ R(s_i^π', π'_s_i^π')]. The difference between these two terms can be written as:
V^π' - V^π = 𝔼[∑ (R(s_i^π', π'_s_i^π') - R(s_i^π, π_s_i^π))]
= 𝔼[R(s_1^π', π'_s_1^π') + ... + R(s_k^π', π'_s_k^π') + ... + R(s_M^π', π'_s_M^π')]
- 𝔼[R(s_1^π, π_s_1^π) + ... + R(s_k^π, π_s_k^π) + ... + R(s_M^π, π_s_M^π)]
We use Δ V_0^π' - π as a shorthand to denote the initial value of V^π' - V^π (before applying Rule 1), and, Δ V_1^π' - π as its value after applying Rule 1 once, Δ V_2^π' - π as its value after applying Rule 1 twice, and, so on. If we select states (s_k^π, s_k^π') via Rule 1 and apply f_rep(s_k^π, π_s_k^π), then because R(s_k^π', π_s_k^π') - R(s_k^π,π_s_k^π) > 0, therefore, Δ V_0^π' - π > Δ V_1^π' - π. Similarly, Δ V_1^π' - π > Δ V_2^π' - π. If we continue in this manner, V^π' - V^π becomes successively smaller and smaller. Finally, when f_rep() has been applied at most M_max times, Δ V_M_max^π' - π= 0. At this point, V^π' = V^π, or V^π/V^π' = 1. In the limiting case, when state pairs (s_k^π, s_k^π') have R(s_k^π', π_s_k^π') - R(s_k^π,π_s_k^π) ≈ 0^+, we get, V^π/V^π'→ 1.
In a similar manner, applying f_rep(s_k^π, π_s_k^π) makes π_s_k = π'_s_k, and, consequently, the same state is reached by taking action π'_s_k at s_k^π. This makes, D_0(τ^π, τ^π') > D_1(τ^π, τ^π') > D_2(τ^π, τ^π') > ..., where the subscript denotes the number of times Rule 1 and f_rep() have been applied. When f_rep() has been applied M_max times, we get D_M_max(τ^π, τ^π') = 0, and, in the limiting case, when state pairs (s_k^π, s_k^π') have R(s_k^π', π_s_k^π') - R(s_k^π,π_s_k^π) ≈ 0^+, D(τ^π, τ^π') → 0. □.
Lemma 3. If policies π and π', with π≺π', are divided into sub-policies π_1, π_2, ..., π_M and π'_1, π'_2, ..., π'_M, then for at least one interval m ∈{1, ..., M}, π_m ≺π'_m.
Proof. (by contradiction) From Definition 1, if π≺π', then V^π < V^π'.[For simplicity and without loss of generality, we slightly relax Definition 1 by assuming ϵ_p=0, which gives V^π/V^π'<1] Suppose π≺π' and policies π and π' are divided into sub-policies, π_1, π_2 and π'_1, π'_2 respectively, and, both sub-policies of π are not dominated. That is, V^π_1 ≥ V^π'_1 and V^π_2 ≥ V^π'_2. Rearranging and adding terms of the last two inequalities, we get, V^π_1 - V^π'_1 + V^π_2 - V^π'_2 ≥ 0, or, (V^π_1 + V^π_2) - (V^π'_1 + V^π'_2) ≥ 0. Substituting (V^π_1 + V^π_2) = V^π and V^π'_1 + V^π'_2 = V^π', we get V^π - V^π' ≥ 0, or, V^π ≥ V^π', which contradicts the definition of π≺π'. Therefore, our assumption that V^π_1 ≥ V^π'_1 and V^π_2 ≥ V^π'_2 (both sub-policies are not dominated) is incorrect, and at least one of the sub-policies must be dominated. This proof can be easily extended beyond two sub-policies by induction. □
Theorem 4. Repairing a partial policy via options is faster than repairing the full-horizon policy.
Proof.
§ EXPERIMENTAL RESULTS
§.§ Experimental Setup
Environment. For evaluating our option-based adversarial RL algorithm, we used the LunarLander environment available within OpenAI Gym <cit.>. The problem consists of landing an airborne two-legged spacecraft at a specific location called the landing pad within a 2D environment akin to the surface of the Moon. The state space consists of an 8-dimensional vector given by the 2-D coordinates of the center of the spacecraft, 2-D linear velocity, orientation and angular velocity, and whether both legs of the spacecraft are on the ground. The initial state of the spacecraft consists of random coordinates towards the top of the environment and random initial velocity. The action space of the spacecraft consists of four actions: to fire its main, left or right engines or do nothing (no-op). The agent receives a reward of 320 points for landing with both legs on the landing pad and a penalty of -100 points for crashing, while maneuvering the spacecraft incurs a penalty of -0.3 for using the main engine and -0.03 for the left or right engine. For our baseline reinforcement learning algorithm we used the deep Q-network learning (DQN) algorithm available via stable baselines <cit.>. The algorithms were implemented using the following open source libraries: Tensorflow 1.15, OpenAI Gym 0.18 and Stable Baselines 2.10.
Generating clean and adversarial trajectories. To generate clean trajectories we trained a Deep Q-network (DQN) algorithm in the LunarLander environment for 2.5 × 10^5 time-steps, all other algorithm hyper-parameters were set to the values given in RL Baselines Zoo <cit.>. We generated 1000 clean trajectories. These trajectories were then modified using the adversarial trajectory modification algorithms described in Section <ref>. For the adversarial attacks, we used η = {0.3, 0.6, 0.9}, γ_i = {0.3, 0.6, 0.9}, and attack location as {BEG, MID, END, FLP} giving rise to 36 different adversarial trajectory sets, each comprising 1000 trajectories.
We then trained policies via imitation learning with these adversarial trajectory sets.
§.§.§ Trajectory Modification Attacks by Adversary
We considered two adversarial attack strategies for modifying expert demonstrations: 1) A directed attack strategy that requires access only to clean trajectories demonstrated by a benign expert, 2) A gradient-based attack strategy that requires access to the learner's policy network and rewards.
A directed attack targets sequential locations inside a trajectory starting from an attack start location, which could be either at the beginning (BEG), middle (MID) or end (END) of the trajectory. The actions at γ_i|τ_i| consecutive locations following the attack start location are then modified using an action modification function ϕ: A → A, given by ϕ(a_i,k) = argmax_{a' ∈ A} Δ_s(s', s_i,k+1), where s' = argmax_s T(s_i,k, a', s). That is, ϕ replaces a_i,k with the action that takes the agent to a state s' that is farthest, along a state-distance metric Δ_s, from s_i,k+1. The directed attack is a straightforward, fast, yet effective attack as it does not require the adversary to have information about the learner's reward function. While it can be realized by the adversary with a lower attack budget, it can also be detected relatively easily by the learner.
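A sketch of the directed attack is given below; the candidate_next_state helper stands in for the forward dynamics model T and is a hypothetical function, and the choice of four discrete actions (matching LunarLander) is also an assumption:

```python
# Directed attack sketch: starting at `start` (an index chosen for BEG/MID/END),
# replace gamma_i * |traj| consecutive actions with the action whose predicted next
# state is farthest (L2) from the originally demonstrated next state.
import numpy as np

def directed_attack(traj, gamma_i, start, candidate_next_state, n_actions=4):
    traj = list(traj)
    n_attack = int(gamma_i * len(traj))
    for k in range(start, min(start + n_attack, len(traj) - 1)):
        s, _ = traj[k]
        s_next_orig = np.asarray(traj[k + 1][0], float)
        scores = [np.linalg.norm(np.asarray(candidate_next_state(s, a), float) - s_next_orig)
                  for a in range(n_actions)]
        traj[k] = (s, int(np.argmax(scores)))   # adversarial replacement of a_{i,k}
    return traj
```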
Gradient-based Attack. The gradient-based technique is inspired by the hot-flip (FLP) technique <cit.> for text perturbation. The technique identifies the minimum number of characters and their locations within a text string that need to be modified so that the text string gets mis-classified by a supervised learning-based model. We apply a similar idea for our gradient-based attack where the adversary identifies locations or indices in the trajectory that need to be modified, by considering the gradient of the objective or reward function with respect to the observation, denoted by ∂ r_i,k/∂ o_i,k, k = 1...|τ_i|, and swaps the observations corresponding to maximum and minimum gradients for γ_i|τ_i| iterations. The pseudo-code for the gradient-based attack is shown in Algorithm <ref>.
Note that for the gradient-based attack, the adversary needs to have knowledge about the learner's reward function. We note that researchers have proposed sophisticated but also more computationally complex attacks for modifying actions <cit.> that aim to reduce the reward received by an RL agent, although not within the context of imitation learning. Our attacks are computationally simpler but still achieve the desired effect of reducing the learner's rewards. The policy repair technique proposed in the paper could also be used in conjunction with any of these attacks.
Note that because the adversary modifies clean trajectories to generate adversarial trajectories, it knows the partition of 𝕋 into 𝕋_clean and 𝕋_adv and can calculate γ_i and η. The learner on the other hand does not know the partition and is not aware of these parameters.
§.§ Experimental Validation
We evaluated the performance of our proposed technique using the following hypotheses:
H1. Adversarial Perturbation Effect:
Increasing the amount of perturbation in the expert demonstration trajectories decreases the performance of conventional imitation learning.
H2. Trajectory Accept/Reject Decision based on Trajectory Divergence: A supervised learning based classifier that combines the occupancy measure and Fréchet distance metrics of demonstrated trajectories can identify parts of the trajectories that have been adversarially modified with acceptable accuracy.
H3. Trajectory Repair: The proposed options-based, trajectory repair technique (Algorithms <ref>) can avoid learning from parts of demonstrated trajectories that have been adversarially modified so that the learning agent's performance does not degrade.
H4. Explainability: Using our options-based, trajectory repair technique (Algorithms <ref>) it is possible to determine which portions of a demonstrated trajectory cause the learning agent's performance to degrade.
H5. Time Overhead: The option chaining technique (Algorithm <ref>) increases the time overhead of calculating the policy over a conventional RL-based technique by a small, acceptable amount.
To validate our Hypothesis 1 that the strength of the adversarial perturbation in the demonstrations reduces the rewards and learning time of the learned model, we evaluated the effect of gradually increasing the number of trajectories modified (η) and the fraction of modified actions within each trajectory γ_i. While it is intuitive that increasing either η or γ_i will reduce the learned model's rewards, we want to understand the degree to which each of these parameters affects its performance, while applying the perturbations at different locations in demonstrated trajectories. Figure <ref> shows the effect of changing the amount of perturbation in the expert demonstrations on the cumulative median rewards for different attack locations, BEG, MID and END, of the directed attack and for the gradient-based attack (FLP). We observe that when a small fraction of the expert demonstration set is changed (η = 0.3), the rewards are affected nominally for all types of attacks. However, for higher values of η, the median rewards drop significantly, between -200% and -400%. Within a fixed value of η, 0.6 or 0.9, we see that changing γ_i (no. of actions modified inside each perturbed trajectory) also has the effect of reducing the rewards as the expert demonstrations contain more incorrect actions to learn from. We also observe that the decrease in rewards is less for attack locations MID and END, as compared to BEG for the directed attack. This makes sense because misguiding the learned model to make mistakes via demonstrating incorrect actions early on makes the trajectories veer further off from the correct course and makes it difficult for the learned model to recuperate and return on-track. Finally, we show the number and standard deviation of episodes completed (dashed line) for the different perturbation amounts, averaged over the different attack types. The number of episodes increases with increase in perturbation as with more perturbation the agent fails quickly, right after starting the task, and restarts another episode; that is, there are many shorter, failed episodes with η = 0.6, 0.9 than with no or low perturbation (η = 0, 0.3). Overall, these results validate Hypothesis 1 while showing that η (fraction of trajectory set that is modified) has a greater effect than γ_i (fraction of actions modified inside each modified trajectory) on the successful task completion, and, consequently, the rewards of the learned model.
For validating Hypothesis 2, we trained a classifier via supervised learning and evaluated its prediction accuracy and F1-score for trajectory accept/reject decisions. For the training set of the classifier we sampled 400 trajectories, corresponding to nearly 100,000 state-action pairs. The training trajectories were either clean or perturbed with perturbation strengths drawn uniformly from η, γ_i ∈{0.3, 0.6, 0.9} and perturbation locations drawn uniformly from {BEG, MID, END, FLP}. Our training set is not very large[We used 400 trajectories in the training set as the sample diversity did not increase beyond this value for our tested LunarLander environment.] and to improve classification accuracy with such smaller training sets, ensemble learning <cit.>, which combines the predictions from multiple classifiers, has been proposed as a suitable technique. We used an ensemble of classifiers with individual classifiers as: K-nearest neighbors with no. of neighbors as 2, support vector machine with polynomial kernel function, decision tree with max-depth of 9, and ada boost classifier with number of estimators = 50. For the final prediction, we used ensemble voting with uniform weights given to individual classifier predictions followed by a majority voting between them. The classifier algorithms were implemented using the scikit-learn 1.2 library and the hyper-parameters in the different classifier algorithms were set to their default values given in the library. Figure <ref> shows a profile of the learned classifier model for different occupancy measure and Fréchet distance values. It indicates that the general rule learned by the classifier is to reject trajectories with very low (near zero) occupancy measure value or very high (>∼ 1.5) Fréchet distance values, while for intermediate values the classification boundary exhibits a polynomial dependency on occupancy measure and Fréchet distance values. We tested the classifier with a test set of 1000 different trajectories that were either full length (end-to-end), or part trajectories that were either half or a third of the full length, sampled from various portions of trajectories. The classification accuracies and F1 scores for different trajectories are given in Table <ref>. For all of the tested trajectories, the false negatives (accepting an adversarial trajectory) were below 10%. Overall, these results validate Hypothesis 2 by showing that the classifier can be used as a reliable method to identify and make accept/reject decisions for demonstrated trajectories.
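The ensemble described above can be assembled with scikit-learn roughly as follows; the hard-voting setup mirrors the uniform-weight majority vote, while the label encoding (1 = Accept, 0 = Reject) and the commented-out fit call are assumptions for illustration:

```python
# Uniform-weight voting ensemble over the four base classifiers; inputs are
# (occupancy measure, Frechet distance) pairs, labels are 1 = Accept, 0 = Reject.
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def make_accept_reject_classifier():
    return VotingClassifier(
        estimators=[
            ("knn", KNeighborsClassifier(n_neighbors=2)),
            ("svm", SVC(kernel="poly")),
            ("tree", DecisionTreeClassifier(max_depth=9)),
            ("ada", AdaBoostClassifier(n_estimators=50)),
        ],
        voting="hard")

# clf = make_accept_reject_classifier().fit(X_train, y_train)  # X: (OC, FD) pairs
```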
For validating Hypothesis 3, on trajectory repair, we experimented with 2 and 3 partitions of the trajectory. For each partition, we considered that the adversary had perturbed the trajectory at different locations BEG, END and FLP. Table <ref> shows the results for trajectory repair using our proposed technique when trajectories are divided into two equal parts, including the decision made by the trajectory accept/reject classifier for full and two-part trajectories (with and without options), and the rewards before and after trajectory repair. Note that the first three columns of Table <ref>, perturb location, η and γ_i, are given for legibility and are not known by the learning agent or repair technique. The results show that for all cases adversarially modified trajectories can be identified and repaired at the portions that were modified, while preserving the portions that were not modified, and preventing the performance of the learning agent from degrading as shown by the median rewards similar to the clean trajectory reward values. The main impact of our trajectory repair technique is seen for perturb location END, as part trajectories are repaired to improve the reward to values similar to clean trajectory rewards. Moreover, using trajectory repair, we are able to detect and use the clean part of the trajectory, thereby improving sample efficiency. For perturbation location BEG, trajectories are mostly rejected because, as the decisions are sequential, modifying actions early on in an episode results in incorrect or sub-optimal actions downstream in the episode. For η=0.3, for all perturbation locations, we see that some part trajectories get rejected by the classifier when not using the return ratio condition in Line 7 of Algorithm <ref> (marked with an asterisk in Table <ref>). This happens because for our LunarLander task, different episodes start from different initial locations and their occupancy measure and Fréchet distance values show a larger divergence in the initial part of episodes. However, for all these cases, we note that the reward is not degraded using the return ratio condition. The gradient-based attack, FLP, is more difficult to detect as the perturbation locations made by the adversary in the trajectories are selected strategically and are not successive, as in the directed attack. The bottom part of Table <ref> shows that our technique works successfully for gradient-based attacks as well and is able to discern and reject modified trajectories and learn only from the clean parts of trajectories, when available. Our experiments with perturb location MID (not reported here) showed similar results as BEG and END: parts of trajectories before the perturb location were accepted by the classifier while those following the perturb location were rejected as they were downstream and affected by the perturbation; rewards in all cases were restored to values similar to clean trajectories following trajectory repair.
Table <ref> shows the trajectory repair results when the trajectories are divided into three parts. We report the results for gradient-based (FLP) attacks only as they are more difficult to detect. Here too, we see that the trajectory repair technique is able to identify parts of trajectories that have lower perturbation and can be used for learning without degrading the task performance, as shown by the restored reward values for these cases.
§.§ Ablation Experiments
We performed two ablation experiments by removing certain features of our algorithms to understand their effect on the results.
Effect of Return Ratio Condition. In the first experiment, we tested the effect of using the return ratio condition to override a reject decision made by the trajectory classifier (Line 7, Algorithm <ref>). The reward ratio is provided as a guard rail against false positives from the classifier so that correct, benign trajectories that show a new way to perform the task and have a higher divergence measure from known, clean trajectories do not get discarded. For this experiment, we varied the perturbation strengths, η={0.3, 0.6, 0.9}, the perturbation locations, {BEG, MID, END, FLP}, and the number of trajectory parts, {2, 3}, and recorded the average fraction of trajectories that got changed from 'Accept' to 'Reject' when not using the reward ratio condition. For η={0.6, 0.9}, for all perturbation locations, we observed that none of the classifier decisions were changed after removing the reward ratio, for both 2- and 3-part trajectories. This indicates that for higher perturbation strengths, false positives are absent or rare and the reward ratio condition is not triggered. For η=0.3, our results are shown in Figure <ref>. We see that for 2-part trajectories 50% of the trajectory is discarded, while for 3-part trajectories between 50-90% of the trajectory gets discarded. However, although discarding trajectories deteriorates sample efficiency, it does not affect the learning performance as the difference in the rewards with and without the reward ratio was nominal (within 1-3%). In general, our findings from this experiment indicate that when the perturbation strength is low and the divergence measure has difficulty in classifying a trajectory accept/reject decision, the reward ratio condition is important to prevent valid but divergent trajectories from getting discarded.
Overhead introduced by Options. Options are the key component of our technique as they facilitate partitioning trajectories and retaining and learning from only the usable part trajectories. To determine the feasibility of our technique there are two important questions related to using options that need to be addressed: does using options affect the learning performance in terms of rewards, and can options be used without degrading the rewards? To answer these questions, we performed our next ablation experiment. We trained the agent to learn to play the LunarLander game using full trajectories versus using part trajectories via options, and recorded the difference in median rewards for these two settings for different perturbation strengths, η={0.3, 0.6, 0.9}, different perturbation locations, {BEG, MID, END, FLP}, and different numbers of trajectory parts, {2, 3}. Our results are shown in Figure <ref>; the black lines at one end of the bars show the reward without options and the bars show the change in rewards using options. We see that when there is little perturbation (η=0.3), using options causes a negligible change in rewards, around 1-2%. When the perturbation increases to η={0.6, 0.9}, the rewards for using options either increase or decrease relative to the reward without options. This indicates that perturbed trajectories make it difficult to chain options. Also, as chaining options has to be done between every pair of trajectory parts, as the number of trajectory parts increases, the decrease in rewards from using options also becomes more pronounced. However, when options are repaired using Algorithm <ref>, the part trajectories can again be chained efficiently and the rewards are again restored to higher values, similar to those learned for clean trajectories. Overall, these experiments show that our approach of repairing part trajectories with options does not introduce significant overhead in the computations of the imitation learning algorithm.
§ CONCLUSIONS AND FUTURE WORK
In this paper, we proposed a novel technique that uses options to selectively include portions of demonstrated trajectories when training the policy of an imitation learning-based agent in the presence of demonstrations given by potentially adversarial experts. Our results show that, using our technique, the agent can avoid learning from portions of trajectories that would degrade its reward. Our technique provides two main advantages: it improves the robustness of the policy training as well as the sample complexity of the demonstrations, without incurring a significant overhead in policy training time. Closely related to our research is the field of opponent modeling, cross-play and inter-play, where agents build models of their opponents' behaviors from observations and train their policies by playing against those models. A potential problem in opponent modeling is deception, where opponents can demonstrate incorrect behaviors via trajectories to mislead an agent. Our proposed trajectory repair technique could be used in such situations to identify deceptive trajectories by comparing them with trajectories of known or rational opponent behaviors, and thus prevent learned policies from being misled.
One of the requirements of our technique is that a human identify a base set of clean trajectories with which the agent's task is performed successfully. While most real-life domains require human subject matter experts to provide such feedback, techniques like inverse reinforcement learning, which automatically update the reward function to improve the agent's performance, could be used to reduce the technique's reliance on human expertise. Another aspect of our work is that it assumes that clean trajectories have low divergence between them. For tasks that can be solved in different ways, clean trajectories representing different ways to solve the task might have high divergence from each other. In such cases, the different ways of performing the task could be grouped into clean trajectory clusters, and the divergence of demonstrated trajectories could be determined with respect to each clean trajectory cluster to make the accept/reject decision for our technique.
We have used behavior cloning as our imitation learning algorithm. More sophisticated imitation learning algorithms like the data aggregation (DAgger) algorithm <cit.> or generative adversarial imitation from observation (GAIfO) <cit.> could be used in place of behavior cloning. These algorithms are likely to improve the performance of the imitation learning portion, and our proposed options-based technique could still be used with them to partition demonstrated trajectories and identify acceptable partial trajectories for learning.
Our proposed technique was aimed at enabling an agent to use imitation learning in the presence of adversarial trajectories. It is likely that smart adversaries will discover that their attacks are not effective against a learning agent that has used our technique to avoid accepting adversarial trajectories. They could then craft new types of adversarial trajectory attacks to evade the trajectory accept/reject decision classifier. Such situations could be modeled as a higher-level, adversarial game between the adversary and the learning agent, and techniques from hierarchical reinforcement learning <cit.> and Bayesian games <cit.> could be used to solve them.
We envisage that further investigation of the options-based technique for adversarial imitation learning described in this paper will lead to new insights into the problem of learning from demonstrations and could be used by a learning agent to quickly and robustly learn effective operations in new, open environments from clean as well as adversarial trajectories.
§ ACKNOWLEDGEMENTS
This work was supported by the U.S. Office of Naval Research as part of the FY21 NRL Base Funding 6.1 project Game Theoretic Machine Learning for Defense Applications.
|
http://arxiv.org/abs/2306.06655v1
|
20230611115548
|
Turán problem for $\mathcal{K}_4^-$-free signed graphs
|
[
"Fan Chen",
"Xiying Yuan"
] |
math.CO
|
[
"math.CO"
] |
theoremTheorem[section]
assumption[theorem]Assumption
corollary[theorem]Corollary
proposition[theorem]Proposition
lemma[theorem]Lemma
definition[theorem]Definition
remark[theorem]Remark
problem[theorem]Problem
claimClaim
conjecture[theorem]Conjecture
factFact
Abstract
Suppose that Ġ is an unbalanced signed graph of order n with e(Ġ) edges. Let ρ(Ġ) be the spectral radius of Ġ, and 𝒦_4^- be the set of the unbalanced K_4. In this paper, we prove that if Ġ is a 𝒦_4^--free unbalanced signed graph of order n, then e(Ġ)⩽n(n-1)/2-(n-3) and ρ(Ġ)⩽ n-2. Moreover, the extremal graphs are completely characterized.
Keywords: Signed graph, Adjacency matrix, Spectral radius, Turán problem.
§ INTRODUCTION
Throughout this paper, the graph G is assumed to be simple and undirected. The vertex set and the edge set of a graph G are denoted by V(G) and E(G). A signed graph Ġ=(G,σ) consists of a graph G, called the underlying graph, and a sign function σ:E(G) →{-1,+1}. Signed graphs first appeared in the work of Harary <cit.>. If all edges have sign +1 (resp. -1), then Ġ is called all positive (resp. all negative) and denoted by (G,+) (resp. (G,-)). The sign of a cycle C of Ġ is σ(C)=∏_e∈ E(C)σ(e); a cycle whose sign is +1 (resp. -1) is called positive (resp. negative). A signed graph Ġ is called balanced if all its cycles are positive; otherwise it is called unbalanced. For more details about the notion of signed graphs, we refer to <cit.>.
Let U be a subset of the vertex set V(Ġ) and Ġ_U be the signed graph obtained from Ġ by reversing the sign of each edge between a vertex in U and a vertex in V(Ġ)∖ U. We say the signed graph Ġ_U is switching equivalent to Ġ, and write Ġ∼Ġ_U. The switching operation preserves the signs of cycles, so if Ġ is unbalanced, then Ġ_U is also unbalanced.
For an n× n real symmetric matrix M, all its eigenvalues will be denoted by λ_1(M)⩾λ_2(M)⩾⋯⩾λ_n(M), and we write Spec(M)={λ_1(M),λ_2(M),⋯,λ_n(M)} for the spectra of M. The adjacency matrix of a signed graph Ġ of order n is an n× n matrix A(Ġ)=(a_ij). If σ(uv)=+1 (resp. σ(uv)=-1), then a_uv=1 (resp. a_uv=-1) and if u is not adjacent to v, then a_uv=0. The eigenvalues of A(Ġ) are called the eigenvalues of Ġ, denoted by λ_1(Ġ)⩾λ_2(Ġ)⩾⋯⩾λ_n(Ġ). In particular, the largest eigenvalue λ_1(Ġ)
is called the index of Ġ. The spectral radius of Ġ is defined by ρ(Ġ)=max{|λ_i(Ġ)|:1⩽ i⩽ n}.
Since, in general, A(Ġ) is not similar to a non-negative matrix, it may happen that -λ_n(Ġ)>λ_1(Ġ). Thus,
ρ(Ġ)=max{λ_1(Ġ),-λ_n(Ġ)}. For the diagonal matrix S_U=diag(s_1,s_2,⋯,s_n), we have A(Ġ)=S_U^-1A(Ġ_U)S_U where s_i=1 if i∈ U, and s_i=-1 otherwise. Therefore, the signed graphs Ġ and Ġ_U share the same spectra.
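As a small numerical illustration of this switching invariance (not needed for the arguments below, and using an arbitrarily chosen unbalanced triangle), one can check in a few lines of Python that conjugation by S_U leaves the spectrum unchanged:

import numpy as np

# Unbalanced signed triangle: one negative edge between vertices 0 and 1.
A = np.array([[ 0, -1,  1],
              [-1,  0,  1],
              [ 1,  1,  0]], dtype=float)

# Switching at U = {0} reverses the sign of every edge between vertex 0 and the rest.
S_U = np.diag([1.0, -1.0, -1.0])     # s_i = 1 for i in U, s_i = -1 otherwise
A_U = S_U @ A @ S_U                  # adjacency matrix of the switched signed graph

print(np.sort(np.linalg.eigvalsh(A)))    # approximately [-2.  1.  1.]
print(np.sort(np.linalg.eigvalsh(A_U)))  # identical spectrum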
A graph may be regarded as a signed graph with all positive edges. Hence, properties of graphs can naturally be considered in terms of signed graphs. Moreover, signed graphs have some special properties of their own. For example, Huang <cit.> resolved the Sensitivity Conjecture using the spectral properties of signed hypercubes. For the spectral theory of signed graphs, see <cit.> for details, where <cit.> is an excellent survey of some open problems in the spectral theory of signed graphs.
Let ℱ be a family of graphs. A graph G is ℱ-free if G does not contain any graph in ℱ as a subgraph. The classical Turán-type problem asks for the maximum number of edges of an n-vertex ℱ-free graph, called the Turán number.
Let T_r(n) be the complete r-partite graph of order n whose partition sets
have sizes as equal as possible. Turán <cit.> proved that T_r(n) is the unique extremal graph among K_r+1-free graphs, a result regarded as the beginning of extremal graph theory. We refer the reader to <cit.> for more results on Turán numbers.
<cit.>
If G is a K_r+1-free graph of order n, then
e(G)⩽ e(T_r(n)),
with equality holding if and only if G=T_r(n).
In 2007, Nikiforov <cit.> gave a spectral version of the Turán Theorem for the complete graph K_r+1. In
the past few decades, much attention has been paid to spectral Turán-type theorems; see, e.g., <cit.>.
<cit.>
If G is a K_r+1-free graph of order n, then
ρ(G)⩽ρ(T_r(n)),
with equality holding if and only if G=T_r(n).
What about the Turán problem for signed graphs? The discussion started with complete signed subgraphs. Let 𝒦_3^- be the set of the unbalanced K_3. Up to switching equivalence, we have 𝒦_3^-={Ḣ}, where Ḣ is the signed triangle with exactly one negative edge. Wang, Hou, and Li <cit.> determined the Turán number and the spectral Turán number of 𝒦_3^-. The dashed lines indicate negative edges, and ellipses indicate cliques with all positive edges in Figs. 1 and 2.
<cit.>
Let Ġ=(G,σ) be a connected 𝒦_3^--free unbalanced signed graph of order n. Then
e(Ġ)⩽n(n-1)/2-(n-2),
with equality holding if and only if Ġ∼Ġ(s,t), where s+t=n-2 and s, t⩾1 (see Fig. 1).
<cit.>
Let Ġ=(G,σ) be a connected 𝒦_3^--free unbalanced signed graph of order n. Then
ρ(Ġ)⩽1/2(√(n^2-8)+n-4),
with equality holding if and only if Ġ∼Ġ(1,n-3).
Let 𝒦_4^- be the set of the unbalanced K_4. Up to switching equivalence, we have 𝒦_4^-={Ḣ_̇1̇,Ḣ_̇2̇}, where Ḣ_̇1̇ is the signed K_4 with exactly one negative edge and Ḣ_̇2̇ is the signed K_4 with two independent negative edges. If Ġ is a 𝒦_4^--free signed graph of order n, then e(Ġ)⩽ n(n-1)/2, with equality holding if and only if Ġ∼(K_n,+). Therefore, we focus our attention on 𝒦_4^--free unbalanced signed graphs. Let
𝒢={Ġ_̇1̇(a,b), Ġ_̇1̇'̇(1,n-4), Ġ_̇2̇(c,d), Ġ_̇3̇(1,n-5), Ġ_̇4̇(1,n-5), Ġ_̇5̇(1,n-5)},
where a+b=n-3, a, b⩾ 0, and c+d=n-4, c, d⩾ 1 (see Fig. 1 and 2). For any signed graph Ġ∈𝒢, we have Ġ is 𝒦_4^--free unbalanced, and
e(Ġ)=n(n-1)/2-(n-3).
If Ġ is a 𝒦_4^--free signed graph of order n, then ρ(Ġ)⩽ n-1, with equality holding if and only if Ġ∼(K_n,+). Therefore, we focus our attention on 𝒦_4^--free unbalanced signed graphs. In Section 2, for any signed graph Ġ∈𝒢, we will prove that
λ_1(Ġ)⩽ n-2,
with equality holding if and only if Ġ∼Ġ_̇1̇(0,n-3).
Among all unbalanced signed graphs, the Turán number of 𝒦_4^- is determined in Theorem <ref>, and the spectral Turán number of 𝒦_4^- in Theorem <ref>. Their proofs are presented in Section 3 and Section 4, respectively.
Let Ġ=(G,σ) be a 𝒦_4^--free unbalanced signed graph of order n (n⩾7). Then
e(Ġ)⩽n(n-1)/2-(n-3),
with equality holding if and only if Ġ is switching equivalent to a signed graph in 𝒢.
Let Ġ=(G,σ) be a 𝒦_4^--free unbalanced signed graph of order n. Then
ρ(Ġ)⩽ n-2,
with equality holding if and only if Ġ∼Ġ_̇1̇(0,n-3).
§ THE INDICES OF SIGNED GRAPHS IN 𝒢
For any signed graph Ġ in 𝒢, we will show that λ_1(Ġ)⩽ n-2, with equality holding if and only if Ġ∼Ġ_̇1̇(0,n-3). The equitable quotient matrix technique and Cauchy Interlacing Theorem are two main tools in our proof.
Let M be a real symmetric matrix with the following block form
M=(
[ M_11 ⋯ M_1m; ⋮ ⋱ ⋮; M_m1 ⋯ M_mm; ]).
For 1⩽ i,j⩽ m, let q_ij denote the average row sum of M_ij. The matrix Q=(q_ij) is called the quotient matrix of M. Moreover, if for each pair i,j, M_ij has a constant row sum, then Q is called an equitable quotient matrix of M.
<cit.>
Let Q be an equitable quotient matrix of matrix M. Then the matrix M has the following two kinds of eigenvalues.
(1) The eigenvalues coincide with the eigenvalues of Q.
(2) The eigenvalues of M not in Spec(Q) remain unchanged if some scalar multiple of the all-one block J is added to block M_ij for each 1⩽ i,j⩽ m.
Furthermore, if M is nonnegative and irreducible, then λ_1(M)=λ_1(Q).
<cit.>
Let A be a symmetric matrix of order n with eigenvalues λ_1⩾λ_2⩾⋯⩾λ_n and B be a principal submatrix of A of order m with eigenvalues μ_1⩾μ_2⩾⋯⩾μ_m. Then the eigenvalues of B interlace the eigenvalues of A, that is, λ_i⩾μ_i⩾λ_n-m+i for i=1,⋯,m.
The clique number of a graph G, denoted by ω(G), is the maximum order of a clique in G. The balanced clique number of a signed graph Ġ, denoted by ω_b(Ġ), is the maximum order of a balanced clique in Ġ.
<cit.> Let G be a graph of order n. Then
λ_1(G)⩽(1-1/ω(G))n.
<cit.>
Let Ġ be a signed graph of order n. Then
λ_1(Ġ)⩽(1-1/ω_b(Ġ))n.
<cit.>
Every signed graph Ġ contains a balanced spanning subgraph, say Ḣ, which satisfies λ_1(Ġ)⩽λ_1(Ḣ).
<cit.>
There is a switching equivalent graph Ġ_U such that eigenvector x of A(Ġ_U) corresponding to λ_1(Ġ) is non-negative. By the proof of <cit.>, the balanced spanning subgraph Ḣ in Lemma <ref> may be obtained from Ġ_U by removing all negative edges.
If Ġ is a connected signed graph, then λ_1(Ġ)⩽λ_1(G). Moreover, the equality holds if and only if Ġ is balanced.
Let Ġ_U be a signed graph defined in Remark <ref>, and Ḣ be a spanning subgraph of Ġ_U by removing all negative edges and λ_1(Ġ)⩽λ_1(Ḣ). Thus A(Ḣ) is a nonnegative matrix, A(G) is a nonnegative irreducible matrix, and Ḣ is a subgraph of G. By Perron-Frobenius Theorem, we know that λ_1(Ḣ)⩽λ_1(G). So, λ_1(Ġ)⩽λ_1(G).
If λ_1(Ġ)=λ_1(G), then λ_1(Ḣ)= λ_1(G). By Perron-Frobenius Theorem, we know that A(Ḣ)=A(G) and then Ḣ=G and Ġ_U=(G,+), so Ġ is balanced. If Ġ is balanced, then Ġ∼(G,+), and then λ_1(Ġ)=λ_1(G).
Recall that
𝒢={Ġ_̇1̇(a,b), Ġ_̇1̇'̇(1,n-4), Ġ_̇2̇(c,d), Ġ_̇3̇(1,n-5), Ġ_̇4̇(1,n-5), Ġ_̇5̇(1,n-5)},
where a+b=n-3, a, b⩾ 0, and c+d=n-4, c, d⩾ 1.
For any graph Ġ in 𝒢, we have λ_1(Ġ)⩽ n-2, with equality holding if and only if Ġ∼Ġ_̇1̇(0,n-3).
We will complete the proof by showing the following five claims. Firstly, we claim that λ_1(Ġ_̇1̇(0,n-3))=n-2.
λ_1(Ġ_̇1̇(0,n-3))=n-2.
For the signed graph Ġ_̇1̇(0,n-3), we give a vertex partition with V_1={u}, V_2={v}, V_3={w} and V_4=V(Ġ)∖{u,v,w}. Then the adjacency matrix A(Ġ_̇1̇(0,n-3)) and its corresponding equitable quotient matrix Q_1 are as following
A(Ġ_̇1̇(0,n-3))=[ 0 -1 1 0^T; -1 0 1 j^T_n-3; 1 1 0 j^T_n-3; 0 j_n-3 j_n-3 (J-I)_n-3; ] andQ_1=[ 0 -1 1 0; -1 0 1 n-3; 1 1 0 n-3; 0 1 1 n-4; ].
By Lemma <ref> (1), the eigenvalues of Q_1 are also the eigenvalues of A(Ġ_̇1̇(0,n-3)). The characteristic polynomial of Q_1 is
P_Q_1(x)=(x-n+2)(x-1)(x+1)(x+2).
Hence, λ_1(Q_1)=n-2. Add some scalar multiple of the all-one block J to block of A(Ġ_̇1̇(0,n-3)) and A(Ġ_̇1̇(0,n-3)) becomes
A_1=[ 0 0 0 0^T; 0 0 0 0^T; 0 0 0 0^T; 0 0 0 -I_n-3; ].
By Lemma <ref> (2), there are n-4 eigenvalues of Ġ_̇1̇(0,n-3) contained in the spectra of A_1. Since Spec(A_1)={-1^[n-3],0^[3]}, we have λ_1(Ġ_̇1̇(0,n-3))=λ_1(Q_1)=n-2.
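Readers who wish to cross-check such quotient-matrix computations can do so numerically; the following small Python sketch (a sanity check only, not part of the proof) assembles the adjacency matrix of Ġ_̇1̇(0,n-3) for a moderate n and confirms that its largest eigenvalue equals n-2.

import numpy as np

def adjacency_G1_0(n):
    """Adjacency matrix of G_1(0, n-3): vertices u, v, w plus an all-positive clique
    on n-3 vertices; u-v is the unique negative edge, u is joined only to v and w,
    and v, w are joined positively to every clique vertex."""
    A = np.zeros((n, n))
    u, v, w = 0, 1, 2
    A[u, v] = A[v, u] = -1          # the single negative edge
    A[u, w] = A[w, u] = 1
    A[v, w] = A[w, v] = 1
    for i in range(3, n):           # clique vertices
        A[v, i] = A[i, v] = 1
        A[w, i] = A[i, w] = 1
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = 1
    return A

n = 10
eig = np.sort(np.linalg.eigvalsh(adjacency_G1_0(n)))
print(eig[-1], n - 2)               # largest eigenvalue equals n-2 = 8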
Next, we claim that λ_1(Ġ)< n-2 for any graph Ġ∈𝒢∖{Ġ_̇1̇(0,n-3)}.
λ_1(Ġ_̇1̇(a,b))<n-2, 1⩽ a⩽ b.
We prove this claim by showing
λ_1(Ġ_̇1̇(⌊n-3/2⌋,⌈n-3/2⌉))<⋯<λ_1(Ġ_̇1̇(1,n-4))<n-2.
Partition the vertices set of Ġ_̇1̇(a,b) as V_1={u}, V_2={v}, V_3={w}, V_4=N(u)∖{v,w} and V_5=N(v)∖{u,w}. Then the adjacency matrix A(Ġ_̇1̇(a,b)) and its corresponding equitable quotient matrix Q_2(a,b) are as following
A(Ġ_̇1̇(a,b))=[ 0 -1 1 j^T_a 0^T; -1 0 1 0^T j^T_b; 1 1 0 j^T_a j^T_b; j_a 0 j_a (J-I)_a J_b; 0 j_b j_b J_a (J-I)_b; ] andQ_2(a,b)=[ 0 -1 1 a 0; -1 0 1 0 b; 1 1 0 a b; 1 0 1 a-1 b; 0 1 1 a b-1; ].
By Lemma <ref> (1), the eigenvalues of Q_2(a,b) are also the eigenvalues of A(Ġ_̇1̇(a,b)). The characteristic polynomial of Q_2(a,b) is
P_Q_2(a,b)(x,a,b)= x^5-(a+b-2)x^4-(3a+3b+2)x^3+(2ab-a-b-4)x^2
+(5ab+3a+3b)x+2ab+2a+2b+2.
Add some scalar multiple of the all-one block J to block of A(Ġ_̇1̇(a,b)) and A(Ġ_̇1̇(a,b)) becomes
A_2=[ 0 0 0 0^T 0^T; 0 0 0 0^T 0^T; 0 0 0 0^T 0^T; 0 0 0 -I_a 0; 0 0 0 0 -I_b; ].
By Lemma <ref> (2), there are n-5 eigenvalues of Ġ_̇1̇(a,b) contained in the spectra of A_2. Since λ_1(Q_2(a,b))>0 and Spec(A_2)={-1^[n-3],0^[3]}, we have λ_1(Ġ_̇1̇(a,b))=λ_1(Q_2(a,b)).
Noting that
P_Q_2(a,b)(x,a,b)-P_Q_2(a-1,b+1)(x,a-1,b+1)=(b-a+1)(2x+1)(x+2),
then P_Q_2(a,b)(x,a,b)>P_Q_2(a-1,b+1)(x,a-1,b+1) when x>-1/2. Hence, λ_1(Q_2(a,b))<λ_1(Q_2(a-1,b+1)). Thus, λ_1(Ġ_̇1̇(a,b))<λ_1(Ġ_̇1̇(a-1,b+1)), and then
λ_1(Ġ_̇1̇(⌊n-3/2⌋,⌈n-3/2⌉))<⋯<λ_1(Ġ_̇1̇(1,n-4)).
Now we will prove that λ_1(Ġ_̇1̇(1,n-4))<n-2. By (<ref>), we have
P_Q_2(1,n-4)(x,1,n-4) =(x+2)g_2(x),
where g_2(x)=x^4-(n-3)x^3-(n-1)x^2+(3n-11)x+2n-6. Note that, for n⩾ 7,
g_2(n-2)=2n^2-11n+12>0.
To complete the proof, it suffices to prove that λ_2(Ġ_̇1̇(1,n-4))<n-2. In fact, by Lemmas <ref> and <ref>, we have
λ_2(Ġ_̇1̇(1,n-4)) ⩽λ_1(Ġ_̇1̇(1,n-4)-v)
⩽(1-1/ω_b(Ġ_̇1̇(1,n-4)-v))(n-1)
=(n-1)(n-3)/n-2<n-2.
Hence, λ_1(Ġ_̇1̇(1,n-4))<n-2.
λ_1(Ġ_̇1̇'̇(1,n-4))<n-2.
Partition the vertices set of Ġ_̇1̇'̇(1,n-4) as V_1={u}, V_2={v}, V_3={w}, V_4={u_1} and V_5=N(v)∖{u,w}. The corresponding equitable quotient matrix Q_3 of A(Ġ_̇1̇'̇(1,n-4)) is as following
Q_3=[ 0 -1 1 -1 0; -1 0 1 0 n-4; 1 1 0 1 n-4; -1 0 1 0 n-4; 0 1 1 1 n-5; ].
Furthermore, by Lemma <ref>, we know that the λ_1(Ġ_̇1̇'̇(1,n-4))=λ_1(Q_3). The characteristic polynomial of Q_3 is P_Q_3(x)=xg_3(x),
where g_3(x)=x^4 + (5 - n)x^3 + (7 - 3n)x^2 + (n - 5)x + (4n - 12).
Note that, for n⩾ 7,
g_3(n-2)=2n^2-7n+2>0.
To complete the proof, it suffices to prove that λ_2(Ġ_̇1̇'̇(1,n-4))<n-2. In fact, by Lemmas <ref> and <ref>, we have
λ_2(Ġ_̇1̇'̇(1,n-4)) ⩽λ_1(Ġ_̇1̇'̇(1,n-4)-v)
⩽(1-1/ω_b(Ġ_̇1̇'̇(1,n-4)-v))(n-1)
=(n-1)(n-3)/n-2<n-2.
Hence, λ_1(Ġ_̇1̇'̇(1,n-4))<n-2.
λ_1(Ġ_̇2̇(c,d))<n-2, 1⩽ c⩽ d.
We consider the underlying graph G_2(c,d) of Ġ_̇2̇(c,d) and prove this claim by showing
λ_1(G_2(⌊n-4/2⌋,⌈n-4/2⌉))<⋯<λ_1(G_2(1,n-5))<n-2.
Partition the vertices set of G_2(c,d) as V_1={u}, V_2={v}, V_3={w,w_1}, V_4=N(u)∖{v,w,w_1} and V_5=N(v)∖{u,w,w_1}. Then the corresponding equitable quotient matrix Q_4(c,d) of A(G_2(c,d)) is
Q_4(c,d)=[ 0 1 2 c 0; 1 0 2 0 d; 1 1 0 c d; 1 0 2 c-1 d; 0 1 2 c d-1; ].
Since A(G_2(c,d)) is nonnegative and irreducible, we have λ_1(A(G_2(c,d)))=λ_1(Q_4(c,d)) by Lemma <ref>. The characteristic polynomial of Q_4(c,d) is
P_Q_4(c,d)(x,c,d)= x^5+(2-c-d)x^4-(4c+4d+4)x^3+(2cd-2c-2d-14)x^2
+(3cd+5c+5d-13)x+4c+4d-4cd-4.
Noting that
P_Q_4(c,d)(x,c,d)-P_Q_4(c-1,d+1)(x,c-1,d+1)=(d-c+1)(2x^2+3x-4),
then P_Q_4(c-1,d+1)(x,c-1,d+1)<P_Q_4(c,d)(x,c,d) when x> 1. Hence, λ_1(Q_4(c,d))<λ_1(Q_4(c-1,d+1)). Thus, λ_1(G_2(c,d))<λ_1(G_2(c-1,d+1)) and,
λ_1(G_2(⌊n-4/2⌋,⌈n-4/2⌉))<⋯<λ_1(G_2(1,n-5)).
Noting that, by (<ref>)
P_Q_4(1,n-5)(x,1,n-5)=x^5+(6-n)x^4+(12-4n)x^3-16x^2+(8n-48)x,
then, for n⩾ 7, we have
P_Q_4(1,n-5)(n-2,1,n-5)=4n(n-2)(n-6)>0.
To complete the proof, it suffices to prove that λ_2(G_2(1,n-5))<n-2. In fact, by Lemmas <ref> and <ref>, we have
λ_2(G_2(1,n-5)) ⩽λ_1(G_2(1,n-5)-v)
⩽(1-1/ω(G_2(1,n-5)-v))(n-1)
=(n-1)(n-4)/n-3<n-2.
Therefore, we have
λ_1(G_2(1,n-5))<n-2. By Lemma <ref>, we have λ_1(Ġ_̇2̇(c,d))<λ_1(G_2(c,d))<n-2.
λ_1(Ġ_̇i̇(1,n-5))<n-2, i=3,4,5.
Write G_i(1,n-5) for the underlying graph of Ġ_̇i̇(1,n-5), i=3,4,5. Note that G_2(1,n-5)≅ G_i(1,n-5) for i=3,4,5. Hence, by Lemma <ref> we have λ_1(Ġ_̇i̇(1,n-5))<λ_1(G_i(1,n-5))=λ_1(G_2(1,n-5))<n-2.
§ A PROOF OF TURÁN NUMBER OF 𝒦_4^-
Let Ġ be an unbalanced signed graph of order n (n⩾7). For any vertex v in V(Ġ), N_Ġ(v) (or N(v)) is the set of the neighbors of v and N_Ġ[v]=N_Ġ(v)∪{v} (or N[v]). For U⊆ V(G), let Ġ[U] be the subgraph induced by U.
Let Ġ be a 𝒦_4^--free unbalanced signed graph with maximum edges. In fact, Ġ is connected. Otherwise, for some two vertices u and v in distinct components, we add the edge uv to Ġ. Then Ġ+uv is a 𝒦_4^--free unbalanced signed graph with more edges than Ġ, which is a contradiction.
Let Ġ be a 𝒦_4^--free unbalanced signed graph with maximum edges. Then Ġ contains at least one negative cycle, and assume the smallest length of the negative cycles is ℓ. Since each signed graph in 𝒢 is 𝒦_4^--free and unbalanced, from (<ref>) we have
e(Ġ)⩾n(n-1)/2-(n-3).
If ℓ⩾ 4, then Ġ is 𝒦_3^--free. Noting that Ġ is connected, by Theorem <ref>, we have
e(Ġ)⩽n(n-1)/2-(n-2)<n(n-1)/2-(n-3),
which is a contradiction to (<ref>).
Hence, assume that uvwu is an unbalanced K_3 with σ(uv)=-1 and σ(uw)=σ(vw)=+1. Suppose that e(Ġ)=n(n-1)/2-q. Since e(Ġ)⩾n(n-1)/2-(n-3), we have q⩽ n-3. Now the proof will be divided into two cases.
Case 1. |N(u)∩ N(v)|=1.
Let N(u)∖{v,w}={u_1,⋯,u_a} and N(v)∖{u,w}={v_1,⋯,v_b}. Then, a+b⩽ n-3. In this case, we have
e(Ġ) =e(Ġ[V(Ġ)∖{u,v}])+e(Ġ[{u,v},V(Ġ)∖{u,v}])+1
⩽(n-2)(n-3)/2+(a+1)+(b+1)+1
⩽n(n-1)/2-(n-3).
Hence, by (<ref>) we have e(Ġ)=n(n-1)/2-(n-3).
Furthermore, Ġ[V(Ġ)∖{u,v}] is a clique and a+b=n-3, namely each vertex in V(Ġ)∖{u,v,w} is adjacent to u or v. Since the switching operation remains the sign of cycles, any Ġ_U is still 𝒦^-_4-free and unbalanced. We may suppose σ(uu_i)=+1 for any 1⩽ i⩽ a. Otherwise, we do switching operation at some u_i. Similarly, suppose σ(vv_j)=+1 for any 1⩽ j⩽ b.
Without loss of generality, assume that a⩽ b. The assumption n⩾ 7 ensures that b⩾ 2. For any two vertices, say v_i and v_j, in N(v)∖{u,w}, we have Ġ[{v,w,v_i,v_j}]∼ (K_4,+). Hence, σ(wv_i)=σ(v_iv_j)=+1 for 1⩽ i,j⩽ b, and then Ġ[N(v)∖{u}] is a clique with all positive edges.
If a=0, then Ġ∼Ġ_̇1̇(0,n-3). If a=1, then Ġ[{w,u_1,v_i,v_j}]∼ (K_4,+) and σ(u_1w)=σ(u_1v_i) for 1⩽ i⩽ b. If σ(u_1w)=σ(u_1v_i)=+1, then Ġ∼Ġ_̇1̇(1,n-4). If σ(u_1w)=σ(u_1v_i)=-1, then Ġ∼Ġ_̇1̇'̇(1,n-4).
If a⩾ 2, for any two vertices, say u_i and u_j, in N(u)∖{v,w}, we have Ġ[{u,w,u_i,u_j}]∼ (K_4,+). Hence, Ġ[N(u)∖{v}] is a clique with all positive edges. Noting that Ġ[{w,u_i,v_j,v_t}] ∼ (K_4,+), we have σ(u_i v_j)=+1 for 1⩽ i⩽ a and 1⩽ j⩽ b. Therefore, Ġ∼Ġ_1(a, b) with a⩾ 2.
Hence, in this case, Ġ=Ġ_̇1̇(a,b) with a, b⩾ 0, a+b=n-3, or Ġ=Ġ_̇1̇'̇(1,n-4).
Case 2. |N(u)∩ N(v)|⩾ 2.
Let N(u)∩ N(v)={w,w_1,⋯,w_k}, N[u]∖ N[v]={u_1,⋯,u_c}, and N[v]∖ N[u]={v_1,⋯,v_d}. For any 1⩽ i⩽ k, we claim that w is not adjacent to w_i. Otherwise, suppose that w is adjacent to w_1. Since Ġ[{u,v,w}] is an unbalanced K_3, we have Ġ[{u,v,w,w_1}] is an unbalanced K_4, which is a contradiction to the assumption that Ġ is 𝒦_4^--free and unbalanced.
We further claim that each vertex of Ġ is adjacent to the vertex u or v. Otherwise suppose there are r vertices in V(Ġ)∖(N(u)∪ N(v)). Then 3+c+d+k+r=n holds. From the inequality
n-3⩾ q⩾ c+d+k+2r=n-3+r,
we have r=0. Furthermore q=c+d+k=n-3. Then Ġ[V(Ġ)∖{u,v,w}] is a clique and w is adjacent to u_i and v_j for 1⩽ i⩽ c and 1⩽ j⩽ d.
We may suppose σ(uu_i)=σ(vv_j)=σ(vw_t)=+1 for 1⩽ i⩽ c, 1⩽ j⩽ d, and 1⩽ t⩽ k. Without loss of generality, assume that 0⩽ c⩽ d. If c=0, then we may set Ġ^*=Ġ_{u}, namely, Ġ^* is obtained from Ġ by a switching operation at the vertex u. Then in Ġ^*, we have |N(u)∩ N(w)|=1 and σ(u w)=-1. Thus Ġ^* satisfies the condition of Case 1. Then we have
e(Ġ)=e(Ġ^*)=n(n-1)/2-(n-3),
and Ġ∼Ġ^*∼Ġ_̇1̇(a,b) or Ġ_̇1̇'̇(1,n-4). Now suppose c⩾ 1, and we will distinguish two subcases.
Subcase 2.1. |N(u)∩ N(v)|=2.
Since k=1 and n⩾ 7, we have d⩾ 2. For any two vertices, say v_i and v_j, in N[v]∖ N[u], we have Ġ[{v,w,v_i,v_j}]∼ (K_4,+). Hence, σ(wv_i)=+1 for 1⩽ i⩽ d and Ġ[N[v]∖ N[u]] is a clique with all positive edges. Similarly, we have σ(w_1v_i)=+1 for 1⩽ i⩽ d.
If c=1, then d=n-5. Since the clique Ġ[{w,u_1,v_i,v_j}]∼ (K_4,+), we have σ(u_1w)=σ(u_1v_i), and similarly, we get σ(u_1w_1)=σ(u_1v_i). Thus σ(u_1w)=σ(u_1w_1)=σ(u_1v_i) for 1⩽ i⩽ n-5. Suppose σ(uw_1)=+1. If σ(u_1w)=σ(u_1w_1)=σ(u_1v_i)=+1, then Ġ∼Ġ_̇2̇(1,n-5). If σ(u_1w)=σ(u_1w_1)=σ(u_1v_i)=-1, then we do switching operation at {u_1} and Ġ∼Ġ_̇3̇(1,n-5). Suppose σ(uw_1)=-1. If σ(u_1w)=σ(u_1w_1)=σ(u_1v_i)=+1, then Ġ∼Ġ_̇4̇(1,n-5). If σ(u_1w)=σ(u_1w_1)=σ(u_1v_i)=-1, then we do switching operation at {u_1} and Ġ∼Ġ_̇5̇(1,n-5).
Now suppose c⩾ 2. For any two vertices, say u_i and u_j, in N[u]∖ N[v], then Ġ[{u,w,u_i,u_j}] ∼ (K_4,+). Hence, σ(wu_i)=+1 for 1⩽ i⩽ c, and then Ġ[N[u]∖ N[v]] is a clique with all positive edges. Noting that Ġ[{w,u_i,v_j,v_t}]∼(K_4,+), then σ(u_iv_j)=+1 for 1⩽ i⩽ c and 1⩽ j⩽ d. By the fact that the cliques Ġ[{w_1,u_i,v_j,v_t}]∼ (K_4,+) and Ġ[{u,w_1,u_i,u_j}]∼ (K_4,+), we have σ(w_1u_i)=+1 for 1⩽ i⩽ c and σ(uw_1)=+1, respectively. Therefore, Ġ∼Ġ_̇2̇(c,d) in this subcase.
Subcase 2.2. |N(u)∩ N(v)|⩾3.
If c=1, then we may set Ġ^*=Ġ_{u}, namely, Ġ^* is obtained from Ġ by a switching operation at the vertex u. Then in Ġ^*, we have |N(u)∩ N(w)|=2, σ(u w)=-1, {w_1,w_2,⋯,w_k}=N[u]∖ N[w], and {v_1,⋯,v_d}=N[w]∖ N[u]. Thus Ġ^* satisfies the conditions of Subcase 2.1. Hence, Ġ∼Ġ^̇*̇∼Ġ_̇2̇(k,d).
Now suppose c⩾ 2. The fact q=n-3 ensures that Ġ[{w_1,⋯,w_k}] is a clique. Noting that σ(vw_t)=+1, we have σ(uw_t)=-1 for 1⩽ t⩽ k. Otherwise, if σ(uw_1)=+1, then Ġ[{u,v,w_1,w_2}] is an unbalanced K_4, which is a contradiction. Furthermore, Ġ[{w_1,⋯,w_k}] is a clique with all positive edges.
Since Ġ[{v,w,v_i,v_j}]∼ (K_4,+), we have σ(w v_i)=+1 and σ(v_jv_t)=+1 for 1⩽ i,j,t⩽ d. Similarly, we have σ(wu_i)=+1 and σ(u_ju_t)=+1 for 1⩽ i,j,t⩽ c. Since Ġ[{w,u_i,v_j,v_t}]∼(K_4,+), we have σ(u_i v_j)=+1 for 1⩽ i⩽ c and 1⩽ j⩽ d. Considering the cliques Ġ[{u,u_i,w_j,w_t}] and Ġ[{v,v_i,w_j,w_t}], we have σ(v_i w_j)=+1 for 1⩽ i⩽ d and 1⩽ j⩽ k, and σ(u_i w_j)=-1 for 1⩽ i⩽ c and 1⩽ j⩽ k. While Ġ[{w_1,w_2,u_1,v_1}] is an unbalanced K_4, which is a contradiction.
§ A PROOF OF SPECTRAL TURÁN NUMBER OF 𝒦_4^-
Let Ġ be a signed graph of order n. By the table of the spectra of signed graphs with at most six vertices <cit.>, we can check that Theorem <ref> is true for n⩽ 6. Therefore, assume that n⩾ 7. The following celebrated upper bound on ρ(G) is crucial for our proof.
<cit.>
Let G be a graph of order n with the minimum degree δ=δ(G) and e=e(G). Then
ρ(G)⩽δ-1+√(8e-4δ n+(δ+1)^2)/2.
The negation of Ġ (denoted by -Ġ) is obtained by reversing the sign of every edge in Ġ. Obviously, the eigenvalues of -Ġ are obtained by reversing the sign of the eigenvalues of Ġ.
Let Ġ=(G,σ) be a 𝒦_4^--free unbalanced signed graph with maximum spectral radius. Since Ġ_̇1̇(0,n-3) is a 𝒦_4^--free unbalanced signed graph, by Lemma <ref>,
ρ(Ġ)⩾ρ(Ġ_̇1̇(0,n-3))=n-2.
We claim that ρ(Ġ)=λ_1(Ġ). Otherwise ρ(Ġ)=max{λ_1(Ġ),-λ_n(Ġ)}=-λ_n(Ġ). Assume that Ġ_̇1̇=-Ġ. Hence, λ_1(Ġ_̇1̇)=-λ_n(Ġ). Since Ġ is 𝒦_4^--free, we have ω_b(Ġ_̇1̇)⩽ 3. By Lemma <ref>, for n⩾ 7, we have
ρ(Ġ)=-λ_n(Ġ)=λ_1(Ġ_̇1̇)⩽(1-1/ω_b(Ġ_̇1̇))n⩽2/3n<n-2,
which is a contradiction to (<ref>).
We claim that Ġ is connected. Let x=(x_1,x_2,⋯,x_n)^T be a unit eigenvector of A(Ġ) corresponding to λ_1(Ġ). Let vertices u and v be any two vertices belonging to distinct components, then we construct a signed graph Ġ_̇2̇=(G+uv,σ_2) with σ_2(e)=σ(e) when e∈ E(Ġ). If x_ux_v⩾ 0 (resp. x_ux_v< 0), then we take σ_2(uv)=+1 (resp. σ_2(uv)=-1). Hence, by Rayleigh-Ritz Theorem we have
λ_1(Ġ_2)-λ_1(Ġ)⩾x^TA(Ġ_2)x-x^TA(Ġ)x= 2σ_2(uv)x_ux_v⩾0.
Since Ġ_2 is also 𝒦_4^--free and unbalanced, we have λ_1(Ġ_2)⩽λ_1(Ġ). So, λ_1(Ġ_2)=λ_1(Ġ). Furthermore, λ_1(Ġ_2)=x^TA(Ġ_2)x holds, and then A(Ġ_2)x=λ_1(Ġ_2)x. From (<ref>), x_ux_v=0 holds. Without loss of generality, suppose x_u=0. By A(Ġ)x=λ_1(Ġ)x and A(Ġ_2)x=λ_1(Ġ_2)x, we have
λ_1(Ġ)x_u=∑_w∈ N_Ġ(u)σ(uw)x_w=0,
and then
λ_1(Ġ_2)x_u=∑_w∈ N_Ġ_̇2̇(u)σ(uw)x_w+σ_2(uv)x_v=σ_2(uv)x_v=0.
Hence, x_v=0. Then x is a zero vector, which is a contradiction.
We claim that δ(Ġ)⩾ 2. Otherwise there exits a vertex u with d_Ġ(u)=1 and uv∉ E(Ġ) for some vertex v. Then we construct a signed graph Ġ_̇3̇=(G+uv,σ_3). Noting that d_Ġ_̇3̇(u)=2, Ġ_̇3̇ is still 𝒦_4^--free unbalanced. Furthermore, we may obtain a contradiction as above.
If e(Ġ)⩽n(n-1)/2-(n-2), for the underlying graph G, by Theorem <ref> we have
ρ(G) ⩽δ-1+√(8e(G)-4δ n+(δ+1)^2)/2
⩽δ-1+√(8(n(n-1)/2-(n-2))-4δ n+(δ+1)^2)/2
=δ-1+√(4n^2-4(δ+3)n+δ^2+2δ+17)/2
⩽δ-1+√(4n^2-4(δ+3)n+δ^2+6δ+9)/2
=n-2.
By Lemma <ref>, we have ρ(Ġ)=λ_1(Ġ)<ρ(G)⩽ n-2, which is a contradiction to (<ref>). Hence, e(Ġ)⩾n(n-1)/2-(n-3). By Theorem <ref>, we have e(Ġ)=n(n-1)/2-(n-3) and Ġ is switching equivalent to a signed graphs in 𝒢. Noting that ρ(Ġ)⩾ n-2, by Lemma <ref>, we have Ġ∼Ġ_̇1̇(0,n-3), and ρ(Ġ)= n-2.
Conflicts of Interest: The authors declare no conflict of interest.
100
ABH S. Akbari, F. Belardo, F. Heydari, et al., On the largest eigenvalue of signed unicyclic graphs, Linear Algebra Appl. 581 (2019) 145-162.
BCKW F. Belardo, S. Cioabă, J. Koolen, J. Wang, Open problems in the spectral theory of signed graphs, Art Discrete Appl. Math. 1 (2018) P2. 10.
BCST F. C. Bussemaker, P. J. Cameron, J. J. Seidel, et al., Tables of signed graphs, Eut Report 91-WSK-01, Eindhoven, 1991.
BH A. E. Brouwer, W. H. Haemers, Spectra of graphs, Springer, 2011.
BS J. A. Bondy, M. Simonovits, Cycles of even length in graphs, J. Combin. Theory Ser. B 16 (1974) 97-105.
CDT S. Cioabă, D. N. Desai, M. Tait, The spectral radius of graphs with no odd wheels, European J. Combin. 99 (2022) 103420.
CH D. Cartwright, F. Harary, Structural balance: a generalization of Heider's theory, Psychol. Rev. 63 (1956) 277-293.
DKL D. N. Desai, L. Y. Kang, Y. T. Li, et al., Spectral extremal graphs for intersecting cliques, Linear Algebra Appl. 644 (2022) 234-258.
FG Z. Füredi, D. S. Gunderoson, Extremal numbers for odd cycles, Comb. Probab. Comput. 24 (2015) 641-645.
H F. Harary, On the notion of balance of a signed graph, Michigan Math. J. 2 (1953) 143-146.
H1 H. Huang, Induced graphs of the hypercube and a proof of the Sensitivity Conjecture, Ann. of Math. 190 (2019) 949-955.
HSF Y. Hong, J. Shu, K. Fang, A sharp upper bound of the spectral radius of graphs, J. Combin. Theory Ser. B 81 (2001) 177-183.
KP M. R. Kannan, S. Pragada, Signed spectral Turán type theorems, https://arxiv.org/abs/2204.09870.
KS T. Koledin, Z. Stanić, Connected signed graphs of fixed order, size, and number of negative edges with maximal index, Linear and Multilinear Algebra 65 (2017) 2187-2198.
N V. Nikiforov, Some inequalities for the largest eigenvalue of a graph, Combin. Probab. Comput. 11 (2002) 179-189.
N1 V. Nikiforov, Bounds on graph eigenvalues II, Linear Algebra Appl. 427 (2007) 183-189.
S Z. Stanić, Bounding the largest eigenvalue of signed graphs, Linear Algebra Appl. 573 (2019) 80-89.
T P. Turán, On an extremal problem in graph theory, Matematikai és Fizikai Lapok (in Hungarian) 48 (1941) 436-452.
W H. S. Wilf, Spectral bounds for the clique and independence numbers of graphs, J. Combin. Theory Ser. B 40 (1986) 113-117.
WHL D. J. Wang, Y. P. Hou, D. Q. Li, Extremal signed graphs for triangle, https://arxiv.org/abs/2212.11460.
WKX J. Wang, L. Y. Kang, Y. S. Xue, On a conjecture of spectral extremal problems, J. Combin. Theory Ser. B 159 (2023) 20-41.
WYQ W. Wang, Z. D. Yan, J. G. Qian, Eigenvalues and chromatic number of a signed graph, Linear Algebra Appl. 619 (2021) 137-145.
YZ L. T. Yuan, X. D. Zhang, Turán numbers for disjoint paths, J. Graph Theory 98 (2021) 499-524.
Z T. Zaslavsky, Signed graphs, Discrete Appl. Math. 4 (1982) 47-74.
|
http://arxiv.org/abs/2306.07753v1
|
20230613131038
|
Forbidden dark matter annihilation into leptons with full collision terms
|
[
"Amin Aboubrahim",
"Michael Klasen",
"Luca Paolo Wiggering"
] |
hep-ph
|
[
"hep-ph"
] |
a]Amin Aboubrahim,
a]Michael Klasen,
a]and Luca Paolo Wiggering
[a]Institut für Theoretische Physik, Westfälische Wilhelms-Universität Münster,
Wilhelm-Klemm-Straße 9, 48149 Münster, Germany
[email protected]
[email protected]
[email protected]
The standard approach of calculating the relic density of thermally produced dark matter based on the assumption of kinetic equilibrium is known to fail for forbidden dark matter models since only the high momentum tail of the dark matter phase space distribution function contributes significantly to dark matter annihilations. Furthermore, it is known that the computationally less expensive Fokker-Planck approximation for the collision term describing elastic scattering processes between non-relativistic dark matter particles and the Standard Model thermal bath breaks down if both scattering partners are close in mass. This, however, is the defining feature of the forbidden dark matter paradigm. In this paper, we therefore include the full elastic collision term in the full momentum-dependent Boltzmann equation as well as in a set of fluid equations that couple the evolution of the number density and dark matter temperature for a simplified model featuring forbidden dark matter annihilations into muon or tau leptons through a scalar mediator. On the technical side, we perform all angular integrals in the full collision term analytically and take into account the effect of dark matter self-interactions on the relic density. The overall phenomenological outcome is that the updated relic density calculation results in a significant reduction of the experimentally allowed parameter space compared to the traditional approach, which solves only for the abundance. In addition, almost the entire currently viable parameter space can be probed with CMB-S4, next-generation beam-dump experiments or at a future high-luminosity electron-position collider, except for the resonant region where the mediator corresponds to approximately twice the muon or tau mass.
MS-TP-23-15
Forbidden dark matter annihilation into leptons with full collision terms
=========================================================================
§ INTRODUCTION
The existence of dark matter (DM) is confirmed by numerous astrophysical observations <cit.> and the associated energy density has been determined precisely to the value Ω_χ h^2 = 0.12±0.001
through a series of analyses of the Cosmic Microwave Background (CMB) within the ΛCDM model <cit.>. Nevertheless, the nature and intrinsic properties of DM remain unknown. A theoretically appealing explanation is that all of the observed DM consists of a single new elementary particle species with sufficiently strong interactions with the Standard Model (SM) to have established full thermal equilibrium at some point in the early Universe. In this picture, today's DM relic abundance is set once the DM annihilation rate falls below the Hubble expansion rate, so that the dark sector drops out of chemical equilibrium and the DM number density becomes an effective comoving constant (freezes out). Within the usual approach to determine the relic density, the classical Boltzmann equation describing the evolution of the DM phase space distribution function in a Friedmann-Robertson-Walker (FRW) Universe is solved. To simplify the calculation, the majority of numerical DM codes, e.g. <cit.>, assume that kinetic equilibrium holds until long after chemical decoupling, which then allows one to trace only the integral over the DM phase space distribution function (the number density) <cit.>. However, kinetic decoupling might occur much earlier than chemical decoupling even in simple models, given that the DM annihilation cross section exhibits a strong velocity dependence caused by e.g. resonances, thresholds or the Sommerfeld enhancement effect <cit.>. As a consequence, the final value of the relic abundance can be altered by more than an order of magnitude compared to the traditional number density approach <cit.>. In order to adequately model the effect of early kinetic decoupling,
extensions of this “standard” number density Boltzmann equation (nBE) approach have been developed. Possibilities are for example (1) to solve a set of coupled Boltzmann equations (cBE) assuming that deviations from equilibrium are entirely described by the chemical potential and the temperature, or (2) to obtain a numerical solution for the full Boltzmann equation (fBE) at the level of the phase space distribution function. Compared to the nBE treatment, the fluid approximation consists of two coupled Boltzmann equations, one for the number density and one for the velocity dispersion (“the DM temperature”), while keeping the assumption of a thermal DM distribution, but at a temperature different from the photon temperature. It should be noted that the same kind of hydrodynamical formalism is also used to estimate the mass of the smallest dark matter subhalos <cit.> and to model the dynamics of domain walls within cosmological phase transitions <cit.>. Both of these methods are implemented in a publicly available numerical precision code <cit.>. However, the default implementation of the elastic collision term for both approaches relies on a Fokker-Planck (FP) type operator derived under the assumption that the momentum transfer between DM and SM particles in the thermal bath is small compared to the average DM momentum, which is not necessarily the case if the DM particle and the scattering partner are close in mass. This, however, is a defining feature of forbidden or sub-threshold DM, which is a class of models where DM dominantly annihilates into heavier states <cit.>. These annihilations are made possible through the sufficiently large temperatures in the early Universe.
Going beyond the current state of research described above, we design our own Boltzmann equation solver with full elastic collision terms for both approaches and analyze, as an example, the forbidden DM model <cit.> in which a singlet Dirac DM particle couples to SM leptons via a new scalar mediator. We stress again that the analysis is carried out at the level of the phase space density without relying on simplifying approximations of the elastic collision term or on the Fokker-Planck version of the cBE approach alone as in Ref. <cit.>. For this purpose, we perform the angular integrals of the full elastic collision term analytically. This calculation and the associated methodology respond to the increasing interest in full solutions of the momentum-dependent Boltzmann equation not only in the context of the DM relic density <cit.>, but also in many other areas, e.g. the precise computation of the effective number of neutrino species in the early Universe <cit.>, leptogenesis <cit.>, cosmic inflation <cit.> and gravitational waves from first order phase transitions <cit.>. As an alternative to the Boltzmann framework, Langevin simulations have been proposed to deal with non-equilibrium momentum distributions of non-relativistic DM in a FRW background <cit.>.
This paper is organized as follows: in Sec. <ref> we introduce the momentum-dependent Boltzmann equation as well as the fluid equations and describe the numerical solution strategy. The particle content of the forbidden DM model is introduced in Sec. <ref>, along with a detailed comparison of the relic density obtained with the different approaches, including a discussion of the evolution of the phase space distribution function and the effect of DM self-scattering processes, followed by a presentation of the impact on the parameter space of current and projected limits beyond the relic density, coming from CMB observations, beam-dump and collider experiments. Conclusions are given in Sec. <ref>. Self-interaction cross sections as well as form factors for loop-induced processes are provided in App. <ref> and the details of the derivation of the elastic scattering collision term are given in App. <ref>. The latter constitutes one of our main results in this work.
§ FROM THE BOLTZMANN EQUATION TO THE RELIC ABUNDANCE OF THERMAL DM
The standard starting point for the calculation of the DM relic density is the classical Boltzmann equation
∂ f_χ/∂ t - H p ∂ f_χ/∂ p = C_coll[f_χ],
describing the evolution of the phase space distribution function f_χ(p,t) of a DM particle χ with momentum p and corresponding energy E=√(p^2 + m_χ^2) in time t within the FRW cosmological model, where H=ȧ/a denotes the Hubble expansion rate and a(t) the scale factor <cit.>. The collision term C_coll[f_χ] takes into account the loss and gain of DM particles at the momentum p through interactions with the SM and is defined in detail in the next section. The momentum derivative responsible for modeling the cosmological redshift can be absorbed into the phase space density by rewriting the Boltzmann equation in terms of a dimensionless comoving momentum q ∼ p a, thus turning the partial differential equation into an infinite set of coupled ordinary differential equations
d f_χ(q,t)/d t = C_coll[f_χ],
where one has to make the identification f_χ(p,t) = f_χ(q,t). Since the thermal equilibrium distribution functions f_±(E) = 1/(e^E/T± 1) for a Fermi-Dirac (+) or Bose-Einstein (-) distribution, or f_MB(E) = e^-E/T for a Maxwell-Boltzmann distribution,
depend only implicitly on time through the temperature, it is instructive to express the evolution of f_χ not terms of time but through the photon temperature T or alternatively through the dimensionless quantity x=m_0/T, where m_0 is in general some reference scale which we set to the DM mass m_χ.
This is achieved by using the scale factor as a time variable in the intermediate step d f_χ/d a = C_coll[f_χ]/(H a) and replacing the derivative with respect to a(t) afterwards by assuming entropy conservation, d(s a^3)/d x=0, giving
d f_χ/d x = -(d s/d x) 1/(3 s H) C_coll[f_χ].
During radiation domination, i.e. for T ≳100, the entropy density s = h_eff(T) 2 π^2/45 T^3 and the energy density ρ = g_eff(T) π^2/30 T^4 can be safely expressed in terms of the SM effective number of degrees of freedom, h_eff(T) and g_eff(T), for which we use the tabulated values from the lattice QCD calculation by Drees et al. <cit.>.
Applying the Friedmann equation for a flat Universe H^2 = ρ/(3 M^2_Pl) with the reduced Planck mass M_Pl=1/√(8 π G)
gives the final equation
d f_χ(q,x)/d x = √(90/π^2) [g̃(T)/g^1/2_eff(T)] [x M_Pl/m_0^2] C_coll[f_χ],
where we have introduced the shorthand notation
g̃(T) = 1 + 1/3 d ln h_eff(T)/d ln T.
Note that there already occur differences in the DM abundance at the level of Eq. (<ref>) due to e.g. uncertainties in the SM equation of state <cit.> or unknown additional light degrees of freedom before Big Bang nucleosynthesis which change the expansion history of the Universe <cit.>.
Lastly, it is necessary to pick a convention for the proportionality factor relating the comoving momentum q to the physical momentum p. Since it is only possible to consider the ratio of the scale factor relative to its value at some other reference temperature T', we define the comoving momentum to be
q =[h_eff(T')]^-1/3 (p/T') a(T)/a(T') = [h_eff(T)]^-1/3 p/T.
§.§ The collision terms
As a general expression for the collision operator is rather cumbersome, we provide as the only example relevant to this work the collision term for a generic two-particle interaction a b ↔ 12 in a CP-conserving theory
C_coll[f_a] = 1/(2 E_a g_a)∫ dΠ_b dΠ_1 dΠ_2 |ℳ_a b→ 1 2|^2
× (2π)^4 δ^(4)(p_a + p_b - p_1 - p_2) 𝒫(f_a,f_b,f_1,f_2),
where a carries g_a internal degrees of freedom and dΠ_i = d^3 p_i/[(2π)^3 2 E_i] is the Lorentz invariant integration measure. In our convention |ℳ_a b→ 1 2|^2=|ℳ_1 2→ a b|^2 always represents the squared matrix element summed (not averaged) over both initial and final internal degrees of freedom and includes a symmetry factor 1/2 for identical particles in either the initial or final state.
The phase space densities are contained in the population factor
𝒫(f_a,f_b,f_1,f_2) = f_1 f_2 (1 ± f_a)(1 ± f_b)
- f_a f_b (1 ± f_1)(1 ± f_2),
with +(-) accounting for Bose enhancement (Pauli blocking) of the final states. The first term on the right-hand-side is often called the gain term, whereas the second is referred to as the loss term. The associated statistical factors can be dropped for a non-relativistic gas, i.e. if 1 ± f ≈ 1. According to the number of appearances of the unknown distribution function f_χ in the loss and gain terms, the collision term can be split further into a number changing contribution, C_ann, from DM annihilation processes into SM final states, a number conserving part, C_el, from elastic scattering processes of DM with the SM thermal bath responsible for maintaining kinetic equilibrium and a collision term C_self describing self-scattering processes. In the following part, the relevant individual contributions to C_coll are worked out in more detail.
For non-relativistic DM, the Bose enhancement and Pauli blocking factors can be safely neglected, thus implying that in C_ann, SM particles are, due to energy conservation, in very good approximation described by a Boltzmann distribution and that consequently the statistical factors accompanying the SM densities can be dropped as well. These simplifications allow to express the annihilation collision term as
C_ann[f_χ] = g_χ/(2π^2) ∫_0^∞ d p_b p_b^2 ⟨ v_Mølσ_ann⟩_θ[f_MB(p_a) f_MB(p_b) - f_χ(p_a) f_χ(p_b) ],
with the azimuthally averaged annihilation cross section
⟨ v_Mølσ_ann⟩_θ = 1/2∫_-1^1 dcosθ v_Mølσ_ann .
Here the Møller velocity is v_Møl = √(s(s - 4 m^2_χ))/(2 E_a E_b), with θ denoting the angle between the incoming momenta p_a and p_b.
In contrast to the annihilation term, the elastic collision term
C_el[f_χ] = 1/(2 E_a g_a)∫ dΠ_b dΠ_1 dΠ_2 |ℳ_χ j →χ j|^2 (2π)^4 δ^(4)(p_a + p_b - p_1 - p_2)
×[f_χ(p_1) f^(j)_±(p_2) (1 ∓ f^(j)_±(p_b)) - f_χ(p_a) f^(j)_±(p_b)(1 ∓ f^(j)_±(p_2))],
contains the unknown DM distribution function f_χ along with the equilibrium density f^(j)_± of a SM particle j in both the loss and gain terms, significantly increasing the evaluation complexity of C_el
since there remain in general four integrals after imposing four-momentum conservation and using the rotational symmetry around the axis defined by the incoming momentum. Such a general, yet numerically very expensive, parametrization of the collision term for generic two-particle interactions has been put forward by Hannestad et al. <cit.> and has been, to the best of our knowledge, the method of choice so far to evaluate the full collision term in the context of predicting the DM relic abundance, see e.g. <cit.>.
In contrast to the method usually used in the literature,
we provide, as an integral part of this work, a parametrization of the general 2-to-2 collision term along with an integration scheme which together allow us to
perform seven out of the nine momentum integrals analytically for the particular case where the matrix element depends only on one Mandelstam variable whose spatial component we label as k^2 and that can be brought into the form
|ℳ_ab→ 1 2|^2 = c_0 + c_1/Δ - k^2 + c_2/(Δ - k^2)^2,
with k-independent coefficients c_0, c_1, c_2 and another free parameter Δ. In the end, the collision operator contains the integration kernel Π and takes the form
C_coll[f_a] = 1/(128 π^3 E_a p_a) ∫^∞_m_1 d E_1∫^∞_max(m_b,E_1-E_a+m_2) d E_b Π(E_a,E_b,E_1) 𝒫(f_a,f_b,f_1,f_2),
thus requiring the same, or even less, computational effort as the annihilation term in Eq. (<ref>).
More details on how Eq. (<ref>) was derived and an explicit example for the integration kernel are provided in App. <ref>. Let us stress that the application of this technique is not limited to elastic scatterings but can also be applied to e.g. the annihilation operator if quantum statistical effects need to be included.
In contrast to the full expression for C_el, the DM code implements by default the scattering term through the Fokker-Planck type operator <cit.>
C_FP[f_χ] = γ(T)/2[T E ∂_p^2 + (p + 2 T E/p + T p/E) ∂_p + 3] f_χ(p),
which is valid under the assumption of non-relativistic DM and if the momentum transfer is small compared to the DM mass.
Here, the particle physics model enters through the momentum exchange rate which is an integral over the energy ω = √(k^2 + m_j^2) of the bath particle j and reads
γ(T) = 1/(48 π^3 g_χ m_χ^3 T)∫_m_j^∞ dω f_±(ω) [1 ∓ f_±(ω)] (k^4 ⟨ |ℳ_χ j →χ j|^2 ⟩_t ),
while the averaged matrix element is given by
⟨ |ℳ_χ j →χ j|^2 ⟩_t = 1/(8 k^4)∫^0_-4 k^2_cm d t (-t) |ℳ_χ j →χ j|^2,
where the lower integration limit is defined through the center-of-mass momentum k^2_cm = ( s - (m_χ - m_j)^2)(s - (m_χ + m_j)^2)/4 s evaluated at s = m_χ^2 + 2ω m_χ + m_j^2.
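For illustration, the momentum exchange rate γ(T) defined above can be evaluated by nested numerical quadrature once a squared matrix element |ℳ_χ j →χ j|^2(t,s) is supplied. The following Python sketch is schematic (function and variable names are ours) and is not the implementation used for the results of this paper:

import numpy as np
from scipy.integrate import quad

def gamma_T(M2, m_chi, m_j, T, g_chi=2, fermion=True):
    eta = 1.0 if fermion else -1.0           # +1: Fermi-Dirac bath, -1: Bose-Einstein bath

    def outer(omega):
        s = m_chi**2 + 2.0 * omega * m_chi + m_j**2
        kcm2 = (s - (m_chi - m_j)**2) * (s - (m_chi + m_j)**2) / (4.0 * s)
        # k^4 <|M|^2>_t reduces to (1/8) int_{-4 kcm^2}^0 dt (-t) |M|^2(t, s),
        # since the 1/(8 k^4) of the t-average cancels the explicit k^4 factor.
        inner, _ = quad(lambda t: (-t) * M2(t, s), -4.0 * kcm2, 0.0)
        boltz = np.exp(-omega / T)
        f = boltz / (1.0 + eta * boltz)      # stable form of 1/(e^{omega/T} + eta)
        return f * (1.0 - eta * f) * inner / 8.0

    val, _ = quad(outer, m_j, np.inf, limit=200)
    return val / (48.0 * np.pi**3 * g_chi * m_chi**3 * T)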
§.§ The fluid dynamics approach
An alternative to the computationally expensive momentum-dependent Boltzmann equation is to take a hydrodynamic approach and consider averaged quantities
instead. For DM, the most interesting ones are the DM number density
n_χ=g_χ∫d^3 p/(2π)^3 f_χ(p),
and the velocity dispersion or DM “temperature” T_χ which one can define through the second moment of f_χ as
T_χ = ⟨p^2/3 E⟩ = g_χ/n_χ∫d^3 p/(2π)^3 p^2/3 E f_χ(p).
This temperature definition has the advantage of becoming an identity if f_χ has a Maxwellian shape f_χ∼ e^-E/T_χ.
However, in order to close the Boltzmann hierarchy, this approach requires further assumptions on the phase space distribution function. The simplest and traditional approach is to assume that DM is non-relativistic and therefore obeys the shape of a Maxwell-Boltzmann distribution f_χ∼ e^-E/T with the same temperature as the visible sector. Then, the Boltzmann equation for the number density is obtained by integrating Eq. (<ref>) over g_χ[3]p/(2π)^3 so that only the contribution from the number changing annihilation collision term survives and one arrives at the often quoted number density Boltzmann equation
n_χt + 3 H n_χ = ⟨σ v⟩_T ((n_χ^eq)^2 - n_χ^2),
with n_χ^eq = g_χ m_χ^3 K_2(x)/(2 π^2 x) being the equilibrium number density for a vanishing chemical potential μ = 0. The thermally averaged DM annihilation cross section into SM particles can be stated in terms of a single integral over the collision energy <cit.>
⟨σ v ⟩_T = 1/(8 m_χ^4 T K_2^2(m_χ/T))∫_4 m_χ^2^∞ d s σ_χχ→SM(s)√(s)(s - 4 m_χ^2) K_1(√(s)/T),
where K_1 and K_2 are the modified Bessel functions of the second kind and degrees one and
two, respectively. If elastic scattering processes between the dark sector and the SM are not efficient enough to keep both sectors in kinetic equilibrium, one can open up the second moment of the Boltzmann equation and treat it as an equation for the DM temperature T_χ. In the resulting system of equations
1/Y_χ d Y_χ/d x = s Y_χ/x H̃[(Y_χ^eq)^2/Y_χ^2⟨σ v ⟩_T - ⟨σ v ⟩_T_χ],
1/y d y/d x = ⟨ C_el⟩_2/x H̃ + s Y_χ/x H̃[⟨σ v⟩_T_χ - ⟨σ v ⟩_2,T_χ] + s Y_χ/x H̃(Y_χ^eq)^2/Y_χ^2[ y_eq/y⟨σ v ⟩_2,T - ⟨σ v ⟩_T ] + g̃/x ⟨ p^4/E^3 ⟩_T_χ/3 T_χ,
the number density is expressed through the yield Y_χ = n_χ/s and the temperature through the dimensionless version y = T_χ m_χ s^-2/3 of the second momentum moment with y_eq = m_χ T s^-2/3 and H̃ = H/g̃.
The temperature subscript on the (thermal) averages indicates whether the SM or DM distribution is used to perform the average while a `2' as subscript refers to the additional appearance of p^2/3E in the averaging process. For a more detailed discussion of the cBE approach as well as the precise definition of ⟨σ v ⟩_2 and ⟨ p^4/E^3 ⟩_T_χ, we refer to Refs. <cit.>. Let us close this brief summary of the fluid equations by highlighting that the elastic collision term does not drop out for the second moment in contrast to the zeroth moment but enters through the average
⟨ C_el⟩_2 = g_χ/(n_χ T_χ)∫d^3 p/(2π)^3 p^2/3 E C_el,
which simplifies to ⟨ C_el⟩_2 →γ(T)(y_eq/y - 1) for the Fokker-Planck approximation in Eq. (<ref>), where we have neglected the additional relativistic ⟨ p^4/E^3 ⟩_T_χ correction from C_FP.
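For illustration, the thermally averaged cross section ⟨σ v ⟩_T defined above can be evaluated with a single numerical quadrature; the following self-contained Python sketch (not the code used for our results) employs exponentially scaled Bessel functions to avoid numerical over- and underflow:

import numpy as np
from scipy.integrate import quad
from scipy.special import kve   # exponentially scaled modified Bessel functions K_nu

def sigma_v_thermal(sigma_of_s, m_chi, T):
    # <sigma v>_T = 1/(8 m^4 T K_2(m/T)^2) int_{4m^2}^inf ds sigma(s) sqrt(s) (s - 4 m^2) K_1(sqrt(s)/T)
    x = m_chi / T

    def integrand(s):
        rs = np.sqrt(s)
        # K_1(rs/T)/K_2(x)^2 rewritten via K_n(z) = kve(n, z) e^{-z}; the explicit
        # exponent 2x - rs/T is <= 0 for s >= 4 m^2, so no overflow occurs.
        ratio = kve(1, rs / T) / kve(2, x)**2 * np.exp(2.0 * x - rs / T)
        return sigma_of_s(s) * rs * (s - 4.0 * m_chi**2) * ratio

    val, _ = quad(integrand, 4.0 * m_chi**2, np.inf, limit=200)
    return val / (8.0 * m_chi**4 * T)

# Toy usage with a constant cross section in arbitrary units, just to exercise the integral.
print(sigma_v_thermal(lambda s: 1.0, m_chi=100.0, T=5.0))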
In all cases, today's DM relic density is determined from the DM number density
through
Ω_χ h^2=κ n_χ m_χ/(ρ_c/h^2) h_eff(T_0)/h_eff(T_∞) (T_0/T_∞)^3,
with κ=1(2) if χ is self-conjugate (non-self-conjugate), today's photon temperature T_0, the photon temperature T_∞ long after freeze-out and the critical density ρ_c.
§.§ Discretization technique and numerical strategy
To solve Eq. (<ref>) for the phase space distribution numerically, we restrict the comoving momenta to lie in the range 10^-2≤ q≤ 10^2 and discretize this range into N=200 points q_1, …, q_N, since we found that this range, together with this number of momentum slices, allows us to accurately solve the full Boltzmann equation for a wide range of DM masses.
Inspired by <cit.> and <cit.>, we choose to discretize the comoving momentum space according to the Gauss–Laguerre (GL) quadrature formula, a method designed for integrals of the type
∫_0^∞ x^α e^-b x f(x) d x≈∑_i=0^N-1 w^(α,b)_i f(x_i).
This quadrature rule is also valid for the two-dimensional energy or momentum integrals appearing in the collision term since the integrand is, through the phase space distribution functions, exponentially suppressed in both integration variables.
A suitable, yet arbitrary, choice is to generate the weights w^(α,b)_i for the parameters α=0 and b=1/2, which is e.g. possible with the GNU Scientific Library. As the distribution functions we encounter are not exactly proportional to e^-b x, we absorb a factor e^b x into f(x).
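The rescaling from standard Gauss-Laguerre nodes and weights to w^(0,b)_i with b=1/2 can be illustrated with a few lines of Python (shown here with numpy in place of the GNU Scientific Library; the small N is for the toy check only, whereas our solver uses N=200):

import numpy as np
from numpy.polynomial.laguerre import laggauss

N, b = 64, 0.5
u, w = laggauss(N)           # int_0^inf e^{-u} g(u) du ~ sum_i w_i g(u_i)
x_nodes = u / b              # nodes for the weight function e^{-b x}
weights = w / b              # w_i^{(0,b)}: int_0^inf e^{-b x} f(x) dx ~ sum_i weights_i f(x_nodes_i)

# Toy check: int_0^inf x^2 e^{-x} dx = 2; the residual exponential e^{(b-1)x} relative
# to the e^{-b x} weight is absorbed into f(x), as described above.
f = lambda x: x**2 * np.exp((b - 1.0) * x)
print(np.sum(weights * f(x_nodes)))          # ~2.0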
If, on the other hand, the scattering term is approximated by the Fokker-Planck type operator in Eq. (<ref>), we deviate from the GL prescription and choose a logarithmic spacing instead, as we use sixth-order central difference formulas for the numerical evaluation of the first and second momentum derivatives to achieve a high accuracy, i.e. we take f_χ as a function of log q with
a uniform spacing Δln q = ln(q_i+1/q_i).
For the in total six points outside the solution domain, the conditions f_-2=f_-1=f_0=f_1 as well as f_N=f_N+1=f_N+2=f_N+3 with f_l = f_χ(q_l,t) are used. In addition, the momentum derivatives are computed in log-space, ln f_χ(p,t), to also obtain high accuracy for large momenta, where the distribution function can differ by several orders of magnitude between neighboring points.
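Schematically, this amounts to applying the standard sixth-order central-difference stencils to ln f_χ on the uniform log-q grid with the above boundary prescription, as in the following illustrative Python sketch (array and function names are ours):

import numpy as np

c1 = np.array([-1/60, 3/20, -3/4, 0.0, 3/4, -3/20, 1/60])     # 6th-order first derivative
c2 = np.array([1/90, -3/20, 3/2, -49/18, 3/2, -3/20, 1/90])   # 6th-order second derivative

def log_derivatives(f, dlnq):
    """d(ln f)/d(ln q) and d^2(ln f)/d(ln q)^2 on a uniform log-q grid."""
    lnf = np.log(f)
    # Three ghost points on each side, mimicking f_{-2}=f_{-1}=f_0=f_1 and
    # f_N=f_{N+1}=f_{N+2}=f_{N+3}.
    ext = np.concatenate(([lnf[0]] * 3, lnf, [lnf[-1]] * 3))
    d1 = np.zeros_like(lnf)
    d2 = np.zeros_like(lnf)
    for k in range(7):
        d1 += c1[k] * ext[k:k + lnf.size]
        d2 += c2[k] * ext[k:k + lnf.size]
    return d1 / dlnq, d2 / dlnq**2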
Even though the elastic collision term manifestly conserves the number of particles in the continuum limit, the discretized version can lead to a spurious change of the comoving number density in the initial high-temperature regime. We therefore follow the same prescription as in Ref. <cit.> and assume kinetic equilibrium if the ratio γ(T)/H(T) is larger than 10^5 and solve the nBE instead.
The set of N Boltzmann equations obtained after discretizing Eq. (<ref>) forms a stiff system of differential equations that requires special integration routines to overcome the stiffness difficulty. For this purpose, the solver based on the backward differentiation formula of the library <cit.> is used in our code.
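The structure of the resulting stiff initial value problem is illustrated by the following sketch, which uses the BDF integrator of scipy as a stand-in for the solver library employed in our code, and a toy relaxation term in place of the actual discretized collision integral:

import numpy as np
from scipy.integrate import solve_ivp

N = 200
q = np.geomspace(1e-2, 1e2, N)              # comoving momentum grid
f0 = np.exp(-np.sqrt(1.0 + q**2))           # equilibrium-like initial shape

def rhs(x, f):
    # Toy stand-in for the discretized collision term: a stiff relaxation towards a
    # target shape whose normalization drops as the Universe cools.
    target = f0 * np.exp(-(x - 10.0))
    return -1.0e6 / x**2 * (f - target)

sol = solve_ivp(rhs, (10.0, 100.0), f0, method="BDF", rtol=1e-8, atol=1e-40)
print(sol.y[:5, -1])                        # distribution at x = 100 on the first nodes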
§ LEPTOPHILIC FORBIDDEN DARK MATTER
The forbidden DM model under consideration consists of a Dirac fermion χ as DM with g_χ=2 degrees of freedom, which couples only to SM leptons through a real scalar ϕ as mediator. After electroweak symmetry breaking, the effective Lagrangian reads
ℒ = ℒ_SM + 1/2ϕ (-∂_μ∂^μ - m_ϕ^2) ϕ + χ̅ (i γ^μ∂_μ - m_χ) χ - g^S_ijϕl̅_i l_j - i g^P_ijϕl̅_i γ_5 l_j - g^S_χϕχ̅χ - i g_χ^P ϕχ̅γ_5 χ,
with the flavor indices i,j = e,μ,τ. In the absence of lepton flavor violation, the associated DM annihilation cross section into leptons reads
(σ v_lab)_χχ̅→ ll̅ = [4π√(1 - 4 m_l^2/s)/(s - 2 m^2_χ)] [α^P_χ s + α^S_χ(s-4 m_χ^2)][α^P_ll s + α^S_ll(s-4 m_l^2)]/[(s-m_ϕ^2)^2 + Γ_ϕ^2 m_ϕ^2],
with the laboratory velocity v_lab = [s(s - 4 m_χ^2)]^1/2/(s - 2m_χ^2),
α^S,P_i(i)=(g^S,P_i(i))^2/(4π)
and the decay width of the mediator
Γ_ϕ = 1 /2 ∑_i =χ,l√(m_ϕ^2-4 m_i^2)[α^S_i(i)(1-4 m_i^2/m_ϕ^2) + α^P_i(i)],
where the sum runs over all kinematically accessible decay channels.
The phenomenology of this model in the sub-threshold regime and its relic density within the standard kinetic equilibrium approach for a small mass difference δ = (m_l - m_χ)/m_χ have already been explored extensively in the work by D’Agnolo et al. <cit.>.
Given that Eq. (<ref>) seems to contradict invariance under the electroweak gauge group, it is necessary to find an ultraviolet-complete alternative. This is e.g. possible by extending the two-Higgs-doublet model by an SU(2)_L scalar singlet which then couples to the new fermion acting as DM <cit.>. With a focus on the early kinetic decoupling effect, the study of the effective theory has been repeated in Ref. <cit.>, where the relic density was computed with the cBE treatment based on the small momentum transfer approximation, resulting in a reduction of the viable parameter space. However, as already pointed out by Refs. <cit.>, the FP approximation breaks down if the particles scattering off of each other are very close in mass, as is the case in forbidden scenarios. For this reason, the main objective of this work is not only to include the full elastic collision term in the set of coupled Boltzmann equations, but also to go beyond the cBE treatment and investigate the relic density as a solution of the full Boltzmann equation at the level of the phase space density (fBE).
Similar to the original work <cit.>, we fix the DM couplings throughout this paper to the values α_χ^S=0 and α_χ^P=0.1, defined at the energy scale corresponding to m_ϕ. The scalar coupling is set to zero since expanding the squared CM energy in the laboratory velocity, s = 4 m^2_χ (1 + v^2_lab/4) + O(v^4_lab), shows that only the coupling α_χ^P contributes an s-wave component at tree level. In addition, this choice ensures that both couplings remain perturbative (α^S_χ, α_χ^P < 1) below 1 for the investigated mediator mass range 0.1≤ m_ϕ≤100, as dictated by the one-loop renormalization group equation <cit.>
μα_χ^S/Pμ = 5/2πα_χ^S/P(α_χ^S/P + α_χ^P/S) .
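A quick way to check perturbativity is to integrate this one-loop running numerically. The sketch below solves the coupled equations for α_χ^S and α_χ^P in t = ln μ; the tolerances and the example scales in the comment are placeholders, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def run_couplings(alpha_S, alpha_P, mu_start, mu_end):
    """One-loop running of (alpha_S, alpha_P) from mu_start to mu_end (Eq. above)."""
    def rge(t, a):                      # t = ln(mu)
        aS, aP = a
        pref = 5.0 / (2.0 * np.pi)
        return [pref * aS * (aS + aP), pref * aP * (aP + aS)]
    sol = solve_ivp(rge, [np.log(mu_start), np.log(mu_end)], [alpha_S, alpha_P],
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# Example (placeholder scales): check the DM couplings defined at mu = m_phi
# aS, aP = run_couplings(0.0, 0.1, mu_start=1.0, mu_end=1e3)
```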
We also do not include ϕ-coannihilations in our calculation, which is justified since we restrict our investigation to the mass region r̃ = m_ϕ/m_l ≥ 1.25. We have therefore already set the cubic (quartic) ϕ^3 (ϕ^4) interaction terms explicitly to zero. We also neglect scatterings off the mediator, as the mass splitting leads to the same exponential suppression in the associated momentum transfer rate, γ_ϕ(x) ∼ e^- x r̃, as in the averaged coannihilation cross section. However, as this line of reasoning only holds assuming that i) the mediator is in equilibrium with the SM and ii) there are no large hierarchies between the DM and lepton couplings or the mediator self-couplings which could compensate the suppression at early times, it would be interesting to drop these assumptions and include the mediator in the network of momentum-dependent Boltzmann equations.
§.§ The relic density beyond kinetic equilibrium
We begin the numerical analysis by comparing the DM relic density obtained from solving the Boltzmann equations (cBE and fBE) using the FP approximation on one hand and the full collision term on the other.
In order to make a direct comparison with Ref. <cit.> possible, our results for the four benchmarks considered in Ref. <cit.> are displayed in Fig. <ref>.
The relic densities obtained with the cBE and fBE approaches for the two different implementations of the elastic collision term, relative to the number density result, are shown in Fig. <ref> as a function of the inverse DM mass. For every m_χ, the lepton coupling α^S_ll is fixed by the requirement that the nBE result matches the experimentally observed relic density. Curves in bold colors correspond to the use of the full collision term, whereas curves with a lighter hue correspond to the use of the FP approximation. The upper panels deal with annihilation into muons, while the lower ones display the results for tau leptons in the final state. Furthermore, the left panels show the results for α^P_ll=0, while the right panels are for α^S_ll=α^P_ll. We find in this model that the early kinetic decoupling effect increases the relic density significantly and that the small-momentum approximation in general implies a smaller contribution from the elastic scattering collision term, which then leads to an overestimation of the relic density. Put differently, the full collision term keeps the DM distribution closer to equilibrium and, as a consequence, moves the associated relic density towards the number density result. This can be clearly seen in Fig. <ref> by comparing darker and lighter hues of same-color curves. It is worth mentioning that the FP approximation tracks the full collision solution for a significant range of values of r in most of the cases. However, the two solutions significantly depart from each other for larger r values, with the full collision solution in the fBE case approaching the nBE solution. For r=1, DM still does not maintain kinetic equilibrium for this particular benchmark since, for decreasing r, 2 m_χ moves towards the mediator mass and therefore the distribution function starts to be affected by the resonance.
In order to not only restrict our discussion to mediator masses far away from the di-muon or di-tau resonance, we display in Fig. <ref> the same quantities but with a mediator corresponding to approximately twice the muon (upper panels) or tau mass (lower panels). As a result, the DM distribution function is in this case not only driven out of equilibrium through the forbidden nature of the model, but also significantly through the resonance.
This enhancement can be clearly seen in the increased deviation from the nBE approach compared to the previous case, where the mediator mass is much larger than the corresponding lepton mass. For example, the relic density from the fBE for the benchmark with a non-zero pseudoscalar coupling is more than 50 times that predicted using the nBE and ≳ 2.5 times the cBE result. It should be noted that the ratio of the relic densities in Figs. <ref> and <ref> is larger for the cases when α_ll^S=α_ll^P than it is when α_ll^P=0. By examining Eq. (<ref>), we see that this is due to the fact that the s-wave component of the annihilation cross section is suppressed for α_ll^P=0 through the small mass difference between leptons and DM, which is not the case for α_ll^S=α_ll^P. On a qualitative level, this effect can also be understood through the difference between the phase space distribution functions obtained with the fBE and the corresponding equilibrium distribution functions f_Eq^fBE ∼ e^-E/T_χ, which is shown in the lowermost panels of Fig. <ref> for five values of x for both choices of α^P_ττ and r=1.15. Here, the DM “temperature” T_χ is computed from f_χ itself as defined in Eq. (<ref>). It becomes clear that the strong velocity dependence of the annihilation cross section leads to dips in the distribution function which cause the departure from equilibrium and are more pronounced for the α_ll^S=α_ll^P case than for α_ll^P=0, in particular near freeze-out at x∼ 20. The deeper the dip, the more inefficient the DM annihilation becomes due to the smaller occupation number at the relevant momenta. This correlation then leads to a higher DM relic density, as can be seen in Figs. <ref> and <ref>. From the upper panels of Fig. <ref>, showing the evolution of the distribution function for five different values of x, it becomes clear that the distributions obtained from the cBE and the fBE evolve in general to lower momenta compared to the equilibrium distribution evaluated at the photon temperature, as the high-momentum DM particles get depleted in order to overcome the annihilation threshold. It is also noticeable that the solution of the fBE departs from f_Eq^fBE at x∼ 20. While this deviation remains strong for the α^P_ττ=α^S_ττ case shown in the right panel, the deviation almost vanishes at later times for the α^P_ττ=0 case in the left panel.
This difference can be understood by comparing the momentum transfer rate γ(T) with the Hubble rate H(T). Both are displayed in Fig. <ref> as a function of x with the left panel corresponding to the case α^P_ττ=0 while the right panel shows the case α^P_ττ=α^S_ττ. It is clear that for α^P_ττ=0, γ(T) is still comparable to H(T) around x∼ 24 which means that elastic scattering is still effective enough to keep the phase space distributions from deviating too far away from equilibrium as we have seen in the left panel of Fig. <ref>. However, for α^P_ττ=α^S_ττ, γ(T) is already much smaller than H(T) at x∼ 24 which suggests that elastic scatterings are not strong enough leading to a larger deviation away from equilibrium. Fig. <ref> also shows the evolution of the DM yield in the nBE, cBE and fBE approaches. Notice how for cBE and fBE, DM freeze-out happens earlier than for the nBE case leading to a higher relic density. The effect of the phase space distribution shifting to lower momenta can also be seen in the lower panel of Fig. <ref> where we plot the DM temperature T_χ. A clear drop away from the photon temperature is visible just before x∼ 20, where DM starts cooling faster than the SM bath. Note that the splitting between the cBE and fBE predictions can be attributed to the phase space distributions shown in Fig. <ref>.
Lastly, the question remains whether for this DM model with the particular choice α^S_χ = 0 and α^P_χ = 0.1 for the DM couplings, the cBE or fBE approach gives a more correct description of the freeze-out process. Since the cBE framework becomes exact under the assumption of maximally efficient DM self-interactions, the associated rate given by
Γ_self = 2 n_χ⟨σ_self v ⟩ = 2 n_χ∫ d^3p_a∫ d^3p_b σ_self v_Møl f_χ(p_a) f_χ(p_b)/∫ d^3p_a∫ d^3p_b f_χ(p_a) f_χ(p_b)
= g^2_χ/(2π)^4 n_χ∫^∞_m_χ dE_a f_χ(p_a) ∫^∞_m_χ dE_b f_χ(p_b) ∫^s_+_s_- ds√(s(s - 4 m^2_χ))σ_self(s),
is shown in Fig. <ref> for the same tau benchmarks used for the illustration of the evolution of the distribution functions. In Eq. (<ref>), we have the integration limits s_± = (E_a + E_b)^2 - (p_a ∓ p_b)^2, the self-scattering cross section σ_self = σ_χχ̅→χχ̅ + σ_χχ→χχ, which is for reference explicitly given in App. <ref>, and the factor two in front of the number density accounting for particles and antiparticles. For the nBE and cBE approaches, the average in the self-interaction rate reduces to the single integral over the collision energy defined in Eq. (<ref>). It is clear from Fig. <ref> that even long after freeze-out, the self-scattering rate remains more than five orders of magnitude above the Hubble rate, meaning that the cBE treatment gives in this case a more correct depiction of the DM thermodynamics. In fact, the self-interactions are so strong that adding the self-scattering collision term C_self to the right-hand side of the momentum-dependent Boltzmann equation makes a numerical solution of the fBE impossible without either increasing the number of momentum bins, compromising on accuracy, or giving up a particle-number-conserving implementation of C_self. Given that the self-interactions are this strong, we have also checked that they are not in conflict with astrophysical bounds <cit.>.
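For reference, the thermally averaged self-scattering rate of Eq. (<ref>) can be estimated by nested quadrature as sketched below. The sketch assumes an isotropic f_χ(|p|), natural units, and a hypothetical energy cutoff beyond which the distribution is negligible; it illustrates the structure of the integral rather than a production implementation.

```python
import numpy as np
from scipy.integrate import quad

def gamma_self(f_chi, sigma_self, m_chi, n_chi, g_chi=2, E_max=None):
    """Self-scattering rate Gamma_self from the triple integral above."""
    if E_max is None:
        E_max = 50.0 * m_chi  # hypothetical cutoff; f_chi must be negligible beyond it

    def inner_s(Ea, Eb):
        pa, pb = np.sqrt(Ea**2 - m_chi**2), np.sqrt(Eb**2 - m_chi**2)
        s_lo = max((Ea + Eb)**2 - (pa + pb)**2, 4*m_chi**2)  # guard against round-off
        s_hi = (Ea + Eb)**2 - (pa - pb)**2
        val, _ = quad(lambda s: np.sqrt(s*(s - 4*m_chi**2)) * sigma_self(s), s_lo, s_hi)
        return val

    def inner_b(Ea):
        val, _ = quad(lambda Eb: f_chi(np.sqrt(Eb**2 - m_chi**2)) * inner_s(Ea, Eb),
                      m_chi, E_max)
        return val

    outer, _ = quad(lambda Ea: f_chi(np.sqrt(Ea**2 - m_chi**2)) * inner_b(Ea),
                    m_chi, E_max)
    return g_chi**2 / ((2*np.pi)**4 * n_chi) * outer
```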
§.§ Exclusion limits
In this section, we update the exclusion limits on forbidden DM annihilations into SM leptons based on our new calculation of the relic density.
To compare our results with those of Ref. <cit.>, the numerical analysis is performed in the plane spanned by the mediator mass and the scalar lepton coupling. The results for annihilations into μ^+μ^- are shown in Fig. <ref> while those for annihilations into τ^+τ^- are presented in Fig. <ref>. For both channels, the pseudoscalar coupling is set to zero, α_ll^P=0, on the left panel and equal to the scalar coupling, α_ll^P=α_ll^S, on the right. As our analysis in the previous section shows that the small-momentum approximation still holds for significant ranges of DM masses in the forbidden regime, the calculation of the relic density in the following analysis is still based on the FP approximation with the advantage of a major reduction in run time. Even though we assume the cBE to give a more correct result, we still show for completeness the results from the fBE approach in Figs. <ref> and <ref>. Deriving exclusion limits based on both techniques also serves as a means to gauge the effect from DM self-scatterings.
For every given pair of parameters (m_ϕ, α^S_ll) in Figs. <ref> and <ref>, the DM mass is fixed through the requirement that the relic density corresponds to the experimentally observed value Ω_χ h^2 = 0.120. This calculation can be carried out efficiently using e.g. a logarithmically spaced bisection search. The thick gray, green and black curves corresponding to the nBE, cBE and fBE calculations, respectively, indicate the boundary δ=0 between the forbidden and non-forbidden regions. In the region above those curves the DM mass is smaller than the corresponding lepton mass and larger below. The boundary from the cBE and fBE approaches is almost identical and both approaches leave less parameter space with δ>0 compared to the nBE. The allowed white region is further constrained by terrestrial and space-based experiments as we discuss next.
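The logarithmically spaced bisection mentioned above can be sketched as follows, assuming a user-supplied function that returns Ω_χ h^2 for a given m_χ (all other parameters fixed) and assuming that the relic density decreases monotonically with m_χ over the bracketing interval.

```python
import numpy as np

def match_relic_density(omega_h2_of_mchi, m_lo, m_hi, target=0.120, tol=1e-4):
    """Find the DM mass reproducing the observed relic density by bisection in log(m_chi)."""
    lo, hi = np.log(m_lo), np.log(m_hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if omega_h2_of_mchi(np.exp(mid)) > target:
            lo = mid   # too much relic density -> larger m_chi (assuming Omega falls with m_chi)
        else:
            hi = mid
    return np.exp(0.5 * (lo + hi))
```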
For both channels, the most important experimental constraint comes from DM annihilations into electromagnetically charged particles during the recombination epoch. The sensitivity of CMB anisotropies to such energy injection processes into the intergalactic medium (IGM) allows Planck to place the upper limit f_eff⟨σ v ⟩/m_χ≤ 3.5×10^-28 cm^3/s/GeV on the annihilation parameter, where the efficiency factor f_eff describes the fraction of energy that is released in the annihilation and then transferred to the IGM <cit.>. It should be noted that this limit is only valid for an s-wave dominated and therefore almost constant annihilation cross section, i.e. ⟨σ v ⟩≃σ v_lab≃const <cit.>.
For the numerical evaluation of the efficiency factor we use the tabulated f_eff curves for DM masses below 5 provided in Ref. <cit.>. As a consequence of these robust energy injection constraints, the non-forbidden region where δ≤ 0 and direct annihilations into leptons become possible is immediately ruled out. As already mentioned, the corresponding areas in Figs. <ref> and <ref> are marked in gray for the nBE approach and in green for the cBE treatment. As the fBE result overlaps almost everywhere with the cBE one, only the boundary δ^fBE=0 is marked in black. In the forbidden region defined through δ>0, loop induced annihilations into photons can be sufficiently large to distort CMB anisotropies at a measurable level, even though being too small to have to be included in the relic density calculation. We obtain for the associated annihilation cross section the expression
(σ v_lab)_χχ̅→γγ = ∑_l=e,μ,τ 4 α_em^2 m_l^2 /π (s - 2 m_χ^2) α_χ^P s+α^S_χ(s-4 m_χ^2)/(s-m^2_ϕ)^2 + m_ϕ^2 Γ_ϕ^2
×{α^S_ll|1+(1 - τ_l^-2) arcsin^2(τ_l) |^2 + α_ll^P | arcsin^2(τ_l) |^2},
with τ_l = √(s)/2 m_l and the fine-structure constant α_em. The forbidden region ruled out in this way based on the nBE calculation is shown in orange and in blue for the fBE treatment. Only the boundary of the cBE limit is marked in violet, as it is almost everywhere identical to the fBE result. These energy injection limits have been determined in Refs. <cit.> based on DM masses obtained with the nBE approach. Recalculating them based on the cBE and fBE approaches shows that the limits from annihilations into photons are actually more stringent and exclude a larger region of the parameter space. Included in red are also fBE projections from the CMB-S4 experiment <cit.>, which is expected to improve the limit on DM annihilation by a factor of two (dotted) to three (dashed).
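For completeness, a sketch of the loop-induced cross section above is given below. The loop function, which the extracted equation renders only as a squared function of τ_l, is taken here to be the standard arcsin²-type triangle function together with its usual analytic continuation above threshold; this identification, the function names, and the units (natural units throughout) are assumptions of the sketch.

```python
import numpy as np

def loop_f(tau):
    """arcsin^2-type triangle loop function, continued above threshold (assumption)."""
    if tau <= 1.0:
        return np.arcsin(tau)**2 + 0j
    beta = np.sqrt(1.0 - 1.0/tau**2)
    return -0.25 * (np.log((1.0 + beta) / (1.0 - beta)) - 1j*np.pi)**2

def sigma_v_gamma_gamma(s, m_chi, m_phi, gam_phi, aS_chi, aP_chi, lepton_couplings,
                        alpha_em=1.0/137.036):
    """(sigma v_lab) for chi chibar -> gamma gamma; lepton_couplings = {m_l: (aS_ll, aP_ll)}."""
    prop = (s - m_phi**2)**2 + m_phi**2 * gam_phi**2
    chi_part = (aP_chi*s + aS_chi*(s - 4*m_chi**2)) / prop
    total = 0.0
    for m_l, (aS_ll, aP_ll) in lepton_couplings.items():
        tau = np.sqrt(s) / (2.0 * m_l)
        f = loop_f(tau)
        lep_part = aS_ll*abs(1.0 + (1.0 - tau**-2)*f)**2 + aP_ll*abs(f)**2
        total += 4.0*alpha_em**2*m_l**2 / (np.pi*(s - 2*m_chi**2)) * chi_part * lep_part
    return total
```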
For the muon channel of Fig. <ref>, existing constraints from the electron beam-dump experiment E137 <cit.> on the light dark scalar are displayed in gray. In addition, projected limits from the experiments BDX <cit.>, M^3 <cit.> and NA64-μ <cit.> are shown. Here, we include, for the first time, the highest sensitivity limit from NA64-μ with 10^13 muons on target (MOT), which is the goal the M^3 experiment plans to achieve in phase two after starting out with a comparably smaller 10^10 MOT in phase one. The exclusion limits from beam-dump experiments for the benchmark with a non-vanishing pseudoscalar coupling are recast from the associated limit with α^P_μμ=0 through the replacement α^S_μμ→α^S_μμ/2 for every m_ϕ. This is motivated by the fact that, according to the improved Weizsäcker-Williams approximation <cit.>,
the dominant contribution of the radiative production cross section Nμ→ Nμϕ of the scalar in the presence of a nucleon N in the target material is proportional to α^S_μμ.
For the tau channel of Fig. <ref>, one existing constraint comes from the LEP measurement of the partial Z decay width into tau leptons Γ_Z→τ^+τ^- = 84.08 ± 0.22 MeV <cit.>, since it is sensitive to the process Z→τ^+τ^- ϕ followed by a subsequent invisible decay of the new scalar into DM. To obtain a limit at the 2σ confidence level from this measurement, the new contribution to the Z decay width is required to be less than two times the uncertainty of the measured Z →τ^+τ^- width, i.e. the upper limit Γ_Z→τ^+τ^-ϕ BR(ϕ→χχ̅) < 0.44 MeV is applied. For the purpose of constraining annihilations into tau leptons, we make here and in the following the simplifying assumption that m_χ≈ m_τ within the computation of the branching ratio BR(ϕ→χχ̅). Z decays can also be probed with a better sensitivity at future electron-positron colliders like FCC-ee <cit.> or CEPC <cit.>, since these come with Giga-Z (Tera-Z) options, which mean adjusting the beam energy to the Z pole and producing 10^9 (10^12) Z bosons. Assuming similar efficiencies and acceptances of the future experiments for tau leptons as for electrons and muons, these colliders can probe the exotic Z decay branching ratio BR(Z →τ^- τ^+ E̸) down to approximately 10^-8 (10^-9.5) for a Giga-Z factory (Tera-Z factory) <cit.>.
Another relevant experimental constraint comes from mono-photon searches at BaBar <cit.>, i.e. searches for a highly energetic monochromatic photon in association with missing energy. There are also projected limits for the same kind of search at BaBar's successor experiment Belle II <cit.> for integrated luminosities of 20 and 50. Both present and future constraints are recast from mono-photon bounds on axion-like particles (ALPs) <cit.>. To do so, we
first consider the production cross section of an ALP a in association with a photon given by
σ_e^+e^-→ a γ = α_em g^2_aγγ/24 s^7/2 (s-m_a^2)^3 (s + 2 m_e^2) (s - 4 m_e^2)^-1/2,
where g_aγγ is the ALP-photon coupling and m_a is the ALP mass. The same production cross section in our model, i.e. for the mediator ϕ instead of an ALP, is
σ_e^+e^-→ϕγ = ∑_l 2 α_em^3 m_l^2 /3π s^7/2s + 2 m_e^2 /s-m_ϕ^2 (s-4 m_e^2)^-1/2( α_ll^S |F_l^S(s)|^2 + α_ll^P |F_l^P(s)|^2),
where the functional form of the form factors F_l^S(q^2) and F_l^P(q^2) due to the lepton loop is defined in App. <ref>. We then recast the limit by solving the equation σ_e^+e^-→ a γ = σ_e^+e^-→ϕγ BR(ϕ→χχ̅) for every m_ϕ at the collision energy √(s) = 10.58 GeV corresponding to the Υ(4S) resonance. Future Z-factories are able to perform the same mono-photon searches and can therefore put constraints on the branching ratio BR(Z→ E̸ γ) <cit.>, which in our model corresponds to an upper limit on the product BR(Z→ϕγ)BR(ϕ→χχ̅). The associated decay width is given by
Γ_Z →ϕγ = ∑_l 3 α_em G_F (g^V_Z,l)^2 m_l^2/√(2)π^3 m_Z ( m_Z^2 - m_ϕ^2) ( α_ll^S |F_l^S(m_Z^2)|^2 + α_ll^P |F_l^P(m_Z^2)|^2),
with the Fermi constant G_F and the vector part g^V_Z,l = -1/4+ sin^2θ_W of the Z-lepton coupling.
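The recasting procedure described above amounts to a one-dimensional root finding in the lepton coupling. A minimal sketch is given below, in which sigma_alp, sigma_phi_gamma and br_inv are hypothetical callables implementing the two production cross sections and the invisible branching ratio quoted in the text; the bracketing interval is an assumption.

```python
import numpy as np
from scipy.optimize import brentq

def recast_monophoton_limit(m_phi, g_lim, sigma_alp, sigma_phi_gamma, br_inv,
                            sqrt_s=10.58):
    """Recast an ALP mono-photon bound g_lim(m_a = m_phi) into a bound on alpha_ll."""
    s = sqrt_s**2
    target = sigma_alp(s, g_lim, m_phi)          # excluded ALP production cross section

    def diff(alpha_ll):
        return sigma_phi_gamma(s, alpha_ll, m_phi) * br_inv(alpha_ll, m_phi) - target

    # assumes diff changes sign between a tiny and an O(1) coupling
    return brentq(diff, 1e-12, 1.0)
```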
The implications of our new calculation for the available parameter space can be seen by comparing our results to the equilibrium results in Ref. <cit.>. A very interesting outcome is that forbidden annihilations into muons for the case of a vanishing pseudoscalar coupling can now be entirely probed with the future CMB-S4 experiment alone[Note that Fig. <ref> differs from the corresponding plot in Fig. 6 (left) of Ref. <cit.> due to a potential error in their calculation of the mediator's width, as a similar result is recovered by keeping Γ_ϕ constant for the whole scan at a value corresponding to one far away from the resonance. In addition, the calculation of the limits on DM annihilation into photons provided in Figs. 6 and 7 of Ref. <cit.> seems to be still based on the masses obtained with the number density treatment.]. Furthermore, there is a significant reduction in the experimentally viable parameter space of the model, with more regions available for the τ^+τ^- final state. Many future experiments will be able to probe the remaining parts of the parameter space, such as NA64-μ and M^3 for the di-muon and the Giga-Z and Tera-Z experiments for the di-tau final state. The inclusion of the latter sensitivity limits is another novel aspect of this work and shows that almost the entire model parameter space can be probed in the near future. We note in passing that the discussed limits are more stringent for the α^P_ll=0 case than they are for the α^P_ll=α^S_ll case.
§ CONCLUSION
We have studied the early kinetic decoupling effect for forbidden DM annihilations into SM leptons by means of the momentum-dependent Boltzmann equation. We carefully compared the resulting DM relic density with predictions obtained using the fluid approximation and the traditional number density approach, for both the full elastic collision term and the corresponding small-momentum-transfer approximation, finding in general a significant increase of the DM relic abundance by more than an order of magnitude. From a technical side, we put particular emphasis on the analytical integration of all angular integrals appearing in the full elastic scattering collision term, as this possibility has not been explicitly addressed before in the context of DM. Along this line, we also highlighted improvements in the numerical strategy. With that, we derived from the relic density new experimental exclusion limits for the investigated model, which are especially strong for the muon channel; for this scan we used the Fokker-Planck approximation instead of the full operator, since we found both to be in very good agreement in the relevant regions of the model parameter space. These results highlight again the necessity to take the early kinetic decoupling effect seriously and display the need to develop fast, reliable and general methods for the evaluation of full collision integrals in order to make the investigation of this effect in more complicated models like the Minimal Supersymmetric Standard Model feasible.
§ CROSS SECTIONS
§.§ DM self-interaction
As only the case α^S_χ=0 is covered in this work, the self-scattering cross sections are for brevity only displayed with the scalar coupling α^S_χ set to zero. The cross sections are given by
σ_χχ→χχ = (α_χ^P)^2 π/2 s (3 s - 12 m^2_χ + 5 m_ϕ^2) {1/s - 4 m_χ^2 + m_ϕ^2 + 2 m_ϕ^2/(s - 4m_χ^2)(s - 4m_χ^2 + 2 m_ϕ^2)ln(m_ϕ^2/s - 4 m_χ^2 + m_ϕ^2)},
σ_χχ̅→χχ̅ = (α_χ^P)^2 π/s |D_ϕ(s)|^2 {m_ϕ^2/s - 4 m_χ^2[2 |D_ϕ(s)|^-2 + s (m_ϕ^2 - s)]ln(m_ϕ^2/s - 4 m_χ^2 + m_ϕ^2) + m^2_ϕ/s - 4 m_χ^2 + m_ϕ^2[s (Γ_ϕ^2+4 m_χ^2-2 m_ϕ^2)+2 (Γ_ϕ^2+m_ϕ^2) (m_ϕ^2-2 m_χ^2)] + s^2},
with the propagator |D_ϕ(s)|^2=1/((s - m_ϕ^2)^2 + m_ϕ^2 Γ_ϕ^2).
§.§ Loop-induced processes
The form factors used to describe the processes e^+ e^-→ϕγ and Z→ϕγ are defined by
F_l^S(q^2) = (q^2 - m^2_ϕ)[2 + (q^2 + 4 m_l^2 - m_ϕ^2) C_0 ] + 2 q^2 [ Λ(q^2,m_l,m_l) - Λ(m_ϕ^2,m_l,m_l)],
F_l^P(q^2) = (q^2 - m^2_ϕ)^2 C_0,
with the Passarino-Veltman integral
C_0(0, m_ϕ^2, q^2; m_l, m_l, m_l) = 2/q^2 - m_ϕ^2[arcsin^2(m_ϕ/2 m_l) - arcsin^2(√(q^2)/2 m_l)],
and the branch cut function
Λ(p^2; m, m) = √(1- 4 m^2/p^2)ln(√(p^2 (p^2-4 m^2))+2 m^2-p^2/2 m^2).
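A direct transcription of the loop functions of this appendix is sketched below. It is valid as written only below the lepton-pair threshold, where the arcsin form of C_0 is real; above threshold the functions continue analytically, and the small imaginary offset used here to pick a branch is an assumption of the sketch, as are the function names.

```python
import numpy as np

def C0(q2, m_phi, m_l):
    """C_0(0, m_phi^2, q^2; m_l, m_l, m_l); real-arcsin form, q^2 <= 4 m_l^2, m_phi <= 2 m_l."""
    return 2.0 / (q2 - m_phi**2) * (np.arcsin(m_phi / (2*m_l))**2
                                    - np.arcsin(np.sqrt(q2) / (2*m_l))**2)

def Lam(p2, m):
    """Branch-cut function Lambda(p^2; m, m); +0j selects a branch near threshold."""
    p2 = p2 + 0j
    root = np.sqrt(p2 * (p2 - 4*m**2))
    return np.sqrt(1.0 - 4*m**2 / p2) * np.log((root + 2*m**2 - p2) / (2*m**2))

def form_factors(q2, m_phi, m_l):
    """F_l^S(q^2) and F_l^P(q^2) entering e+ e- -> phi gamma and Z -> phi gamma."""
    c0 = C0(q2, m_phi, m_l)
    FS = (q2 - m_phi**2) * (2.0 + (q2 + 4*m_l**2 - m_phi**2) * c0) \
        + 2*q2 * (Lam(q2, m_l) - Lam(m_phi**2, m_l))
    FP = (q2 - m_phi**2)**2 * c0
    return FS, FP
```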
§ THE ELASTIC SCATTERING KERNEL
For the parametrization of the collision term
C_coll[f_a] = 1/16 (2π)^5 E_a g_1∫ d^3p_b d^3p_1 d^3p_2 δ^(4)(p_a+p_b-p_1-p_2) |ℳ_ab→ 1 2|^2/E_b E_1 E_2𝒫(f_a,f_b,f_1,f_2),
for a matrix element |ℳ_ab→ 1 2|^2 which depends only on one Mandelstam variable, we follow the strategy outlined in Refs. <cit.>. Without loss of generality, we choose this variable to be t=(p_1 - p_a)^2=(p_b - p_2)^2, define the corresponding three-momentum variable
k⃗ = p⃗_1 - p⃗_a= p⃗_b - p⃗_2,
and use it as reference direction to define the explicit coordinate system:
k⃗ = k (0,0,1),
p⃗_a = |p⃗_a| (0,sinη,cosη),
p⃗_b = |p⃗_b| (cosφsinϑ,sinφsinϑ,cosϑ).
To avoid ambiguities, we differentiate in this section between a four-momentum p, its spatial component p⃗ and the associated absolute value |p⃗|.
The four-momentum p_2 is integrated out using four-momentum conservation
∫ d^3p_2/2 E_2δ^(4)(p_a+p_b-p_1-p_2) = Θ(E_a+E_b-E_1-m_2)
×1/2 k |p⃗_b|δ(cosϑ - |p⃗_b|^2 + k^2 + m_2^2 - (E_a+E_b-E_1)^2/2 |p⃗_b| k),
where the remaining δ-function sets p_2 on-shell. The absolute value of p⃗_2 is then fixed by energy conservation.
The spatial components of p_1 are turned into the integration variable k⃗, giving
∫ d^3p_1/2 E_1 = ∫ d^3k dE_1 δ(E_1^2 - |k⃗ + p⃗_a|^2 - m_1^2) Θ(E_1 - m_1)
= ∫ d^3k dE_1 1/2 k |p⃗_a|δ(cosη - E_1^2 - k^2 - |p⃗_a|^2 - m_1^2 /2 |p⃗_a| k) Θ(E_1 - m_1).
After averaging over the direction of the incoming particle, (∫ dcosη)/2, and performing the trivial angular integrals, we find
C_coll[f_a] = 1/128 π^3 E_a |p⃗_a| g_1∫ dE_1 dE_b dk dcosϑ dcosη δ(cosη - …) δ(cosϑ - …)
× |ℳ_ab→ 1 2|^2 𝒫(f_a,f_b,f_1,f_2) Θ(E_1 - m_1) Θ(E_a+E_b-E_1-m_2).
The delta functions can be used to constrain the integration region of k resulting in the limits
k_- ≡max(||p⃗_a|-|p⃗_1||,||p⃗_b|-|p⃗_2||)≤ k ≤min(|p⃗_a|+|p⃗_1|,|p⃗_b|+|p⃗_2|) ≡ k_+.
With that, we define the collision kernel
Π(E_a,E_b,E_1) = Θ(k_+ - k_-) ∫^k_+_k_- dk |ℳ_ab→ 1 2|^2,
so that the final form of the full collision term reads
C_coll[f_a] = 1/128 π^3 E_a |p⃗_a| ∫^∞_m_1 dE_1∫^∞_max(m_b,E_1-E_a+m_2) dE_b Π(E_a,E_b,E_1) 𝒫(f_a,f_b,f_1,f_2).
For a general 2-to-2 matrix element that can be brought into the form
|ℳ_ab→ 1 2|^2 = c_0 + c_1/Δ - k^2 + c_2/(Δ - k^2)^2 ,
with k-independent coefficients c_0, c_1 and c_2, the kernel reads
Π(E_a,E_b,E_1) = [c_0 (k_+ - k_-) + c_1 I_1(Δ,k_-,k_+) + c_2 I_2(Δ,k_-,k_+)]Θ(k_+ - k_-),
where we have introduced the two integrals
I_1(Δ,a,b) = ∫_a^b dx 1/(Δ - x^2) =
1/2√(Δ)[ln(√(Δ)-a/√(Δ)-b)+ln(b+√(Δ)/a+√(Δ))] , Δ>0
1/b - 1/a , Δ =0
1/√(-Δ)[atan(a/√(-Δ)) -atan(b/√(-Δ))] , Δ <0
as well as
I_2(Δ,a,b) = ∫_a^b dx 1/(Δ - x^2)^2 =
1/2Δ(a/a^2-Δ - b/b^2-Δ + I_1(Δ,a,b) ).
For the elastic scattering matrix element in our model
|ℳ_χ l →χ l|^2 = 64π^2/(t-m_ϕ^2)^2(α^P_χ t+ α^S_χ(t-4 m_χ^2))
(α^P_ll t+ α^S_ll(t-4 m_l^2)),
one has Δ = (E_a - E_1)^2 - m_ϕ^2
and the coefficients c_0, c_1 and c_2 are given by
c_0 =64π^2(α_χ^S+α_χ^P)(α_ll^S+α_ll^P),
c_1 =128π^2{(α_ll^S+α_ll^P)(α_χ^S(m_ϕ^2-2m_χ^2)+α_χ^P m^2_ϕ)-2α_ll^S(α_χ^S+α_χ^P)m_l^2},
c_2 =64π^2(α_χ^S(m_ϕ^2-4m_χ^2)+α_χ^P m^2_ϕ)(α_ll^S(m_ϕ^2-4m_l^2)+α_ll^P m^2_ϕ).
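Putting the pieces of this appendix together, the kernel Π(E_a,E_b,E_1) for elastic χ-lepton scattering can be evaluated as sketched below. Natural units are assumed, the caller is expected to respect the integration limits of the collision term (E_1 ≥ m_χ and E_a+E_b−E_1 ≥ m_l), and the Δ = 0 corner case of I_2 is not treated; the function names are assumptions.

```python
import numpy as np

def I1(D, a, b):
    """Integral of 1/(D - x^2) from a to b, following the piecewise result above."""
    if D > 0:
        r = np.sqrt(D)
        return 0.5/r*(np.log((r - a)/(r - b)) + np.log((b + r)/(a + r)))
    if D == 0:
        return 1.0/b - 1.0/a
    r = np.sqrt(-D)
    return (np.arctan(a/r) - np.arctan(b/r))/r

def I2(D, a, b):
    """Integral of 1/(D - x^2)^2 from a to b (Delta = 0 not handled)."""
    return 0.5/D*(a/(a**2 - D) - b/(b**2 - D) + I1(D, a, b))

def kernel_Pi(Ea, Eb, E1, m_chi, m_l, m_phi, aS_chi, aP_chi, aS_ll, aP_ll):
    """Pi(Ea, Eb, E1) for chi l -> chi l with the coefficients c_0, c_1, c_2 above."""
    E2 = Ea + Eb - E1
    pa, p1 = np.sqrt(Ea**2 - m_chi**2), np.sqrt(E1**2 - m_chi**2)
    pb, p2 = np.sqrt(Eb**2 - m_l**2),  np.sqrt(E2**2 - m_l**2)
    k_lo = max(abs(pa - p1), abs(pb - p2))
    k_hi = min(pa + p1, pb + p2)
    if k_hi <= k_lo:
        return 0.0                      # Theta(k_+ - k_-)
    D = (Ea - E1)**2 - m_phi**2
    c0 = 64*np.pi**2*(aS_chi + aP_chi)*(aS_ll + aP_ll)
    c1 = 128*np.pi**2*((aS_ll + aP_ll)*(aS_chi*(m_phi**2 - 2*m_chi**2) + aP_chi*m_phi**2)
                       - 2*aS_ll*(aS_chi + aP_chi)*m_l**2)
    c2 = 64*np.pi**2*((aS_chi*(m_phi**2 - 4*m_chi**2) + aP_chi*m_phi**2)
                      * (aS_ll*(m_phi**2 - 4*m_l**2) + aP_ll*m_phi**2))
    return c0*(k_hi - k_lo) + c1*I1(D, k_lo, k_hi) + c2*I2(D, k_lo, k_hi)
```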
We would like to thank Di Liu for correspondence regarding limits from future Z-factories. The research of AA, MK and LPW was supported by the DFG through the Research Training Group 2149 “Strong and Weak Interactions - from
Hadrons to Dark matter”. The work by AA and MK was also funded by the BMBF under contract 05P21PMCAA and grant KL 1266/10-1.
Matrix elements and cross sections have been computed with the help of <cit.> and <cit.>.
entry_id: http://arxiv.org/abs/2306.05746v1
published: 20230609082429
title: Martin's conjecture for regressive functions on the hyperarithmetic degrees
authors: Patrick Lutz
primary_category: math.LO
categories: math.LO
Martin's conjecture for regressive functions on the hyperarithmetic degrees

Patrick Lutz
Department of Mathematics, UCLA
[email protected]
============================================================================

We answer a question of Slaman and Steel by showing that a version of Martin's conjecture holds for all regressive functions on the hyperarithmetic degrees. A key step in our proof, which may have applications to other cases of Martin's conjecture, consists of showing that we can always reduce to the case of a continuous function.
§ INTRODUCTION
Martin's conjecture is a proposed classification of the limit behavior of functions on the Turing degrees under strong set theoretic hypotheses (namely the Axiom of Determinacy). The full conjecture is still open, but several special cases have been proved. In particular, in <cit.>, Slaman and Steel proved that Martin's conjecture holds for all “regressive” functions on the Turing degrees.
If f: 2^ω→ 2^ω is a Turing-invariant function such that f(x) ≤_T x for all x, then either f is constant on a cone or f(x) ≡_T x on a cone.
They also asked whether the analogous theorem for hyperarithmetic reducibility holds. In other words, is it possible to prove a version of Martin's conjecture for regressive functions on the hyperarithmetic degrees? Their motivation was as follows. A regressive function on the Turing degrees can be written as a countable union of continuous functions. Their argument works by using this fact to reduce to the case where f is continuous and then showing that if such an f is not constant on any cone then for all x in some cone, it is possible to find y ≡_T x such that x is coded into f(y).
In their coding argument, they relied strongly on the properties of continuous functions. In contrast with regressive functions on the Turing degrees, regressive functions on the hyperarithmetic degrees can only be written as countable unions of Borel functions. Thus Martin's conjecture for regressive functions on the hyperarithmetic degrees forms a natural test case to see whether their coding argument can be extended to deal with functions which are not continuous.
The main result of this paper is to answer their question in the affirmative. Namely we will prove the following theorem.
Let f: 2^ω→ 2^ω be a hyp-invariant function such that f(x) ≤_H x for all x. Then either f is constant on a cone of hyperarithmetic degrees or f(x) ≡_H x on a cone of hyperarithmetic degrees.
There are a few interesting things to note about our proof. First, instead of adapting Slaman and Steel's methods to work with non-continuous functions, we instead show that f—despite potentially being far from continuous—can be replaced by a hyp-equivalent function which is continuous. We still have to modify their coding argument to work with hyperarithmetic reducibility rather than Turing reducibility, but in doing so we make heavy use of the fact that we can assume we are dealing with a continuous function.
This suggests that in some cases of Martin's conjecture where the functions being considered are not continuous, it may still be possible to replace them with related functions which are continuous. This idea has already borne fruit in the form of <cit.>, where it is combined with a refined version of the coding arguments introduced in this paper to prove part 1 of Martin's conjecture for order-preserving functions.
Second, our results cast at least a little doubt on the idea that any use of determinacy in proving Martin's conjecture will be “local” (that is, the idea that only Borel determinacy is needed when dealing with Borel functions, and so on). Our proof seems to use more than Borel determinacy, even when the functions being considered are assumed to be Borel (specifically, our proof uses analytic determinacy). In section <ref>, we show that Borel determinacy is sufficient, but this requires a more careful analysis that was not needed for the proof.
Third, our reduction to the case of a continuous function is quite flexible and seems to work in many different degree structures, including the arithmetic degrees. Somewhat surprisingly, it seems much harder to adapt the coding argument used by Slaman and Steel, even once we are allowed to assume we are dealing with a continuous function. In this paper, we have to use a somewhat different coding argument than the one used by Slaman and Steel, and in doing so we have to rely on the Σ^1_1-bounding theorem. Also, we have so far not been able to modify either our coding argument or Slaman and Steel's to work for arithmetic reducibility (in our opinion, the regressive case of Martin's conjecture on the arithmetic degrees is an interesting open question).
§ PRELIMINARIES
In this section we will provide some background on hyperarithmetic reducibility and on Martin's conjecture and then state some lemmas that we will use in the proof of Theorem <ref>. All the lemmas are standard, with the exception of Lemma <ref>, which we will see has a simple proof using standard techniques. For the reader intimidated by the axiom of determinacy, we note that the only way we will use determinacy in this paper is in the form of Lemma <ref>.
§.§ Background on Hyperarithmetic Reducibility
The easiest definition of hyperarithmetic reducibility is that y ≤_H x if y is Δ^1_1(x) definable (in which case we will often say that y is hyperarithmetic in x). It is not very hard to see that this relation is transitive and thus deserves the title “reducibility.” As usual, we can then define hyperarithmetic equivalence and the structure of the hyperarithmetic degrees.
But there is another characterization of hyperarithmetic reducibility which is often useful and which we will now explain. Let ω_1^x denote the least countable ordinal with no presentation computable from x. Work of Davis, Kleene and Spector shows that for any α < ω_1^x, there is a notion of the α^th iterate of the jump of x which is well-defined up to Turing equivalence <cit.>. We denote this α^th jump of x by x^(α). Kleene proved in <cit.> that y is hyperarithmetic in x if and only if y ≤_T x^(α) for some α < ω_1^x.
It will be helpful later in the paper if we make some of this more precise. Suppose r is a real which codes a linear order ≤_r on ω which has a minimum element, 0_r. If x is any real, then a jump hierarchy on r which starts with x is a set H ⊂ω^2 such that the 0_r^th column of H is x and for each n ≠ 0_r, the n^th column of H is equal to the jump of the smaller columns of H (smaller according to the ordering given by ≤_r). In other words, if we define
H_n = {i |⟨ n, i⟩∈ H}
H_< n = {⟨ m, i⟩| m <_r n and ⟨ m, i⟩∈ H}
then we have H_0_r = x and H_n = (H_< n)' for all n ≠ 0_r.
If ≤_r happens to be a presentation of a well-order then there is always a unique H satisfying the conditions above. Moreover, if α < ω_1^x and r codes a presentation of α which is computable from x then the Turing degree of the unique jump hierarchy on r starting from x is independent of the specific choice of r. Such a jump hierarchy is considered to be the α^th jump of x (which is only well-defined up to Turing degree). This makes precise the alternative characterization of hyperarithmetic reducibility mentioned above.
It is also worth mentioning here that hyperarithmetic reducibility is closely connected to Borel measurability. Just as every continuous function is computable relative to some oracle, every Borel function is hyperarithmetic relative to some oracle. More precisely, if f is Borel then there is some countable ordinal α, some r which codes a presentation of α, some real y and some Turing functional Φ such that for all x, f(x) = Φ((x ⊕ y)^(α)), where (x⊕ y)^(α) is taken to mean the unique jump hierarchy on r starting from x ⊕ y.
§.§ Background on Martin's Conjecture
As mentioned in the introduction, Martin's conjecture is a proposed classification of the limit behavior of functions on the Turing degrees under strong set theoretic hypotheses. It is traditionally divided into two parts. We will only discuss the first part here, since that is all that is relevant for this paper.
Very roughly, part 1 of Martin's conjecture states that if f is a function from the Turing degrees to the Turing degrees then either f(x) is constant for all large enough x or f(x) ≥_T x for all large enough x. There are three things to explain here. First, a caveat: the conjecture is actually stated not in terms of functions on the Turing degrees, but in terms of Turing invariant functions on the reals. Second, we need to state precisely what “for all large enough x” really means. Third, the conjecture is false in ZFC and is instead stated as a conjecture in the theory ZF + AD (or sometimes ZF + AD + DC_ℝ, though we will not need to use DC_ℝ in this paper). We will now explain each of these points in more detail.
First, let's define precisely what we mean by a Turing invariant function on the reals. A function f: 2^ω→ 2^ω is called Turing invariant if for all x and y in 2^ω,
x ≡_T y ⟹ f(x) ≡_T f(y).
The point is that a Turing invariant function f induces a function on the Turing degrees. Using the Axiom of Choice, it is clear that every function on the Turing degrees arises from a Turing invariant function on the reals, but this may fail in ZF + AD (though it is true again if we assume AD_ℝ, a strengthening of the Axiom of Determinacy). So Martin's conjecture is actually only classifying the behavior of functions on the Turing degrees which come from Turing invariant functions on the reals.
Since it will be useful to us, we will also mention here the definition of a Turing invariant set of reals. A subset A ⊆ 2^ω is called Turing invariant if for all x and y in 2^ω,
x ≡_T y ⟹ (x ∈ A ↔ y ∈ A).
Next, let's explain what we mean by “all large enough x.” The key concept is that of a cone of Turing degrees (which is actually a Turing invariant subset of 2^ω rather than a subset of the Turing degrees): a cone of Turing degrees is a set of the form {x ∈ 2^ω| x ≥_T y} for some fixed y. This y is called the base of the cone and the cone is sometimes referred to as the cone above y. What we mean by “all large enough x” is simply “for all x in some cone.”
Third, we will mention a few things about the Axiom of Determinacy. The Axiom of Determinacy (often written AD) is an axiom of set theory which is inconsistent with the Axiom of Choice and equiconsistent with the existence of infinitely many Woodin cardinals <cit.>. We will not give a definition of the Axiom of Determinacy here, but simply mention the following fact, which is one of the main consequences of AD for computability theory.
If A is a Turing invariant subset of 2^ω then either A contains a cone or A is disjoint from a cone.
There is also a weak form of determinacy called “Borel determinacy” which is provable in ZFC and which is enough to prove Theorem <ref> if the set A is assumed to be Borel.
We can now give a formal statement of part 1 of Martin's conjecture.
Assuming ZF + AD, if f: 2^ω→ 2^ω is a Turing invariant function then either f(x) ≥_T x for all x in some cone or there is some y such that f(x) ≡_T y for all x in some cone.
In a slight abuse of terminology, the latter possibility in the conjecture is often written as “f is constant on a cone” (even though it is the function that f induces on the Turing degrees that is constant, not f itself).
Finally, we mention that for many degree structures besides the Turing degrees (and in particular for the hyperarithmetic degrees), it is possible to state a sensible version of Martin's conjecture by just swapping out Turing reducibility for the appropriate alternative notion of reducibility in the definitions of “Turing invariant function” and “cone of Turing degrees.” This is reasonable to do in part because Theorem <ref> works for pretty much any notion of reducibility stronger than Turing reducibility (and also for many which are weaker).
§.§ Determinacy Lemmas
We now state a few lemmas that will help us apply determinacy even in situations where we have to deal with non-Turing invariant sets of reals. The key notion is that of a “pointed perfect tree.”
A perfect tree is a tree, T, such that every node in T has a pair of incompatible extensions which are both in T.
A pointed perfect tree is a perfect tree, T, such that every path through T computes T.
If T is a tree, we will use [T] to refer to the set of paths through T.
The reason pointed perfect trees are useful to work with is that if T is a pointed perfect tree then [T] contains a representative of every Turing degree which is above the Turing degree of T. Next, we will see that determinacy can be used to get pointed perfect trees. For a proof of Lemma <ref>, see <cit.>, Lemma 3.5.
A set A ⊆ 2^ω is cofinal in the Turing degrees if for all x there is some y ≥_T x such that y ∈ A (note that A is not required to be Turing invariant).
Suppose A ⊆ 2^ω is cofinal in the Turing degrees and h is a function on A with countable range. Then there is a pointed perfect tree on which h is constant.
The following lemma will be our only use of determinacy in the proofs in the rest of this paper. Essentially it is a kind of computable uniformization principle provable from .
Suppose R is a binary relation on 2^ω such that
* The domain of R is cofinal in the Turing degrees: for all z there is some x ≥_T z and some y such that (x, y) ∈ R
* and R is a subset of Turing reducibility: for every (x, y) ∈ R, x ≥_T y.
Then there is a pointed perfect tree T and a Turing functional Φ such that for every x ∈ [T], Φ(x) is total and (x, Φ(x)) ∈ R. In other words, Φ is a computable choice function for R on [T].
For each x in the domain of R there is some e such that Φ_e(x) is total and R(x, Φ_e(x)) holds. Let e_x denote the smallest such e. By determinacy (in the form of lemma <ref>), there is a pointed perfect tree T on which e_x is constant. Let e be this constant value. Then T and Φ_e satisfy the conclusion of the lemma.
It will be useful below to note that if A and h in Lemma <ref> are Borel then the result is provable in ZFC, and similarly that if R in the above lemma is assumed to be Borel, then that result, too, is provable in ZFC.
§.§ Pointed Perfect Tree Lemmas
Now we will state a couple of lemmas that are helpful when working with pointed perfect trees. These lemmas do not require the Axiom of Determinacy. The first lemma can be proved using the same kind of arguments as in Spector's construction of a minimal degree and the second is a routine application of compactness (see <cit.> for proofs).
Suppose T is a pointed perfect tree and Φ is a Turing functional such that Φ(x) is total for every x ∈ [T]. Then either Φ is constant on a pointed perfect subtree of T or Φ is injective on a pointed perfect subtree of T.
Suppose T is a perfect tree and Φ is a Turing functional such that Φ(x) is total for every x ∈ [T] and Φ is injective on [T]. Then for each x ∈ [T],
Φ(x) ⊕ T ≥_T x.
In fact, this reduction is even uniform in x (though we won't need to use that fact in this paper).
§.§ Computable Linear Orders
To work with hyperarithmetic reducibility, we will need to make use of a few facts about computable linear orders and computable well-orders. Proofs can be found in <cit.>.
One of the most important facts about computable well-orders is the Σ^1_1-bounding theorem. Essentially it says that every Σ^1_1-definable collection of well-orders is bounded below a computable ordinal. The theorem comes in multiple flavors, depending on whether we are talking about sets of programs which compute presentations of well-orders, or real numbers which are presentations of well-orders and depending on whether the Σ^1_1 definition is boldface, lightface, or lightface relative to some fixed real. Below, we just state the two versions that we will need in this paper.
Suppose that x is a real and A is a Σ^1_1(x) definable set of codes for programs such that for every e in A, Φ_e(x) is a presentation of a well-order. Then there is some α < ω_1^x which is greater than every ordinal with a presentation coded by an element of A.
If A is a Σ^1_1 definable set of presentations of well-orders then there is some α < ω_1 which is greater than every ordinal with a presentation in A.
We will also need some ideas originally introduced by Harrison in <cit.>.
If x is a real and r is a real computable from x that codes a presentation of a linear order, then r is a pseudo-well-order relative to x if it is ill-founded but contains no infinite descending sequence which is hyperarithmetic in x.
If r is a presentation of a linear order that is computable from a real x then the assertion “r has no infinite descending sequence which is hyperarithmetic in x” is equivalent to a Σ^1_1(x) formula.
If r is a pseudo-well-order relative to x and H is a jump hierarchy on r that starts with x then H computes every real which is hyperarithmetic in x.
§ PROOF OF THE MAIN THEOREM
In this section, we will prove Theorem <ref>. Before we launch into the details of the proof, we will give an outline of the general strategy. And before we do that, we will recall the general strategy followed by Slaman and Steel in their proof of Theorem <ref>. The steps of their proof are essentially as follows.
* First, use determinacy to show that there is a pointed perfect tree on which f is computable. Then use Lemma <ref> to show that we can also assume f is injective.
* Next, show that if x is in the pointed perfect tree, then every function computable from x is dominated by a function computable from f(x). The idea is that if x computes a function which is not dominated by any function computable from f(x) then x can diagonalize against f(x) by using this function to guess convergence times for f(x) programs. The diagonalization produces a real y in the same Turing degree as x such that f(x) cannot compute f(y), thereby contradicting the Turing invariance of f.
* Once you can assume that f is computable and injective on a pointed perfect tree and that if x is in this tree then every function computed by x is dominated by a function computed by f(x), use a coding argument to show that f(x) ≥_T x. The coding argument works by coding bits of x into the relative growth rates of two fast growing functions computed by f(x).
Our proof makes three main modifications to this outline. First, instead of showing that f is computable on some pointed perfect tree, we show that f is hyp-equivalent to some computable function on a pointed perfect tree. Thus we may work with that function instead of f. Second, instead of showing that every fast growing function computed by x is dominated by a function computed by f(x), we show that every well-order computed by x embeds into a well-order computed by f(x)—in other words that ω_1^x = ω_1^f(x). Third, instead of coding bits of x into the relative growth rates of fast growing functions computed by f(x), we code the bits of x into the Kolmogorov complexities of initial segments of reals computed by f(x) (though it is not necessary to know anything about Kolmogorov complexity to follow our argument). Also, to be able to carry out the coding argument, we will first have to use a trick involving Σ^1_1-bounding. To sum up, here's an outline of our proof.
* First, we will use determinacy to replace f with a hyp-equivalent function which is computable on a pointed perfect tree. By using Lemma <ref>, we can also assume that f is injective.
* Next we show that ω_1^f(x) = ω_1^x for all x in the pointed perfect tree. The idea is that if ω_1^f(x) were less than ω_1^x, then x would be able to diagonalize against f(x) by using ω_1^f(x) jumps.
* Once we are able to assume that f is computable and injective and that ω_1^f(x) = ω_1^x, we will use a coding argument to show that f(x) ≥_H x. In our coding argument, it will be important to know that there is a single ordinal α < ω_1^x such that for every real y in the same Turing degree as x, f(x)^(α) computes f(y). We will prove this fact using Σ^1_1-bounding.
And now it's time to present the actual proof.
§.§ Replacing f with an injective, computable function
First we will show that f can be replaced by a computable function. This is the only part of the proof that uses determinacy.
Suppose f: 2^ω→ 2^ω is hyp-invariant and hyp-regressive. Then there is a Turing functional Φ and a pointed perfect tree T such that for all x ∈ [T], Φ(x) is total and Φ(x) ≡_H f(x).
Consider the following binary relation, R.
R(x, y) ⟺ x ≥_T y and f(x) ≡_H y.
The idea is that a computable function which is hyp-equivalent to f is exactly a computable function which uniformizes R. To show that such a function exists, it suffices to check that we can apply Lemma <ref>.
To check that we can apply Lemma <ref>, we need to check that R is cofinal in the Turing degrees and that R is a subset of Turing reducibility. The latter is clear from the definition of R. For the former, fix any real x and we will show that some real which computes x is in the domain of R. Since f(x) ≤_H x, there is some α < ω_1^x such that x^(α)≥_T f(x). Since x^(α)≡_H x and f is hyp-invariant, f(x^(α)) ≡_H f(x). Thus R(x^(α), f(x)) holds and so x^(α) is an element of the domain of R which computes x.
Thus we may apply Lemma <ref> to get a pointed perfect tree T and a Turing functional Φ such that for all x ∈ [T], Φ(x) is total and Φ(x) ≡_H f(x).
For the rest of the proof we will simply assume that f is computable on a pointed perfect tree. It will also be convenient to assume that f is injective on a pointed perfect tree, which we show next.
Suppose T is a pointed perfect tree and f: 2^ω→ 2^ω is a hyp-invariant function which is computable on [T]. Then either f is constant on a cone of hyperdegrees or f is injective on a pointed perfect subtree of T.
By Lemma <ref>, either f is constant on a pointed perfect subtree of T or f is injective on a pointed perfect subtree of T. In the former case, f is constant on a cone of hyperdegrees and in the latter case, we are done.
For the rest of the proof, we will deal with the case of a hyp-invariant function, f, which is computable and injective on a pointed perfect tree, T. We will show that for any x in [T], f(x) ≥_H x. There are two cases: when ω_1^f(x) < ω_1^x and when ω_1^f(x) = ω_1^x. We will show that the first case is impossible and that if we are in the second case then we can use the coding argument mentioned above.
§.§ Proving that f preserves ω_1^x
We will now show that for any x ∈ [T], ω_1^f(x) = ω_1^x. We will do this by deriving a contradiction from the assumption that ω_1^f(x) < ω_1^x (note that since f(x) ≤_H x we cannot have ω_1^f(x) > ω_1^x). The basic idea is that in this case we can diagonalize against f(x). Namely, we can use ω_1^f(x) jumps of x to compute a real y so that f(x) cannot compute f(y) with fewer than ω_1^f(x) jumps (and hence f(x) cannot be hyp-equivalent to f(y)). Since ω_1^x > ω_1^f(x), this y can be made hyp-equivalent to x, which violates the hyp-invariance of f. We now give the formal proof.
Suppose T is a pointed perfect tree and f is a hyp-invariant function which is computable and injective on [T]. Then for every x ∈ [T], ω_1^f(x) = ω_1^x.
Suppose for contradiction that for some x ∈ [T], ω_1^f(x) < ω_1^x. Let α = ω_1^f(x). The key point is that for every y ∈ [T] which is hyp-equivalent to x, x^(α) computes y.
Why is that? Well, if y is in the same hyperdegree as x then f(y) is in the same hyperdegree as f(x). So by definition of α, there is some β < α such that f(x)^(β)≥_T f(y). We then have the following calculation.
x^(α) ≥_T x^(β) because β < α
≥_T x^(β)⊕ T because T is pointed
≥_T f(x)^(β)⊕ T because f(x) ≤_T x
≥_T f(y)⊕ T by definition of β
≥_T y by Lemma <ref>.
We can now finish the proof easily. Since T is pointed, we can pick some y ∈ [T] which is Turing equivalent to x^(α + 1). Since α < ω_1^x, this y is hyp-equivalent to x. But it obviously is not computable from x^(α), so we have reached a contradiction.
§.§ Coding argument
In this part of the proof, we will explain how to code x into some real of the same hyperarithmetic degree as f(x). The argument has some similarity to the proof of a basis theorem for perfect sets given by Groszek and Slaman in <cit.> (which itself has some similarity to the coding argument used in <cit.>). Before giving the coding argument, however, we will first show that for every x ∈ [T] there is a uniform bound on the number of jumps that f(x) takes to compute f(y) for any y ∈ [T] which is Turing equivalent to x.
Suppose T is a pointed perfect tree, f is a hyp-invariant function which is computable on [T], and x ∈ [T]. Then there is some α < ω_1^x such that if y ∈ [T] is Turing equivalent to x then f(y) ≤_T f(x)^(α).
The main idea is just to use Σ^1_1-bounding. Let A be the set of programs e such that Φ_e(f(x)) computes a linear order r for which
* r has no infinite descending sequence which is hyperarithmetic in f(x)
* and there is some y ≡_T x in [T] and some jump hierarchy H on r starting from f(x) such that H does not compute f(y).
By Lemma <ref> (and since f(x) is computable from x), A is Σ^1_1(x). I claim that every program in A computes a well-order.
Suppose instead that A contains a program e computing an ill-founded order, r. Thus r is a pseudo-well-order relative to f(x). Since e is in A, there must be some y≡_T x in [T] and some jump hierarchy on r starting with f(x) which does not compute f(y). And since f is hyp-invariant, we must have f(x) ≡_H f(y). But by Lemma <ref>, any jump hierarchy on r which starts with f(x) computes everything in the hyperdegree of f(x), and in particular f(y). This is a contradiction, so all programs in A must compute well-orders.
Since A is Σ^1_1(x) and contains only programs computing well-orders, Σ^1_1-bounding implies that there is some α < ω_1^x which bounds every well-order in A. This implies that for every y ≡_T x in [T], f(y) is computable from f(x)^(α + 1).
We now come to the coding argument. As we have discussed, it replaces a different coding argument used by Slaman and Steel, and while their argument codes information into the relative growth rates of two fast-growing functions, ours codes information into the relative Kolmogorov complexities of initial segments of three reals (though the reader does not need to be familiar with Kolmogorov complexity to understand the proof below).
Suppose T is a pointed perfect tree and f is a hyp-invariant function which is computable and injective on [T]. Then f(x) ≥_H x for all x ∈ [T].
Let x ∈ [T]. Our goal is to show that f(x) ≥_H x. By Lemma <ref>, we know that ω_1^x = ω_1^f(x). By thinning T, we may assume that x is the base of T (i.e. T is a pointed perfect tree such that x ≡_T T), and hence that any element of [T] can compute x. We will use this fact below without further comment.
By Lemma <ref>, there is some α < ω_1^x such that for all y ∈ [T] in the same Turing degree as x, we have f(x)^(α)≥_T f(y). For the remainder of the proof, we will explain how to find reals a, b, c ∈ [T] which are hyp-equivalent to x such that x ≤_T f(x)^(α + 2)⊕ f(a) ⊕ f(b) ⊕ f(c).
To see why this is sufficient to complete the proof, first note that since f is hyp-invariant, f(a), f(b), and f(c) are all hyp-equivalent to f(x). Next, note that since ω_1^f(x) = ω_1^x, α is less than ω_1^f(x) and thus f(x)^(α + 2) is also hyp-equivalent to f(x). Therefore f(x)^(α + 2)⊕ f(a)⊕ f(b)⊕ f(c) is hyp-equivalent to f(x) and so if x is Turing below the former then it is hyp below the latter.
We will build a, b, and c in stages. At each stage we will keep track of the following data (supposing that the current stage is n):
* Initial segments A_n, B_n, and C_n of a, b, and c.
* Reals a_n, b_n, and c_n in [T] and Turing equivalent to x, which A_n, B_n, and C_n, respectively, are initial segments of. Think of a_n, b_n, c_n as the current “targets” for a, b, c.
* Initial segments Ã_n, B̃_n, and C̃_n of f(a), f(b), and f(c). These are the longest initial segments of f(a), f(b), and f(c) that can be determined from knowing the initial segments A_n, B_n, and C_n of a, b, and c (recall that f is continuous on T).
* Indices for programs e_a, n, e_b, n, and e_c, n. Think of these as “guesses” as to which programs compute f(a_n), f(b_n), and f(c_n) from f(x)^(α).
At the same time, f(x) will be using f(a), f(b), and f(c) to try to follow along with this construction by keeping track of the initial segments Ã_n, B̃_n, and C̃_n and the “guesses” e_a, n, e_b, n, and e_c, n. On each step of the construction we will update the data to code the next bit of x.
On each step, two of a, b, and c will be used to code the next bit of x and the third will play a “helper” role of coding some information to help f(x) follow along with the construction. Which of a, b, and c is playing this “helper” role will simply rotate between them on each step. So, for instance, a will play the helper role every third step.
We will make sure that at the beginning of step n, the “guess” corresponding to whichever real is playing the helper role on step n is correct. E.g. if a is in the helper role on step n then we will need that e_a, n is really the index of a program computing f(a_n) from f(x)^(α). We will see that the construction ensures this.
We will code the next bit of x into the relative sizes of the guesses for the two reals which are not playing the helper role. E.g. if a is playing the helper role on step n then we will code the next bit of x into which of e_b, n + 1 and e_c, n+ 1 is larger—if x(n) = 0 then we will make sure e_b, n + 1 > e_c, n + 1 and if x(n) = 1 then we will make sure e_b, n + 1 < e_c , n + 1.
To make things more concrete, let's suppose that we are on step n, a is in the helper role, and the next bit of x is a 0 (so we need to make sure e_b, n + 1 > e_c, n + 1). We can assume that e_a, n is correct—i.e. that Φ_e_a, n(f(x)^(α)) = f(a_n)—and we need to make sure that this holds of e_b, n + 1 at the end of this step. Here's what we do.
* The target for c will stay the same—i.e. set c_n + 1 = c_n.
* Let e_c, n + 1 be the true guess for c_n = c_n + 1—i.e. the least e such that Φ_e(f(x)^(α)) = f(c_n) (we know that such an e must exist because we are assuming c_n is in [T] and Turing equivalent to x and thus f(c_n) is computable from f(x)^(α)).
* Choose some new target b_n + 1 in [T] of the same Turing degree as x so that b_n + 1 extends B_n and so that for the least e for which Φ_e(f(x)^(α)) = f(b_n + 1), we have e > e_c , n + 1. We can do this because f is injective on T and there are infinitely many reals in [T] extending B_n which are Turing equivalent to x.
* Let m be a number large enough that e_c, n + 1 is the least e such that Φ_e(f(x)^(α)) is total and agrees with the first m bits of f(c_n + 1) and likewise for e_b, n + 1.
* Choose some new target a_n + 1 in [T] of the same Turing degree as x which also agrees with the old initial segment A_n of a but which disagrees with a_n and for which the first place such that f(a_n + 1) disagrees with f(a_n) is greater than m, say m'.
* Let Ã_n + 1 = f(a_n + 1) ↾ m'.
* Let A_n + 1 be a long enough initial segment of a_n + 1 to ensure that f(a) ↾ m' = f(a_n + 1) ↾ m' and thus that the first place at which f(a) and f(a_n) disagree is m'.
* Set B̃_n + 1 and C̃_n + 1 to be f(b_n + 1) ↾ m' and f(c_n + 1) ↾ m'.
* Let B_n + 1 and C_n + 1 be long enough initial segments of b_n + 1 and c_n + 1 to ensure that f(b) and f(c) agree with the first m' bits of f(b_n + 1) and f(c_n + 1) (recall that f is continuous on [T]).
Note that by construction, the guesses e_b, n + 1 and e_c, n + 1 are correct. Now let's describe what's happening from f(x)'s perspective.
* First we look at f(a). Since it's a's turn to be the helper, we (as f(x)) know we should look for the first place where f(a) disagrees with Φ_e_a, n(f(x)^(α)) (which, recall, agrees with f(a_n)). So this allows us to retrieve m'.
* Now look at f(b) ↾ m' and f(c) ↾ m'. These are the new B̃_n + 1 and C̃_n + 1. Calculate the least e such that Φ_e(f(x)^(α)) is total and agrees with f(b) up to m'. This is e_b, n + 1. Do the same thing for c.
* Now we check which of e_b, n + 1 and e_c, n + 1 is bigger. That tells us the next bit of x.
* At this point we have the correct guesses for b and c. We may not have a correct guess for a (or even a guess at all) but that doesn't really matter. The only one for which it is vital we have a correct guess at the beginning of the next step is the one that is going to be in helper mode, and since the helper role rotates, that will not be a again on the next step.
To carry out the entire construction to build a, b, and c, we just need to know x and f(x)^(α + 2) (the +2 is needed to figure out which programs are total). Since x computes f(x) and α < ω_1^x, this means that a, b, and c are hyperarithmetic in x. And since x ≡_T T and T is pointed, x is computable by a, b, and c and thus they are all in the same hyperdegree. At the same time, all that is required to do the parts “from f(x)'s perspective” is f(x)^(α + 2) (again, the +2 is needed to check which programs are total) along with f(a), f(b), and f(c). Hence x ≤_T f(x)^(α + 2)⊕ f(a) ⊕ f(b) ⊕ f(c).
§ THE CASE OF BOREL FUNCTIONS
It is popular to suppose that any proof of Martin's conjecture will only use determinacy in a “local” way: that is, when restricted to Borel functions the proof should still go through, with the original uses of determinacy replaced by analogous uses of Borel determinacy.
In this section, we will see that the main result of this paper does hold when restricted to Borel functions, but that proving this requires a trick not present in the proof presented above. The trouble is that even if we only consider Borel functions, the proof of Lemma <ref> appears to require analytic determinacy rather than Borel determinacy. However, this can be avoided by a more careful analysis and an appeal to Σ^1_1-bounding.
Here's the key idea. If f is hyp-regressive then we know that for each x there is some α < ω_1^x such that x^(α) computes f(x). We will use Σ^1_1-bounding to find a single α which works for all x. After this, it will be straightforward to modify the proof of Lemma <ref> to only use Borel determinacy.
In the next lemma we will prove this key point. Note that since we are restricting ourselves to Borel functions, we can drop the “hyp-regressive” requirement: every Borel function f is automatically hyp-regressive on a cone of hyperdegrees.
Let f 2^ω→ 2^ω be a Borel function. Then there is some α < ω_1 such that for all x on a cone of hyperdegrees, α < ω_1^x and x^(α)≥_T f(x).
As noted above, since f is Borel, f(x) ≤_H x on a cone of hyperdegrees. For the rest of the proof, we will implicitly work on this cone and thus we may assume f(x) ≤_H x for all x.
We start by simply writing down the definition of hyperarithmetic reducibility: for each x, we know that f(x) ≤_H x and hence that there is some α < ω_1^x such that x^(α) computes f(x). Our goal is to show that there is some α < ω_1 which is large enough to work for all x. We will do so by using Σ^1_1-bounding.
Let A be the set of reals r which code presentations of linear orders such that for some x,
* x computes r
* r has no infinite descending sequences which are hyperarithmetic in x
* and there is a jump hierarchy H on r starting from x such that H does not compute f(x).
By Lemma <ref> plus the fact that f is Borel, the set A is Σ^1_1 definable (note that this is boldface rather than lightface because f is Borel but not necessarily lightface Δ^1_1).
Next, I claim that A only contains well-orders. Suppose not and that A contains an ill-founded order, r. Let x witness that r is in A. Then r is a pseudo-well-order relative to x. But by Lemma <ref>, this means that any jump hierarchy on r starting with x computes everything hyperarithmetic in x, and in particular, computes f(x). This contradicts the definition of A.
Since A is Σ^1_1 and contains only well-orders, Σ^1_1-bounding implies that there is some α < ω_1 which bounds everything in A. By the definition of A this means that for every x either ω_1^x ≤α or x^(α + 1)≥_T f(x). So if we go to a cone on which everything computes a presentation of α then we obtain the conclusion of the lemma.
We can now prove the Borel version of Theorem <ref>.
Let f 2^ω→ 2^ω be a hyp-invariant Borel function. Then either f is constant on a cone of hyperdegrees or f(x) ≥_H x on a cone of hyperdegrees.
By the previous lemma, we can assume there is some α < ω_1 such that for all x on a cone of hyperdegrees, α < ω_1^x and x^(α)≥_T f(x). Let a be the base of such a cone and let r be a presentation of α computable from a. For the rest of the proof, we will work on the cone above a and we will interpret x^(α) to mean the unique jump hierarchy on r that starts with x.
The main idea of the proof is to go through the proof of Theorem <ref> and make sure that every time that proof used determinacy, we can actually get by with just Borel determinacy. The only part of that proof in which we used determinacy was in the proof of Lemma <ref>. In particular, we used determinacy by applying Lemma <ref> to the binary relation R defined by
R(x, y) ⟺ x ≥_T y and f(x) ≡_H y.
The problem is that even if f is Borel, this relation is not Δ^1_1, but only Π^1_1 (since, in general, the formula x ≡_H y is only Π^1_1). We will remedy this problem by showing that the relation R can be replaced by the relation S defined by
S(x, z) ⟺ x ≥_T z and ∃ y ≤_T x (y ≥_T a ∧ x ≤_T y^(α)∧ f(y) = z).
In particular, we will show that the domain of S is cofinal in the Turing degrees. The requirement that y must compute a is necessary to ensure that y^(α) is well-defined (and note that it implies that x must also compute a).
Why is this sufficient? Let's first assume that we can show that the domain of S is cofinal and see why that is enough to complete the proof. Since the definition of S is Δ^1_1 and S satisfies the conditions of Lemma <ref>, there is a pointed perfect tree T and a Turing functional Φ such that for all x ∈ [T], S(x, Φ(x)) holds.
We now claim that Φ(x) ≡_H f(x). To see why, let y be a witness to the truth of S(x, Φ(x)). Then y ≥_T a and so α < ω_1^y. Also y ≤_T x and x ≤_T y^(α), hence x and y are hyp-equivalent. Since f is hyp-invariant, this implies that Φ(x) = f(y) ≡_H f(x).
Thus we have recovered the conclusion of Lemma <ref> and the rest of the proof works unchanged.
Why is this true? We now show that S has cofinal domain. The proof is very similar to the proof of Lemma <ref>. Let x be any real. By joining with a if necessary, we may assume that x is in the cone above a. Since x is in the cone above a, we know that f(x) ≤_T x^(α) and so S(x^(α), f(x)) holds (as witnessed by x itself). Since x ≤_T x^(α), we have succeeded in finding something in the domain of S which is above x.
|
http://arxiv.org/abs/2306.03033v1
|
20230605165101
|
Entropic mean-field min-max problems via Best Response and Fisher-Rao flows
|
[
"Razvan-Andrei Lascu",
"Mateusz B. Majka",
"Łukasz Szpruch"
] |
math.OC
|
[
"math.OC",
"math.PR"
] |
Maxwell Institute for Mathematical Sciences, Edinburgh, UK
[email protected]
School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK, and Maxwell Institute for Mathematical Sciences, Edinburgh, UK
[email protected]
School of Mathematics, University of Edinburgh, UK, and The Alan Turing Institute, UK and Simtopia, UK
[email protected]
We investigate convergence properties of two continuous-time optimization methods, the Mean-Field Best Response and the Fisher-Rao (Mean-Field Birth-Death) flows, for solving convex-concave min-max games with entropy regularization. We introduce suitable Lyapunov functions to establish exponential convergence to the unique mixed Nash equilibrium for both methods, albeit under slightly different conditions. Additionally, we demonstrate the convergence of the fictitious play flow as a by-product of our analysis.
Entropic mean-field min-max problems via Best Response and Fisher-Rao flows
Łukasz Szpruch
July 31, 2023
===========================================================================
§ INTRODUCTION
Learning equilibria in min-max games has gained tremendous popularity motivated by the recent advances in machine learning such as Generative Adversarial Networks <cit.>, adversarial learning <cit.>, multi-agent reinforcement learning <cit.> and fairness in machine learning <cit.>. In this work, we are concerned with the continuous-time convergence analysis of the Mean-Field Best Response (MF-BR) and Fisher-Rao (FR) flows to the unique mixed Nash equilibrium of an entropy-regularized min-max game.
Let 𝒳, 𝒴 be any subsets of ℝ^d (in particular, we allow 𝒳 = 𝒴 = ℝ^d), and let U^π: 𝒳→ℝ, U^ρ: 𝒴→ℝ be two measurable functions such that ∫_𝒳 e^-U^π(x)dx = ∫_𝒴 e^-U^ρ(y)dy = 1.[We omit the normalizing constants Z_π and Z_ρ since we adopt the convention that the potential functions U^π and U^ρ are shifted by log Z_π and log Z_ρ, respectively.] For any 𝒵⊆ℝ^d, by 𝒫_ac(𝒵) we denote the space of probability measures on 𝒵 which are absolutely continuous with respect to the Lebesgue measure. Following a standard convention, we use the same symbol to denote a probability measure in 𝒫_ac(𝒵) as well as its density. If π(x) := e^-U^π(x) and ρ(y) := e^-U^ρ(y),
then the relative entropy D_KL(· | π):𝒫_ac(𝒳) → [0, ∞) with respect to π is given for any ν∈𝒫_ac(𝒳) by
D_KL(ν|π) = ∫_𝒳log(ν(x)/π(x)) ν(x)dx,
and we define D_KL(μ|ρ) analogously for any μ∈𝒫_ac(𝒴). Let F:𝒫(𝒳) ×𝒫(𝒴) →ℝ be a convex-concave (possibly non-linear) function and σ > 0 be a regularization parameter. The min-max problem we study is given by
min_ν∈𝒫(𝒳)max_μ∈𝒫(𝒴) V^σ(ν, μ), with V^σ(ν, μ) := F(ν, μ) + σ^2/2(D_KL(ν|π)-D_KL(μ|ρ)).
In this setting, one is typically interested in searching for mixed Nash equilibria (MNEs) <cit.>, which are defined as pairs of measures (ν^*, μ^*) ∈𝒫(𝒳) ×𝒫(𝒴) that satisfy
V^σ(ν^*, μ) ≤ V^σ(ν^*, μ^*) ≤ V^σ(ν, μ^*), for all (ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴).
We establish the existence of an MNE for (<ref>) in Theorem <ref> in Appendix <ref>. Since we do not assume compactness on 𝒳 and 𝒴, the existence of an MNE does not directly follow from Glicksberg's min-max theorem <cit.>, cf. the discussion in Subsection <ref>. Since ν↦ F(ν, μ) and μ↦ F(ν, μ) are convex and concave (see Assumption <ref>), respectively, Lemma <ref> in Appendix <ref> guarantees uniqueness of the MNE of game (<ref>).
In what follows, we will introduce the MF-BR and FR flows on the spaces (𝒫_ac(𝒳) ×𝒫_ac(𝒴), TV) and (𝒫_ac(𝒳) ×𝒫_ac(𝒴), FR), respectively, where TV and FR are the Total Variation and Fisher-Rao distances, respectively (see Definitions <ref> and <ref> in Appendix <ref>).
§.§ Mean-field best response dynamics
Best response (BR) is a learning algorithm initially proposed in <cit.> for games on ℝ^d (i.e., with finite dimensional sets of strategies) with the purpose of evaluating the payoff function of two-player zero-sum games at the Nash equilibrium. In this learning process, at each round of the game, each player plays their best response against the current strategies of the other players. The convergence analysis of BR both in the discrete and continuous time setup has been studied in detail for games on ℝ^d; see e.g. <cit.>. In the present paper, we introduce the Mean-Field Best Response (MF-BR) flow, which is an infinite-dimensional counterpart of the classical BR algorithm.
In order to motivate the introduction of the MF-BR gradient flow, we start by observing that the MNE (ν^*, μ^*) of (<ref>) solves
ν^* = _ν∈𝒫(𝒳){F(ν, μ^*) + σ^2/2D_KL(ν|π)},
μ^* = _μ∈𝒫(𝒴){F(ν^*, μ) - σ^2/2D_KL(μ|ρ)}.
According to Proposition <ref> in Appendix <ref>, which characterizes (ν^*, μ^*) via a first-order condition, we have that the MNE (ν^*, μ^*) satisfying (<ref>) is given implicitly by the equations
ν^*(x) = 1/Z(ν^*, μ^*)exp( -2/σ^2δ F/δν (ν^*, μ^*, x) - U^π(x) ),
μ^*(y) = 1/Z'(ν^*,μ^*)exp( 2/σ^2δ F/δμ (ν^*, μ^*, y) - U^ρ(y) ),
where Z(ν^*, μ^*) and Z'(ν^*, μ^*) are normalizing constants, whereas δ F/δν and δ F/δμ are flat derivatives of F (see Definition <ref> in Appendix <ref>). The key idea is to show that the MNE for which the equations (<ref>) and (<ref>) hold, satisfies a fixed-point problem. We define Ψ: 𝒫(𝒳) ×𝒫(𝒴) →𝒫_ac(𝒳) and Φ: 𝒫(𝒳) ×𝒫(𝒴) →𝒫_ac(𝒴) which are given by
Ψ(ν, μ)(x) = 1/Z(ν, μ)exp(-2/σ^2δ F/δν(ν, μ, x) - U^π(x)),
Φ(ν, μ)(y) = 1/Z'(ν, μ)exp(2/σ^2δ F/δμ(ν, μ, y) - U^ρ(y)),
for all (x,y) ∈𝒳×𝒴 Lebesgue almost surely, and where Z(ν,μ) and Z'(ν,μ) are normalizing constants depending on ν and μ. Observe that the MNE (ν^*, μ^*) satisfies (<ref>) and (<ref>) if and only if it is also a solution to the fixed point problem
ν(x) = Ψ(ν, μ)(x)
μ(y) = Φ(ν, μ)(y).
Let (ν_t)_t ∈ [0, ∞)⊂𝒫_ac(𝒳) and (μ_t)_t ∈ [0, ∞)⊂𝒫_ac(𝒴) denote the strategies of each player. Then,
since finding the unique MNE of (<ref>) is equivalent to finding the unique fixed point which solves (<ref>), it is natural that the pair of strategies (ν_t, μ_t)_t ≥ 0 evolves on (𝒫_ac(𝒳) ×𝒫_ac(𝒴), TV) along the flow given by
dν_t(x) = α(Ψ(ν_t, μ_t)(x) - ν_t(x) )dt,
dμ_t(y) = α(Φ(ν_t, μ_t)(y) - μ_t(y) )dt, t ≥ 0,
for some initial condition (ν_0, μ_0) ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴), and a parameter (learning rate) α > 0.
Note that a similar algorithm has been studied by <cit.> in the context of a different class of optimization problems on (𝒫_p(ℝ^d), 𝒲_p).
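For readers who wish to experiment with the dynamics, the following is a minimal numerical sketch, not taken from the analysis in this paper, of an explicit Euler discretization of the MF-BR flow for the bilinear example F(ν, μ) = ∫_𝒴∫_𝒳 f(x,y) ν(dx) μ(dy) on finite grids, where probability measures reduce to probability vectors; the grid size, the payoff f, the reference measures, σ, α and the step size dt are all hypothetical choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: X and Y are n-point grids, so measures become
# probability vectors and the bilinear F has payoff matrix K[i, j] = f(x_i, y_j).
n = 50
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
K = np.sin(3.0 * x)[:, None] * np.cos(2.0 * y)[None, :]   # toy payoff f(x, y)

pi = np.ones(n) / n      # reference measure of the min player
rho = np.ones(n) / n     # reference measure of the max player
sigma2 = 0.5             # regularization parameter sigma^2
alpha = 1.0              # learning rate of the MF-BR flow
dt = 1e-2                # Euler step size

def best_responses(nu, mu):
    """Gibbs best responses Psi(nu, mu) and Phi(nu, mu) on the grid.

    For bilinear F the flat derivatives are dF/dnu = K @ mu and dF/dmu = K.T @ nu.
    """
    psi = pi * np.exp(-2.0 / sigma2 * (K @ mu))
    phi = rho * np.exp(+2.0 / sigma2 * (K.T @ nu))
    return psi / psi.sum(), phi / phi.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# arbitrary (strictly positive) initialization
nu = rng.random(n) + 0.1; nu /= nu.sum()
mu = rng.random(n) + 0.1; mu /= mu.sum()

for step in range(2001):
    psi, phi = best_responses(nu, mu)
    if step % 400 == 0:
        # Lyapunov function of the convergence analysis; expected to decay
        # roughly like exp(-alpha * t) along the discretized flow
        print(f"t = {step * dt:5.2f}   KL(nu|Psi) + KL(mu|Phi) = "
              f"{kl(nu, psi) + kl(mu, phi):.3e}")
    nu = nu + alpha * dt * (psi - nu)   # explicit Euler step of the MF-BR flow
    mu = mu + alpha * dt * (phi - mu)
```

Since α dt < 1, each Euler step is a convex combination of the current iterate and its Gibbs best response, so the iterates remain probability vectors with strictly positive entries.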
Since the only solution (ν, μ) to the fixed point problem (<ref>) is automatically the unique MNE of the game, then (<ref>) gives a strong indication for considering the map
t ↦D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t))
as a suitable Lyapunov function in the subsequent convergence analysis of the MF-BR dynamics. Indeed, it holds that D_KL(ν|Ψ(ν,μ)) + D_KL(μ|Φ(ν,μ)) ≥ 0, for all (ν, μ) ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴), with equality if and only if the fixed point problem (<ref>) is satisfied. Hence, if we can show that t ↦D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)) converges to zero as t →∞, then we know that the unique MNE of (<ref>) has been attained.
Another appropriate Lyapunov function, especially in the case of the best response dynamics in games (as demonstrated in the context of games on ℝ^d in e.g. <cit.>), is the so-called Nikaidò-Isoda (NI) error <cit.>, which, for all (ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴), can be defined as
NI(ν,μ) := max_μ' ∈𝒫(𝒴) V^σ(ν, μ') - min_ν' ∈𝒫(𝒳) V^σ(ν', μ).
From the saddle point condition (<ref>), it follows that NI(ν,μ) ≥ 0 and NI(ν,μ) = 0 if and only if (ν, μ) is a MNE. Therefore, if we prove that t ↦NI(ν_t, μ_t) converges to zero as t →∞, then we have precisely shown convergence of the flow to the unique MNE of (<ref>).
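Although it is not needed for the analysis, it may help intuition to record the following standard computation (a sketch, stated for the bilinear case F(ν, μ) = ∫_𝒴∫_𝒳 f(x,y) ν(dx) μ(dy) with f bounded): by the Gibbs variational principle, the two inner optimizations in the NI error can be evaluated in closed form. Writing g_ν(y) = ∫_𝒳 f(x,y) ν(dx) and h_μ(x) = ∫_𝒴 f(x,y) μ(dy),
max_μ' ∈𝒫(𝒴){∫_𝒴 g_ν(y) μ'(dy) - σ^2/2D_KL(μ'|ρ)} = σ^2/2log∫_𝒴 e^(2g_ν(y)/σ^2)ρ(dy),
with the maximum attained at μ'(dy) ∝ e^(2g_ν(y)/σ^2)ρ(dy), and
min_ν' ∈𝒫(𝒳){∫_𝒳 h_μ(x) ν'(dx) + σ^2/2D_KL(ν'|π)} = -σ^2/2log∫_𝒳 e^(-2h_μ(x)/σ^2)π(dx),
with the minimum attained at ν'(dx) ∝ e^(-2h_μ(x)/σ^2)π(dx). In particular, in this bilinear case the maximizer and minimizer appearing in the definition of NI(ν, μ) are exactly the best response maps Φ(ν, μ) and Ψ(ν, μ) defined above.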
§.§.§ Sketch of convergence proof for the MF-BR flow
Our convergence result for the MF-BR flow extends the work <cit.> from the case of a single-player optimization problem to the class of games (<ref>), which requires novel Lyapunov functions.
We also work with the Total Variation instead of the Wasserstein distance, which allows for less regularity of F. For any m,m' ∈𝒫(ℳ), with ℳ⊆ℝ^d, let D_J(m,m') := D_KL(m|m') + D_KL(m'|m) denote Jeffreys divergence <cit.> between m and m'. Assuming the existence of the flow (ν_t, μ_t)_t ≥ 0 satisfying (<ref>), and the differentiability of the map t ↦D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)) for all t > 0, which will be established in Proposition <ref> and Theorem <ref>, we can show that
d/dt(D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t))) ≤ -α(D_J(ν_t|Ψ(ν_t, μ_t)) + D_J(μ_t|Φ(ν_t, μ_t))).
Applying Gronwall's inequality gives
D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)) ≤ e^- α t(D_KL(ν_0|Ψ(ν_0,μ_0)) + D_KL(μ_0|Φ(ν_0,μ_0))).
Using Lemma <ref> from Appendix <ref>, that is
NI(ν_t, μ_t) ≤σ^2/2(D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t))),
we obtain that
NI(ν_t, μ_t) ≤σ^2/2e^-α t(D_KL(ν_0|Ψ(ν_0,μ_0)) + D_KL(μ_0|Φ(ν_0,μ_0))).
§.§ Fisher-Rao (mean-field birth-death) dynamics
Recently, birth-death dynamics have been explored in connection with improving sampling schemes based on overdamped Langevin dynamics <cit.> and accelerating convergence of mean-field one-hidden-layer neural networks trained using vanilla gradient descent <cit.>. These works were generalized in <cit.> where the convergence of a wider class of mean-field birth-death processes is studied. The dynamical formulation of the Fisher-Rao metric (see e.g. <cit.>) states that for any m_0, m_1 ∈𝒫_ac(ℳ), with ℳ⊆ℝ^d,
FR(m_0,m_1) := inf{∫_0^1 ∫_ℳ |r_s|^2 λ_s(dx)ds: ∂_t λ_t = r_t λ_t with λ_0 = m_0, λ_1 = m_1},
where the infimum is taken over all curves [0,1] ∋ t ↦ (λ_t,r_t) ∈𝒫_ac(ℳ) × L^2(ℳ;λ_t) solving ∂_t λ_t = r_t λ_t in the distributional sense, such that t ↦λ_t is weakly continuous with endpoints λ_0 = m_0 and λ_1 = m_1. Inspired by the work in <cit.>, we consider the Fisher-Rao (mean-field birth-death) gradient flow on the space (𝒫_ac(𝒳) ×𝒫_ac(𝒴), FR) in the setting of (<ref>). As opposed to the MF-BR dynamics (<ref>) which rely on introducing the fixed point perspective on min-max games, the FR dynamics utilize a gradient flow (ν_t, μ_t)_t ≥ 0 in the Fisher-Rao geometry. As a first attempt at defining a Fisher-Rao gradient flow for solving (<ref>), consider
∂_t ν_t(x) = -δ V^σ/δν(ν_t, μ_t, x) ν_t(x),
∂_t μ_t(y) = δ V^σ/δμ(ν_t, μ_t, y) μ_t(y),
with initial condition (ν_0, μ_0) ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴).
§.§.§ Sketch of convergence proof for the FR flow
For the sake of presenting an intuitive heuristic argument, we ignore here for now that (ν, μ, x) ↦δ V^σ/δν(ν, μ, x) and (ν, μ, y) ↦δ V^σ/δμ(ν, μ, y) may not exist for the V^σ defined in (<ref>), due to the relative entropy term D_KL being only lower semicontinuous (for this reason, in our analysis in Section <ref>, we will replace these two derivatives with appropriately defined auxiliary functions a and b). Nevertheless, we will now demonstrate that choosing the flow (ν_t, μ_t)_t ≥ 0 as in (<ref>), makes the function
t ↦D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t)
decrease in t, under the assumption of convexity-concavity of F.
Indeed, assuming the existence of the flow (ν_t, μ_t)_t ≥ 0 satisfying (<ref>), and the differentiability of the map t ↦D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t), we formally have that
d/dt(D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t)) = ∫_𝒳∂_t (ν^*(x) logν^*(x)/ν_t(x)) dx + ∫_𝒴∂_t (μ^*(y) logμ^*(y)/μ_t(y)) dy
= -∫_𝒳(ν^*(x) - ν_t(x)) ∂_t ν_t(x)/ν_t(x)dx - ∫_𝒴(μ^*(y) - μ_t(y)) ∂_t μ_t(y)/μ_t(y)dy
= ∫_𝒳δ V^σ/δν(ν_t, μ_t, x) (ν^*-ν_t)(dx) - ∫_𝒴δ V^σ/δμ(ν_t, μ_t, y) (μ^*-μ_t)(dy),
where the second equality follows from the fact that ∫∂_t ν_t(x) dx = ∫∂_t μ_t(y) dy = 0. Then, assuming that ν↦ F(ν, μ) and μ↦ F(ν, μ) are convex and concave, respectively (see Assumption <ref>), we observe that ν↦ V^σ(ν, μ) and μ↦ V^σ(ν, μ) are σ-strongly-convex and σ-strongly-concave relative to D_KL, respectively (see Lemma <ref> in Appendix <ref>), that is
V^σ(ν^*, μ) - V^σ(ν, μ) ≥∫_𝒳δ V^σ/δν(ν, μ, x) (ν^*-ν)(dx) + σ^2/2D_KL(ν^*| ν),
V^σ(ν, μ^*) - V^σ(ν, μ) ≤∫_𝒴δ V^σ/δμ(ν, μ, y)(μ^*-μ)(dy) - σ^2/2D_KL(μ^*| μ).
Therefore, we obtain that
d/dt(D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t)) ≤ V^σ(ν^*, μ_t) - V^σ(ν_t, μ_t) + V^σ(ν_t, μ_t) - V^σ(ν_t, μ^*)
- σ^2/2D_KL(ν^*|ν_t) - σ^2/2D_KL(μ^*|μ_t) ≤ - σ^2/2D_KL(ν^*|ν_t) - σ^2/2D_KL(μ^*|μ_t),
where V^σ(ν^*, μ_t) - V^σ(ν_t, μ^*) ≤ 0 follows from the saddle point condition (<ref>). Hence, applying Gronwall's inequality gives
D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t) ≤ e^-σ^2/2t(D_KL(ν^*|ν_0) + D_KL(μ^*|μ_0)).
Since D_KL(ν^*|ν) + D_KL(μ^*|μ) ≥ 0 with equality if and only if ν = ν^* and μ = μ^*, it follows that the unique MNE of (<ref>) is achieved with exponential rate e^-σ^2/2t.
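As a quick sanity check on this rate (a side computation, not used in the sequel), consider the decoupled case F ≡ 0, in which the two equations above separate and the unique MNE is simply (ν^*, μ^*) = (π, ρ). For the ν-component the flow reads ∂_t ν_t(x) = -σ^2/2(log(ν_t(x)/π(x)) - D_KL(ν_t|π)) ν_t(x), and one checks directly that the geometric interpolation
ν_t(x) ∝π(x)^(1 - λ_t)ν_0(x)^(λ_t), λ_t = e^-σ^2/2t,
is an exact solution: the x-dependent parts of ∂_t logν_t match once λ̇_t = -σ^2/2λ_t, and the x-independent parts match because both sides of the equation integrate to zero against ν_t. Since the normalizing constant is at most one by Hölder's inequality, this yields D_KL(ν^*|ν_t) = D_KL(π|ν_t) ≤ e^-σ^2/2tD_KL(π|ν_0), in line with the rate obtained above, and it also illustrates why a “warm start” is natural for this dynamics: up to the normalizing constant, ν_t(x)/π(x) = (ν_0(x)/π(x))^(λ_t), so bounds on the initial ratio ν_0/π propagate along the flow.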
§.§ Comparison between the MF-BR and FR flows
As we will show in the convergence analysis, both the MF-BR and FR dynamics converge exponentially but with rates which differ significantly in terms of σ. The rate for MF-BR with respect to the map t ↦D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)) is independent of σ (and with respect to t ↦NI(ν_t, μ_t) the rate degenerates quadratically fast with σ→ 0), while for FR the rate degenerates exponentially fast with σ→ 0. Another important aspect to compare is the assumptions used for both dynamics. While the results for both flows rely on fairly standard assumptions such as convexity-concavity of F (see Assumption <ref>) and boundedness of first and second order flat derivatives of F (see Assumption <ref> and <ref>), it is worth noting that the FR flow (<ref>) needs
an additional Assumption <ref> about comparability of the initial condition (ν_0,μ_0) to the reference measures π and ρ. This is a "warm start" condition typically needed for birth-death flows, see the discussions in <cit.>.
On the other hand, the MF-BR flow only needs an additional assumption that the mixed second order flat derivative satisfies δ^2 F/δνδμ(ν, μ, ·, ·) = δ^2 F/δμδν(ν, μ, ·, ·) (see Assumption <ref>), which is trivially satisfied for all F of the form F(ν, μ) = ∫_𝒴∫_𝒳 f(x,y) ν(dx) μ(dy). The initial condition (ν_0,μ_0) in our analysis of the MF-BR flow can be an arbitrary pair of measures in 𝒫_ac(𝒳) ×𝒫_ac(𝒴).
§.§ Summary of contributions
The main contributions of this work are as follows:
* We prove the existence of the MF-BR flow (ν_t, μ_t)_t ≥ 0 and, independently of initialization, we prove its exponential convergence to the unique MNE of (<ref>) via the Lyapunov function
t ↦D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)),
which, to the best of our knowledge, has not appeared in the literature before. Consequently, using (<ref>), we obtain exponential convergence of the MF-BR flow with respect to t ↦NI(ν_t, μ_t).
We show that for games with F(ν, μ) = ∫_𝒴∫_𝒳 f(x,y) ν(dx) μ(dy), where f:𝒳×𝒴→ℝ is bounded, the convergence rate with respect to t ↦NI(ν_t, μ_t) becomes e^-α t (and is hence independent of the regularization parameter σ).
* We prove the existence of the FR flow (ν_t, μ_t)_t ≥ 0 and show that it converges exponentially to the unique MNE of (<ref>) with respect to the Lyapunov function
t ↦D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t).
We also show that NI(1/t∫_0^t ν_s ds, 1/t∫_0^t μ_s ds) converges to zero linearly with rate 1/t.
§.§ Literature review
Recently, there has been intensive research in analyzing the convergence of various types of gradient flows to the set of MNEs in a particular setup of game (<ref>) in which F is bilinear, that is F(ν, μ) = ∫_𝒴∫_𝒳 f(x,y) ν(dx) μ(dy), regularized by the entropy instead of the relative entropy D_KL,
and where 𝒳 and 𝒴 are compact smooth manifolds without boundary, either embedded in Euclidean space or taken to be Euclidean tori, and f is sufficiently regular, i.e., at least continuously differentiable with ∇_x f and ∇_y f satisfying Lipschitz conditions,
see e.g. <cit.>.
§.§.§ Wasserstein gradient flow for games
In this particular setting, <cit.> study the convergence of the Wasserstein gradient flow and obtain exponential convergence to the MNE in the case where the flows of the players converge at different speeds. In <cit.> the speeds of convergence of the flows (ν_t)_t ≥ 0 and (μ_t)_t ≥ 0 are assumed to be different in the sense that one of the flows has achieved equilibrium while the other one is still governed by the Wasserstein gradient flow equation. <cit.> states that under these separated dynamics, the flow (ν_t,μ_t)_t ≥ 0 converges (without explicit rate) to the unique MNE of the game.
In contrast to <cit.>, <cit.> studies the case where the flows of the players converge at different speeds but the timescale separation is finite, i.e., one of the players evolves faster (slower) in time but none of them is at equilibrium. <cit.> shows that the finitely timescale separated Wasserstein gradient flow converges exponentially (with rate depending on the regularization and timescale separation parameters) to the unique MNE of the game.
Under the same setup in which F is bilinear but with σ = 0, <cit.> studies the convergence of the Wasserstein-Fisher-Rao gradient flow. For t_0 > 0 (depending on parameters of the individual contributions of the Wasserstein and the Fisher-Rao components in the WFR flow) and when the Fisher-Rao component dominates the Wasserstein component of the WFR flow, <cit.> shows that the pair (1/t_0∫_0^t_0ν_s ds, 1/t_0∫_0^t_0μ_s ds) is an approximate MNE of the game, i.e., NI(1/t_0∫_0^t_0ν_s ds, 1/t_0∫_0^t_0μ_s ds) ≤ϵ with ϵ > 0 arbitrary.
The authors of <cit.> consider the discrete-time convergence of the WFR flow when F is bilinear, σ=0 and the MNE of the game is unique. Requiring that the flow is initialized sufficiently close to the MNE, <cit.> shows local exponential convergence with respect to the NI error and the WFR distance to the unique MNE of the game.
§.§.§ Best response and fictitious play dynamics
For the class of two-player zero-sum games with payoff function ℝ^d_1×ℝ^d_2∋ (x,y) ↦ x^T A y ∈ℝ, where x,y denote the strategies of the players and A ∈ℝ^d_1 × d_2 denotes the payoff matrix, and assuming that the game may have multiple Nash equilibria, <cit.> establishes that continuous-time BR converges to the set of Nash equilibria with exponential rate e^-t along the Nikaidò-Isoda (NI) error <cit.>. Later, assuming that the payoff function of the game is continuous convex-concave, that the strategy spaces are compact and convex and that the game may have multiple Nash equilibria, <cit.> proves that continuous-time BR converges to the set of Nash equilibria with rate e^-t along the NI error.
In contrast to <cit.> and <cit.>, we consider an infinite-dimensional two-player zero-sum game on the space of probability measures. In our setting, the strategy spaces can be any subsets of ℝ^d, not necessarily compact and convex. An argument from <cit.>, which was later formalized in <cit.>, showed that continuous-time BR and continuous-time fictitious play (see e.g. <cit.> for details on the fictitious play algorithm) are in fact equivalent up to a rescale in time (see Remark <ref>).
More recently, there has been interest in the convergence analysis of continuous-time fictitious play, continuous-time BR and their discrete-time counterparts in the context of zero-sum stochastic games; see e.g. <cit.>, and mean-field games; see e.g. <cit.>.
§ MAIN RESULTS
As we explained in the introduction, we study the convergence of the MF-BR and FR dynamics to the unique MNE of the entropy-regularized two-player zero-sum game given by (<ref>),
where F:𝒫(𝒳) ×𝒫(𝒴) →ℝ is a non-linear function and σ > 0. Throughout the paper, we have the following assumptions on F.
[Convexity-concavity of F]
Suppose F admits first order flat derivatives with respect to both ν and μ as stated in Definition <ref>.
Furthermore, suppose that F is convex in ν and concave in μ, i.e., for any ν, ν' ∈𝒫(𝒳) and any μ, μ' ∈𝒫(𝒴), we have
F(ν', μ) - F(ν, μ) ≥∫_𝒳δ F/δν(ν,μ,x) (ν'-ν)(dx),
F(ν, μ') - F(ν, μ) ≤∫_𝒴δ F/δμ(ν,μ,y) (μ'-μ)(dy).
[Boundedness of first order flat derivatives]
There exist constants C_ν, C_μ > 0 such that for all (ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴) and for all (x, y) ∈𝒳×𝒴, we have
| δ F/δν (ν, μ, x) | ≤ C_ν, | δ F/δμ (ν, μ, y) | ≤ C_μ.
[Boundedness of second order flat derivatives]
Suppose F admits second order flat derivatives and that there exist constants C_ν, ν, C_μ, μ, C_ν, μ, C_μ, ν > 0 such that for all (ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴) and for all (x, y), (z, w) ∈𝒳×𝒴, we have
| δ^2 F/δν^2 (ν, μ, x, z) | ≤ C_ν, ν, | δ^2 F/δμ^2 (ν, μ, y, w) | ≤ C_μ, μ,
| δ^2 F/δνδμ (ν, μ, y, z) | ≤ C_ν, μ, | δ^2 F/δμδν (ν, μ, x, w) | ≤ C_μ, ν.
[Symmetry of second order flat derivatives]
For all (ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴) and for all (x, y) ∈𝒳×𝒴, we have that
δ^2 F/δνδμ (ν, μ, y, x) = δ^2 F/δμδν (ν, μ, x, y),
Using Assumption <ref>, it is straightforward to check that there exist constants C_ν', C_μ' > 0 such that for all (ν,μ) ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴), (ν',μ') ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴) and all (x, y) ∈𝒳×𝒴, we have that
|δ F/δν(ν, μ, x) - δ F/δν(ν', μ', x)| ≤ C_ν'(TV(ν,ν') + TV(μ,μ')),
|δ F/δμ(ν, μ, y) - δ F/δμ(ν', μ', y)| ≤ C_μ'(TV(ν,ν') + TV(μ,μ')).
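A brief sketch of this computation (with constants depending on the chosen normalization of the TV distance): write the difference as
δ F/δν(ν, μ, x) - δ F/δν(ν', μ, x) + δ F/δν(ν', μ, x) - δ F/δν(ν', μ', x).
Applying the definition of the flat derivative to the map ν↦δ F/δν(ν, μ, x) along the segment ν' + s(ν - ν'), the first difference equals
∫_0^1 ∫_𝒳δ^2 F/δν^2(ν' + s(ν - ν'), μ, x, z) (ν - ν')(dz) ds,
whose absolute value is bounded by a multiple of C_ν, νTV(ν,ν'); the second difference is treated in the same way along μ' + s(μ - μ'), using δ^2 F/δμδν, and is bounded by a multiple of C_μ, νTV(μ,μ'). Hence the first inequality holds with C_ν' proportional to max{C_ν, ν, C_μ, ν}, and the second follows analogously with C_μ' proportional to max{C_μ, μ, C_ν, μ}.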
An example of a function F which satisfies Assumptions <ref>, <ref>, <ref>, <ref> is F(ν, μ) = ∫_𝒴∫_𝒳 f(x,y) ν(dx) μ(dy) provided that f:𝒳×𝒴→ℝ is bounded. Indeed, Assumptions <ref> and <ref> are trivially satisfied by such F, while Assumptions <ref> and <ref> hold due to the boundedness of f.
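For this example, one admissible choice of flat derivatives (any other choice differs by an additive constant depending on (ν, μ), which does not affect the assumptions) is
δ F/δν(ν, μ, x) = ∫_𝒴 f(x,y) μ(dy), δ F/δμ(ν, μ, y) = ∫_𝒳 f(x,y) ν(dx),
so that δ^2 F/δν^2 = δ^2 F/δμ^2 = 0 and δ^2 F/δμδν(ν, μ, x, w) = f(x,w) = δ^2 F/δνδμ(ν, μ, w, x). In particular, all first and second order flat derivatives are bounded in terms of the sup-norm of f, the symmetry required in Assumption <ref> is immediate, and the convexity-concavity inequalities of Assumption <ref> hold with equality because F is affine in each of its arguments.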
The following result extends Proposition 2.8 from <cit.> by showing the existence and uniqueness of the pair of flows (ν_t, μ_t)_t ≥ 0 which solve the MF-BR system (<ref>) on (𝒫_ac(𝒳) ×𝒫_ac(𝒴), TV).
Let Assumptions <ref>, <ref> and <ref> hold and let (ν_0, μ_0) ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴). Then there exists a unique pair of flows (ν_t,μ_t)_t ≥ 0 in (𝒫_ac(𝒳) ×𝒫_ac(𝒴), TV) satisfying (<ref>). Moreover, the solutions depend continuously on the initial conditions and, for all (x,y) ∈𝒳×𝒴, the maps [0, ∞) ∋ t ↦ν_t(x) ∈ℝ and [0, ∞) ∋ t ↦μ_t(y) ∈ℝ are in C^1([0, ∞)).
We are ready to state one of the main results of the paper.
Let Assumptions <ref>, <ref>, <ref>, <ref> hold. Then the map t ↦D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)) is differentiable for all t > 0, and we have that
d/dt(D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t))) ≤ -α(D_J(ν_t|Ψ(ν_t, μ_t)) + D_J(μ_t|Φ(ν_t, μ_t))),
where, for any m,m' ∈𝒫(ℳ), with ℳ⊆ℝ^d, D_J(m,m') := D_KL(m|m') + D_KL(m'|m) denotes Jeffreys divergence <cit.> between m and m'. Furthermore, suppose that (ν_0, μ_0) are chosen such that D_KL(ν_0|Ψ(ν_0,μ_0)) + D_KL(μ_0|Φ(ν_0,μ_0)) < ∞. Then,
D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)) ≤ e^-α t(D_KL(ν_0|Ψ(ν_0,μ_0)) + D_KL(μ_0|Φ(ν_0,μ_0))),
NI(ν_t, μ_t) ≤σ^2/2e^-α t(D_KL(ν_0|Ψ(ν_0,μ_0)) + D_KL(μ_0|Φ(ν_0,μ_0))).
Under the same assumptions as in Theorem <ref>, for F(ν, μ) = ∫_𝒴∫_𝒳 f(x,y) ν(dx) μ(dy) with f:𝒳×𝒴→ℝ bounded and NI(ν_0, μ_0) < ∞, it holds that
NI(ν_t, μ_t) ≤ e^-α tNI(ν_0, μ_0).
Lastly, we would like to demonstrate that in the continuous time setting the convergence study for the MF-BR flow (<ref>) consequently leads to the convergence of a related type of flow known as fictitious play (FP) in the literature of games on ℝ^d (see e.g. <cit.>). In the setup of min-max games on ℝ^d, it is shown in <cit.> that the continuous-time best response and fictitious play dynamics are equivalent up to a time rescale.
An intrinsic feature of the best response algorithm is that players know the opponent's strategy at the exact same round when they make their move as opposed to fictitious play, where players respond best against the historical distribution of the opponent's strategies. The distinction between the FP and BR flows is that for fictitious play, the flow equations hold at the level of the averaged-in-time strategies (ν̂_t, μ̂_t) := (1/t∫_0^t ν_s ds, 1/t∫_0^t μ_s ds). We show how to recover the (mean-field) fictitious play flow from the MF-BR flow (<ref>). From Proposition <ref>, we have that, for all (x,y) ∈𝒳×𝒴, the maps [0, ∞) ∋ t ↦ν_t(x) ∈ℝ and [0, ∞) ∋ t ↦μ_t(y) ∈ℝ are in C^1([0, ∞)), and solve (<ref>).
Therefore, by setting ν̂_t = ν_log t and μ̂_t = μ_log t, for all t ≥ t_0 > 0, with initial condition (ν̂_t_0, μ̂_t_0) ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴), and applying the chain rule, we obtain that
dν̂_t(x) = 1/tdν_log t(x) = α/t(Ψ(ν_log t, μ_log t)(x) - ν_log t(x))dt = α/t(Ψ(ν̂_t, μ̂_t)(x) - ν̂_t(x))dt,
dμ̂_t(y) = 1/tdμ_log t(y) = α/t(Φ(ν_log t, μ_log t)(y) - μ_log t)(y)dt = α/t(Φ(ν̂_t, μ̂_t)(y) - μ̂_t(y))dt,
for all (x,y) ∈𝒳×𝒴, which is precisely the mean-field version of the classical fictitious play flow studied for instance in <cit.>.
This fact suggests that in continuous time one could arbitrarily choose to work with either the MF-BR flow (<ref>) or the fictitious play flow (<ref>) since the convergence rates for the flow (<ref>) can be obtained from Theorem <ref> via a change in timescale. Specifically, we can show that the maps t ↦D_KL(ν̂_t|Ψ(ν̂_t,μ̂_t)) + D_KL(μ̂_t|Φ(ν̂_t,μ̂_t)) and t ↦NI(ν̂_t, μ̂_t) decrease along the flow (<ref>) with rates α/t and ασ^2/2t, respectively.
Further, we analyze the convergence of the FR dynamics to the unique MNE of the game (<ref>) in the case where the following additional assumption is satisfied.
[Ratio condition]
Suppose (ν_0, μ_0) ∈𝒫(𝒳) ×𝒫(𝒴) are absolutely continuous and comparable with π and ρ, respectively, in the sense that
* There exist constants r_ν, r_μ>0 such that
inf_x ∈𝒳ν_0(x)/π(x)≥ r_ν, inf_y ∈𝒴μ_0(y)/ρ(y)≥ r_μ.
* There exist constants R_ν, R_μ > 1 such that
sup_x ∈𝒳ν_0(x)/π(x)≤ R_ν, sup_y ∈𝒴μ_0(y)/ρ(y)≤ R_μ.
Combining (<ref>) and (<ref>) and Assumption <ref>, we deduce that Assumption <ref> is equivalent to assuming that there exist constants r̅_ν, r̅_μ > 0 and R̅_ν, R̅_μ > 1 such that for all (x, y) ∈𝒳×𝒴,
r̅_ν≤ν_0(x)/ν^*(x)≤R̅_ν, r̅_μ≤μ_0(y)/μ^*(y)≤R̅_μ.
We emphasize that Assumption <ref> is natural in the context of Fisher-Rao flows because one needs to choose an appropriate initialization (ν_0, μ_0) in order to ensure that the flow (ν_t, μ_t)_t ≥ 0 remains absolutely continuous with respect to the MNE (ν^*, μ^*). As observed in <cit.>, Assumption <ref> can be understood as a “warm start” type of condition.
Returning to the question of flat differentiability of V^σ, which was raised in Subsection <ref>, if we assume that F is flat differentiable with respect to both ν and μ (see Assumption <ref>), then the maps (ν, μ, x) ↦ a(ν, μ, x) := δ F/δν(ν, μ, x) + σ^2/2log(ν(x)/π(x)) - σ^2/2D_KL(ν|π) and (ν, μ, y) ↦ b(ν, μ, y) := δ F/δμ(ν, μ, y) - σ^2/2log(μ(y)/ρ(y)) + σ^2/2D_KL(μ|ρ) are well-defined and formally correspond to the flat derivatives δ V^σ/δν(ν, μ, ·) and δ V^σ/δμ(ν, μ, ·), respectively, for those measures ν and μ for which such derivatives exist (note that we will only need to consider a and b along our gradient flow (ν_t,μ_t)_t ≥ 0, so our argument can be interpreted as stating that, while V^σ is not flat differentiable everywhere, it is indeed flat differentiable along our gradient flow). The relative entropy terms D_KL appear in the definition of a and b as normalizing constants to ensure that ∫_𝒳 a(ν,μ,x) ν(dx) = 0 and ∫_𝒴 b(ν,μ,y) μ(dy) = 0, since we adopt the convention that the flat derivatives of F are uniquely defined up to an additive shift (see Definition <ref> in Appendix <ref>). Motivated by this discussion, we define the Fisher-Rao gradient flow (ν_t, μ_t)_t ≥ 0 on the space (𝒫_ac(𝒳) ×𝒫_ac(𝒴), FR) by
∂_t ν_t(x) = -a(ν_t, μ_t, x) ν_t(x),
∂_t μ_t(y) = b(ν_t, μ_t, y) μ_t(y),
with initial condition (ν_0, μ_0) ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴).
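As with the MF-BR flow, a simple grid experiment can help build intuition for this dynamics. The sketch below is not part of the paper's analysis: it discretizes the flow for the same hypothetical bilinear payoff as in the earlier sketch, using a multiplicative Euler step followed by renormalization (which absorbs the x-independent relative entropy terms in a and b), and it computes a numerical reference for (ν^*, μ^*) by a damped fixed-point iteration purely for comparison; all constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical grid discretization of the Fisher-Rao (birth-death) flow for the
# bilinear payoff F(nu, mu) = sum_ij K[i, j] nu[i] mu[j].
n = 50
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
K = np.sin(3.0 * x)[:, None] * np.cos(2.0 * y)[None, :]

pi = np.ones(n) / n
rho = np.ones(n) / n
sigma2 = 1.0
dt = 1e-2

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def gibbs(nu, mu):
    """Best responses Psi(nu, mu), Phi(nu, mu); used only to locate the MNE."""
    psi = pi * np.exp(-2.0 / sigma2 * (K @ mu))
    phi = rho * np.exp(+2.0 / sigma2 * (K.T @ nu))
    return psi / psi.sum(), phi / phi.sum()

# Reference MNE (nu*, mu*), computed here by a damped fixed-point iteration on
# the first-order conditions; this is a numerical convenience, not part of the flow.
nu_star, mu_star = pi.copy(), rho.copy()
for _ in range(5000):
    psi, phi = gibbs(nu_star, mu_star)
    nu_star = 0.9 * nu_star + 0.1 * psi
    mu_star = 0.9 * mu_star + 0.1 * phi

# "Warm start" initialization comparable to pi and rho.
nu = 0.5 * pi + 0.5 * rng.dirichlet(np.ones(n))
mu = 0.5 * rho + 0.5 * rng.dirichlet(np.ones(n))

for step in range(3001):
    if step % 500 == 0:
        print(f"t = {step * dt:5.2f}   KL(nu*|nu) + KL(mu*|mu) = "
              f"{kl(nu_star, nu) + kl(mu_star, mu):.3e}")
    # a and b from the flow, up to the additive constants absorbed by renormalization
    a = (K @ mu) + 0.5 * sigma2 * np.log(nu / pi)
    b = (K.T @ nu) - 0.5 * sigma2 * np.log(mu / rho)
    # multiplicative (birth-death) Euler step, then renormalize
    nu = nu * np.exp(-dt * a); nu /= nu.sum()
    mu = mu * np.exp(+dt * b); mu /= mu.sum()
```

The multiplicative update keeps the iterates strictly positive, mimicking the fact that the Fisher-Rao flow only creates and destroys mass where mass is already present; with the warm-start initialization above, the printed quantity is expected to decrease, consistent with the exponential rate established for the continuous-time flow.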
The following result extends Theorem 2.1 from <cit.> to the case of two-player zero-sum games by showing that the Fisher-Rao gradient flow (<ref>) admits a unique solution (ν_t, μ_t)_t ≥ 0.
Suppose that Assumption <ref>, <ref> and condition (<ref>) from Assumption <ref> hold. Then the system of equations (<ref>) has a unique solution (ν_t, μ_t)_t ≥ 0. Moreover, for t≥ 0,
D_KL(ν_t|π) ≤ 2 log R_ν + 4/σ^2C_ν, D_KL(μ_t|ρ) ≤ 2 log R_μ + 4/σ^2C_μ
and there exist constants R_1, ν, R_1, μ > 1 such that for all t ≥ 0,
sup_x ∈𝒳ν_t(x)/π(x)≤ R_1, ν, sup_y ∈𝒴μ_t(y)/ρ(y)≤ R_1, μ.
Additionally, if condition (<ref>) from Assumption <ref> holds, then there exist constants r_1, ν, r_1, μ > 0 such that for all t ≥ 0,
inf_x ∈𝒳ν_t(x)/π(x)≥ r_1, ν, inf_y ∈𝒴μ_t(y)/ρ(y)≥ r_1, μ.
The bounds obtained in this theorem allow us to prove that the map t ↦D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t) is differentiable along the FR dynamics (<ref>).
Next, we state the other main result of this paper. We prove two different types of convergence results for the FR dynamics (<ref>) in min-max games given by (<ref>).
Suppose that Assumption <ref>, <ref> and <ref> hold. Then the map t ↦D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t) is differentiable along the FR dynamics (<ref>). Suppose furthermore that Assumption <ref> holds and that (ν_0, μ_0) are chosen such that D_KL(ν^*|ν_0) + D_KL(μ^*|μ_0) < ∞. Then, for all t ≥ 0, we have
D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t) ≤ e^-σ^2/2t(D_KL(ν^*|ν_0) + D_KL(μ^*|μ_0)),
NI(1/t∫_0^t ν_s ds, 1/t∫_0^t μ_s ds) ≤1/t(D_KL(ν^*|ν_0) + D_KL(μ^*|μ_0)).
In game theoretic language, the first result of Theorem <ref> says that convergence of the FR dynamics (<ref>) to the unique MNE (ν^*, μ^*) in terms of the strategies ν_t and μ_t is achieved with exponential rate depending on σ. On the other hand, the second result gives a convergence estimate in terms of the NI error, which in turn is expressed as a function of the payoff V. More precisely, the NI error of the time-averaged flow converges to 0 with linear rate 1/t regardless of the value of σ.
Exponential convergence of the single mean-field birth-death flow (ν_t)_t ≥ 0 with respect to D_KL(ν^*|ν_t) can be shown to also hold in the setting of <cit.>, however it was not studied in <cit.>, which considered only convergence of V(ν_t) for a convex energy function V: 𝒫(ℝ^d) →ℝ.
It is worth commenting that the second result (<ref>) of Theorem <ref> can be obtained by only requiring F to be convex-concave (see Assumption <ref>) and setting σ = 0, i.e., the proof of (<ref>) does not use the fact that V^σ is regularized by D_KL with a positive σ. However, we are presently unaware of a way to prove the existence of the FR flow (<ref>) for σ = 0.
The assumptions on the reference measures π and ρ are only necessary in our proofs of the results concerning the FR flow (<ref>) but we choose to also keep π and ρ in the analysis for the MF-BR dynamics for consistency. Indeed, one can remove π and ρ and regularize only by entropy instead of the relative entropy D_KL in the case of MF-BR. The results and their corresponding proofs would remain unchanged.
§ PROOF OF THEOREM <REF> AND COROLLARY <REF>
Before we present the proof of Theorem <ref>, we state some useful auxiliary results which are proved in Appendix <ref>. We split the proof of Theorem <ref> into three steps:
* First, we show that the map t ↦D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)) is differentiable when (ν_t, μ_t)_t ≥ 0 satisfies the MF-BR flow.
* Second, we differentiate t ↦D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)) with respect to t and show that d/dt(D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t))) is bounded above by -α(D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t))).
* Lastly, we finish by applying Gronwall's inequality to obtain exponential convergence. Subsequently, we establish exponential convergence with respect to t ↦NI(ν_t, μ_t).
The lemma below is an adaptation of the first part of <cit.> to the min-max setting (<ref>).
Suppose that Assumption <ref> holds. Then there exist constants k_Ψ,K_Ψ, k_Φ,K_Φ with 0<k_Ψ<1<K_Ψ<∞ and 0<k_Φ<1<K_Φ<∞ such that for all (ν,μ) ∈𝒫(𝒳) ×𝒫(𝒴) and all (x,y) ∈𝒳×𝒴, we have that
k_Ψe^-U^π(x)≤Ψ(ν, μ)(x) ≤ K_Ψe^-U^π(x),
k_Φe^-U^ρ(y)≤Φ(ν, μ)(y) ≤ K_Φe^-U^ρ(y),
where, by an abuse of notation, Ψ(ν, μ)(x) and Φ(ν, μ)(y) denote the densities of Ψ(ν, μ) and Φ(ν, μ), respectively, with respect to the Lebesgue measure on 𝒳 and 𝒴. Moreover, Ψ(ν, μ) and Φ(ν, μ) belong to 𝒫(𝒳) and 𝒫(𝒴).
The corollary and lemma below are extensions of <cit.> and <cit.>, respectively, to the min-max setting (<ref>).
Let Assumption <ref>, <ref> and <ref> hold. Then, for all (x, y) ∈𝒳×𝒴 and all t ≥ 0, the following bounds on ν_t(x) and μ_t(y) hold:
( 1 - e^-α t) k_Ψ e^-U^π(x)≤ν_t (x) ≤( 1 - e^-α t) K_Ψ e^-U^π(x) + e^-α tν_0(x),
( 1 - e^-α t) k_Φ e^-U^ρ(y)≤μ_t (y) ≤( 1 - e^-α t) K_Φ e^-U^ρ(y) + e^-α tμ_0(y).
Let Assumption <ref>, <ref> and <ref> hold and let s > 0. There exist integrable functions f, g, f̂, ĝ such that the following holds for all (x,y) ∈𝒳×𝒴
and all s ≤ t < +∞
g(x) ≤logν_t(x)/e^-U^π(x)(Ψ(ν_t, μ_t)(x) - ν_t(x)) ≤ f(x),
ĝ(y) ≤logμ_t(y)/e^-U^ρ(y)(Φ(ν_t, μ_t)(y) - μ_t(y)) ≤f̂(y).
Finally, we state an auxiliary Lemma <ref>, linking the NI error to the Lyapunov function t ↦D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)).
The proof of this lemma can also be found in Appendix <ref>.
Let Assumption <ref> hold. Then, for V^σ given by (<ref>) and for any (ν_t, μ_t) ∈𝒫(𝒳) ×𝒫(𝒴) and any σ > 0, we have that
NI(ν_t,μ_t) ≤σ^2/2(D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t))).
Step 1: Differentiability of D_KL with respect to the MF-BR dynamics (<ref>):
The differentiability of the map t ↦D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)) follows from a standard argument utilizing the dominated convergence theorem (see <cit.> for more details). Let t > 0 and a sequence (t_n)_n ∈ℕ⊂ (0, ∞) such that t_n ≠ t, for all n ∈ℕ, and lim_n →∞ t_n = t. From the definition of D_KL, we have that
∂_t D_KL(ν_t|Ψ(ν_t,μ_t)) = lim_n →∞D_KL(ν_t_n|Ψ(ν_t_n,μ_t_n)) - D_KL(ν_t|Ψ(ν_t,μ_t))/t_n - t
= lim_n →∞∫_𝒳1/t_n-t(ν_t_n(x) logν_t_n(x)/Ψ(ν_t_n,μ_t_n)(x) - ν_t(x) logν_t(x)/Ψ(ν_t,μ_t)(x)) dx.
Observe that
lim_n →∞1/t_n-t (ν_t_n(x) logν_t_n(x)/Ψ(ν_t_n,μ_t_n)(x) - ν_t(x) logν_t(x)/Ψ(ν_t,μ_t)(x))
= ∂_t (ν_t(x) logν_t(x)/Ψ(ν_t,μ_t)(x)).
Therefore, we can exchange the limit and the integral in (<ref>) using the dominated convergence theorem if there exist integrable functions h̲_Ψ, h̄_Ψ: 𝒳→ℝ such that, for all t > 0 and all x ∈𝒳, it holds that
h̲_Ψ(x) ≤∂_t (ν_t(x) logν_t(x)/Ψ(ν_t,μ_t)(x)) ≤h̄_Ψ(x).
We can argue similarly to show that t ↦D_KL(μ_t|Φ(ν_t,μ_t)) is differentiable by finding integrable functions h̲_Φ, h̄_Φ: 𝒴→ℝ such that, for all t > 0 and all y ∈𝒴, we have that
h̲_Φ(y) ≤∂_t (μ_t(y) logμ_t(y)/Φ(ν_t,μ_t)(y)) ≤h̄_Φ(y).
Using Lemma <ref>, Corollary <ref> and Lemma <ref>, we will show how to obtain h̲_Ψ and h̄_Ψ since obtaining h̲_Φ and h̄_Φ is analogous. First, we can rewrite
∂_t (ν_t(x) logν_t(x)/Ψ(ν_t,μ_t)(x)) = ∂_t (ν_t(x) logν_t(x)/e^-U^π(x) + ν_t(x) loge^-U^π(x)/Ψ(ν_t,μ_t)(x))
= (1+ logν_t(x)/e^-U^π(x))∂_t ν_t(x) + ∂_t(ν_t(x) loge^-U^π(x)/Ψ(ν_t,μ_t)(x))
= α(1+ logν_t(x)/e^-U^π(x))(Ψ(ν_t,μ_t)(x) - ν_t(x))
+ α(loge^-U^π(x)/Ψ(ν_t,μ_t)(x))(Ψ(ν_t,μ_t)(x) - ν_t(x)) - ν_t(x) ∂_t logΨ(ν_t,μ_t)(x).
Next, we will bound by integrable functions each of the three terms in the last equality.
Bound for the term α(1+ logν_t(x)/e^-U^π(x))(Ψ(ν_t,μ_t)(x) - ν_t(x)):
By Lemma <ref>, there exist integrable functions g, f: 𝒳→ℝ such that for all x ∈𝒳,
g(x) ≤logν_t(x)/e^-U^π(x)(Ψ(ν_t,μ_t)(x) - ν_t(x)) ≤ f(x).
Recall that estimate (<ref>) from Lemma <ref> is
k_Ψe^-U^π(x)≤Ψ(ν_t, μ_t)(x) ≤ K_Ψe^-U^π(x),
and from estimate (<ref>) in Corollary <ref>, we have that
0 ≤( 1 - e^-α t) k_Ψ e^-U^π(x)≤ν_t (x) ≤( 1 - e^-α t) K_Ψ e^-U^π(x) + e^-α tν_0(x)
≤ K_Ψ e^-U^π(x) + ν_0(x).
Thus, we get
h̲_1(x) := -α((K_Ψ - k_Ψ)e^-U^π(x) + ν_0(x)) ≤α(Ψ(ν_t,μ_t)(x) - ν_t(x))
≤α K_Ψe^-U^π(x) =: h̄_1(x),
and hence we obtain that
h̲_1(x) + α g(x) ≤α(1+ logν_t(x)/e^-U^π(x))(Ψ(ν_t,μ_t)(x) - ν_t(x)) ≤α f(x) + h̄_1(x).
Bound for the term α(loge^-U^π(x)/Ψ(ν_t,μ_t)(x))(Ψ(ν_t,μ_t)(x) - ν_t(x)):
We can split the second term as
α loge^-U^π(x)/Ψ(ν_t,μ_t)(x)(Ψ(ν_t,μ_t)(x) - ν_t(x)) = αΨ(ν_t,μ_t)(x)loge^-U^π(x)/Ψ(ν_t,μ_t)(x)
- αν_t(x)loge^-U^π(x)/Ψ(ν_t,μ_t)(x).
From estimate (<ref>) in Lemma <ref>, we have that
log1/K_Ψ≤loge^-U^π(x)/Ψ(ν_t,μ_t)(x)≤log1/k_Ψ,
and since Ψ(ν_t,μ_t)(x) ≥ 0, it follows that
Ψ(ν_t,μ_t)(x)log1/K_Ψ≤Ψ(ν_t,μ_t)(x)loge^-U^π(x)/Ψ(ν_t,μ_t)(x)≤Ψ(ν_t,μ_t)(x)log1/k_Ψ
Note that K_Ψ > 1 > k_Ψ > 0 and so log1/K_Ψ < 0 < log1/k_Ψ. Therefore, using estimate (<ref>) again we get that
α K_Ψe^-U^π(x)log1/K_Ψ≤αΨ(ν_t,μ_t)(x)log1/K_Ψ,
and
αΨ(ν_t,μ_t)(x)log1/k_Ψ≤α K_Ψe^-U^π(x)log1/k_Ψ.
Hence, we obtain
h̲_2(x) := α K_Ψe^-U^π(x)log1/K_Ψ≤αΨ(ν_t,μ_t)(x)loge^-U^π(x)/Ψ(ν_t,μ_t)(x)
≤α K_Ψe^-U^π(x)log1/k_Ψ =: h̄_2(x).
Next, since ν_t(x) ≥ 0 and log1/K_Ψ < 0 < log1/k_Ψ, it follows from estimates (<ref>) and (<ref>) that
αν_t(x)log1/K_Ψ≤αν_t(x)loge^-U^π(x)/Ψ(ν_t,μ_t)(x)≤αν_t(x) log1/k_Ψ
≤α(K_Ψ e^-U^π(x) + ν_0(x))log1/k_Ψ.
Since log1/K_Ψ < 0, using again estimate (<ref>) gives that
α(K_Ψ e^-U^π(x) + ν_0(x))log1/K_Ψ≤αν_t(x)log1/K_Ψ,
and hence
-h̄_3(x) := α(K_Ψ e^-U^π(x) + ν_0(x))log1/K_Ψ≤αν_t(x)loge^-U^π(x)/Ψ(ν_t,μ_t)(x)
≤α(K_Ψ e^-U^π(x) + ν_0(x))log1/k_Ψ =: -h̲_3(x).
Therefore, we obtain
h̲_2(x) + h̲_3(x) ≤α(loge^-U^π(x)/Ψ(ν_t,μ_t)(x))(Ψ(ν_t,μ_t)(x) - ν_t(x)) ≤h̄_2(x) + h̄_3(x).
Bound for the term ν_t(x) ∂_t logΨ(ν_t,μ_t)(x):
First, using the expression of Ψ from (<ref>), we can calculate that
∂_t logΨ(ν_t,μ_t)(x) = -∂_t log Z(ν_t,μ_t) - 2/σ^2∂_t δ F/δν(ν_t, μ_t, x)
= -∂_t Z(ν_t,μ_t)/Z(ν_t,μ_t) - 2/σ^2∂_t δ F/δν(ν_t, μ_t, x)
= - 1/Z(ν_t,μ_t)(∫_𝒳δ Z/δν(ν_t,μ_t,z)∂_t ν_t(z)dz + ∫_𝒴δ Z/δμ(ν_t,μ_t,w)∂_t μ_t(w)dw)
- 2/σ^2(∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z)∂_t ν_t(z)dz + ∫_𝒴δ^2 F/δμδν(ν_t,μ_t,x,w)∂_t μ_t(w)dw).
Then, observe that
1/Z(ν_t,μ_t)δ Z/δν(ν_t,μ_t,z) = -2/σ^2∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z) Ψ(ν_t, μ_t)(x) dx,
1/Z(ν_t,μ_t)δ Z/δμ(ν_t,μ_t,w) = -2/σ^2∫_𝒳δ^2 F/δμδν(ν_t,μ_t,x,w) Ψ(ν_t, μ_t)(x) dx,
and hence, we obtain that
∂_t logΨ(ν_t,μ_t)(x) = 2/σ^2(∫_𝒳∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z) Ψ(ν_t, μ_t)(x)∂_t ν_t(z)dxdz
+ ∫_𝒴∫_𝒳δ^2 F/δμδν(ν_t,μ_t,x,w) Ψ(ν_t, μ_t)(x) ∂_t μ_t(w)dxdw
- ∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z)∂_t ν_t(z)dz - ∫_𝒴δ^2 F/δμδν(ν_t,μ_t,x,w)∂_t μ_t(w)dw)
= 2α/σ^2(∫_𝒳∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z) Ψ(ν_t, μ_t)(x)Ψ(ν_t, μ_t) (z)dxdz
- ∫_𝒳∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z) Ψ(ν_t, μ_t)(x)ν_t(z)dxdz
- ∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z)Ψ(ν_t, μ_t)(z)dz + ∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z)ν_t(z)dz)
+ 2α/σ^2(∫_𝒴∫_𝒳δ^2 F/δμδν(ν_t,μ_t,x,w) Ψ(ν_t, μ_t)(x) Φ(ν_t, μ_t)(w) dxdw
- ∫_𝒴∫_𝒳δ^2 F/δμδν(ν_t,μ_t,x,w) Ψ(ν_t, μ_t)(x) μ_t(w)dxdw
- ∫_𝒴δ^2 F/δμδν(ν_t,μ_t,x,w)Φ(ν_t, μ_t)(w)dw + ∫_𝒴δ^2 F/δμδν(ν_t,μ_t,x,w)μ_t(w)dw) ,
where the second equality follows from the MF-BR flow (<ref>).
Using (<ref>) from Assumption <ref> and the fact that Ψ, Φ, ν and μ are all probability density functions, it follows that
-8α/σ^2C_ν, ν -8α/σ^2C_μ, ν≤∂_t logΨ(ν_t,μ_t)(x) ≤8α/σ^2C_ν, ν + 8α/σ^2C_μ, ν.
Multiplying the last inequality by ν_t(x) ≥ 0 and using (<ref>), we get that
-(8α/σ^2C_ν, ν +8α/σ^2C_μ, ν)(K_Ψ e^-U^π(x) + ν_0(x)) ≤ν_t(x)∂_t logΨ(ν_t,μ_t)(x)
≤(8α/σ^2C_ν, ν +8α/σ^2C_μ, ν)(K_Ψ e^-U^π(x) + ν_0(x)) =: h_4(x).
Putting everything together, we finally obtain that
h̲_Ψ(x) ≤∂_t (ν_t(x) logν_t(x)/Ψ(ν_t,μ_t)(x)) ≤h̄_Ψ(x),
where
h̲_Ψ(x) = h̲_1(x) + α g(x) + h̲_2(x) + h̲_3(x) - h_4(x),
h̄_Ψ(x) = h̄_1(x) + α f(x) + h̄_2(x) + h̄_3(x) + h_4(x).
Step 2: The Lyapunov function decreases along the MF-BR dynamics:
Since the map t ↦D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)) is differentiable for all t > 0, we have that
d/dt(D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t))) = ∫_𝒳∂_t (ν_t(x) logν_t(x)/Ψ(ν_t,μ_t)(x)) dx
+ ∫_𝒴∂_t (μ_t(y) logμ_t(y)/Φ(ν_t,μ_t)(y)) dy
= ∫_𝒳logν_t(x)/Ψ(ν_t,μ_t)(x)∂_t ν_t(x) dx + ∫_𝒳ν_t(x)(∂_t ν_t(x)/ν_t(x) - ∂_t logΨ(ν_t,μ_t)(x)) dx
+ ∫_𝒴logμ_t(y)/Φ(ν_t,μ_t)(y)∂_t μ_t(y) dy + ∫_𝒴μ_t(y)(∂_t μ_t(y)/μ_t(y) - ∂_t logΦ(ν_t,μ_t)(y)) dy
= α∫_𝒳logν_t(x)/Ψ(ν_t,μ_t)(x)(Ψ(ν_t,μ_t)(x) - ν_t(x)) dx - ∫_𝒳ν_t(x)∂_t logΨ(ν_t,μ_t)(x) dx
+ α∫_𝒴logμ_t(y)/Φ(ν_t,μ_t)(y)(Φ(ν_t,μ_t)(y) - μ_t(y)) dy - ∫_𝒴μ_t(y)∂_t logΦ(ν_t,μ_t)(y) dy
= -α(D_KL(ν_t|Ψ(ν_t, μ_t)) + D_KL(Ψ(ν_t, μ_t)|ν_t) + D_KL(μ_t|Φ(ν_t, μ_t)) + D_KL(Φ(ν_t, μ_t)|μ_t))
- ∫_𝒳ν_t(x)∂_t logΨ(ν_t,μ_t)(x) dx - ∫_𝒴μ_t(y)∂_t logΦ(ν_t,μ_t)(y) dy.
where the third equality follows from the MF-BR flow (<ref>) and the fact that ∫_𝒳∂_t ν_t(x) dx = ∫_𝒴∂_t μ_t(y) dy = 0. From (<ref>), we recall that
∂_t logΨ(ν_t,μ_t)(x) = 2/σ^2(∫_𝒳∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z) Ψ(ν_t, μ_t)(x)∂_t ν_t(z)dxdz
+ ∫_𝒴∫_𝒳δ^2 F/δμδν(ν_t,μ_t,x,w) Ψ(ν_t, μ_t)(x) ∂_t μ_t(w)dxdw
- ∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z)∂_t ν_t(z)dz - ∫_𝒴δ^2 F/δμδν(ν_t,μ_t,x,w)∂_t μ_t(w)dw).
Therefore, we obtain that
-∫_𝒳ν_t(x)∂_t logΨ(ν_t,μ_t)(x) dx = -2/σ^2(∫_𝒳∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z) Ψ(ν_t, μ_t)(x)∂_t ν_t(z)dxdz
+ ∫_𝒴∫_𝒳δ^2 F/δμδν(ν_t,μ_t,x,w) Ψ(ν_t, μ_t)(x) ∂_t μ_t(w)dxdw
- ∫_𝒳∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z)∂_t ν_t(z)ν_t(x)dzdx - ∫_𝒳∫_𝒴δ^2 F/δμδν(ν_t,μ_t,x,w)∂_t μ_t(w) ν_t(x)dwdx).
Using (<ref>) from Assumption <ref> and the fact that Ψ(ν_t, μ_t) and ν_t are probability density functions , we have that
∫_𝒳∫_𝒳 |δ^2 F/δν^2(ν_t,μ_t,x,z)∂_t ν_t(z)ν_t(x)|dzdx
= ∫_𝒳∫_𝒳α|δ^2 F/δν^2(ν_t,μ_t,x,z)(Ψ(ν_t, μ_t)(z) - ν_t(z))ν_t(x)|dzdx
≤∫_𝒳∫_𝒳α|δ^2 F/δν^2(ν_t,μ_t,x,z)|Ψ(ν_t, μ_t)(z)ν_t(x)dzdx
+ ∫_𝒳∫_𝒳α|δ^2 F/δν^2(ν_t,μ_t,x,z)|ν_t(z)ν_t(x)dzdx
≤ 2α C_ν, ν < ∞.
Similarly, we have that
∫_𝒳∫_𝒴|δ^2 F/δμδν(ν_t,μ_t,x,w)∂_t μ_t(w) ν_t(x)|dwdx ≤ 2α C_μ, ν < ∞.
Therefore, we can apply Fubini's theorem and obtain that
-∫_𝒳ν_t(x)∂_t logΨ(ν_t,μ_t)(x) dx
= -2/σ^2(∫_𝒳∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z) (Ψ(ν_t, μ_t)(x) - ν_t(x))∂_t ν_t(z)dxdz
+ ∫_𝒴∫_𝒳δ^2 F/δμδν(ν_t,μ_t,x,w) (Ψ(ν_t, μ_t)(x) - ν_t(x)) ∂_t μ_t(w)dxdw)
= -2α/σ^2∫_𝒳∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z) (Ψ(ν_t, μ_t)(x) - ν_t(x))(Ψ(ν_t, μ_t)(z) - ν_t(z))dxdz
-2α/σ^2∫_𝒴∫_𝒳δ^2 F/δμδν(ν_t,μ_t,x,w) (Ψ(ν_t, μ_t)(x) - ν_t(x))(Φ(ν_t, μ_t)(w) - μ_t(w))dxdw
≤ -2α/σ^2∫_𝒴∫_𝒳δ^2 F/δμδν(ν_t,μ_t,x,w) (Ψ(ν_t, μ_t)(x) - ν_t(x))(Φ(ν_t, μ_t)(w) - μ_t(w))dxdw,
where the last inequality follows from
∫_𝒳∫_𝒳δ^2 F/δν^2(ν_t,μ_t,x,z) (Ψ(ν_t, μ_t)(x) - ν_t(x))(Ψ(ν_t, μ_t)(z) - ν_t(z))dxdz ≥ 0
due to the convexity of the map ν↦ F(ν, μ).
Similarly to (<ref>), we obtain that
∂_t logΦ(ν_t,μ_t)(y) = -2/σ^2(∫_𝒴∫_𝒴δ^2 F/δμ^2(ν_t,μ_t,y,w) Φ(ν_t, μ_t)(y)∂_t μ_t(w)dydw
+ ∫_𝒳∫_𝒴δ^2 F/δνδμ(ν_t,μ_t,y,z) Φ(ν_t, μ_t)(y) ∂_t ν_t(z)dydz
- ∫_𝒴δ^2 F/δμ^2(ν_t,μ_t,y,w)∂_t μ_t(w)dw - ∫_𝒳δ^2 F/δνδμ(ν_t,μ_t,y,z)∂_t ν_t(z)dz)
and
-∫_𝒴μ_t(y)∂_t logΦ(ν_t,μ_t)(y) dy = 2/σ^2(∫_𝒴∫_𝒴δ^2 F/δμ^2(ν_t,μ_t,y,w) Φ(ν_t, μ_t)(y)∂_t μ_t(w)dydw
+ ∫_𝒳∫_𝒴δ^2 F/δνδμ(ν_t,μ_t,y,z) Φ(ν_t, μ_t)(y) ∂_t ν_t(z)dydz
- ∫_𝒴∫_𝒴δ^2 F/δμ^2(ν_t,μ_t,y,w)∂_t μ_t(w)μ_t(y)dwdy - ∫_𝒴∫_𝒳δ^2 F/δνδμ(ν_t,μ_t,y,z)∂_t ν_t(z)μ_t(y)dzdy).
As we showed before, using (<ref>) from Assumption <ref>, we can apply Fubini's theorem and get that
-∫_𝒴μ_t(y)∂_t logΦ(ν_t,μ_t)(y) dy
= 2/σ^2(∫_𝒴∫_𝒴δ^2 F/δμ^2(ν_t,μ_t,y,w) (Φ(ν_t, μ_t)(y) - μ_t(y))∂_t μ_t(w)dydw
+ ∫_𝒳∫_𝒴δ^2 F/δνδμ(ν_t,μ_t,y,z) (Φ(ν_t, μ_t)(y) - μ_t(y)) ∂_t ν_t(z)dydz)
≤2α/σ^2∫_𝒳∫_𝒴δ^2 F/δνδμ(ν_t,μ_t,y,z) (Φ(ν_t, μ_t)(y) - μ_t(y))(Ψ(ν_t, μ_t)(z) - ν_t(z))dydz,
where the last inequality follows from
∫_𝒴∫_𝒴δ^2 F/δμ^2(ν_t,μ_t,y,w) (Φ(ν_t, μ_t)(y) - μ_t(y))(Φ(ν_t, μ_t)(w) - μ_t(w))dydw ≤ 0
due to the concavity of the map μ↦ F(ν, μ).
Combining (<ref>), (<ref>) and (<ref>) gives that
d/dt(D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)))
= -α(D_KL(ν_t|Ψ(ν_t, μ_t)) + D_KL(Ψ(ν_t, μ_t)|ν_t) + D_KL(μ_t|Φ(ν_t, μ_t)) + D_KL(Φ(ν_t, μ_t)|μ_t))
- ∫_𝒳ν_t(x)∂_t logΨ(ν_t,μ_t)(x) dx - ∫_𝒴μ_t(y)∂_t logΦ(ν_t,μ_t)(y) dy
≤ -α(D_KL(ν_t|Ψ(ν_t, μ_t)) + D_KL(Ψ(ν_t, μ_t)|ν_t) + D_KL(μ_t|Φ(ν_t, μ_t)) + D_KL(Φ(ν_t, μ_t)|μ_t))
- 2α/σ^2∫_𝒴∫_𝒳δ^2 F/δμδν(ν_t,μ_t,x,w) (Ψ(ν_t, μ_t)(x) - ν_t(x))(Φ(ν_t, μ_t)(w) - μ_t(w))dxdw
+ 2α/σ^2∫_𝒳∫_𝒴δ^2 F/δνδμ(ν_t,μ_t,y,z) (Φ(ν_t, μ_t)(y) - μ_t(y))(Ψ(ν_t, μ_t)(z) - ν_t(z))dydz.
Again, using (<ref>) from Assumption <ref> to justify the use of Fubini's theorem and Assumption <ref>, the last two terms cancel and we obtain that
d/dt(D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t)))
≤ -α(D_KL(ν_t|Ψ(ν_t, μ_t)) + D_KL(Ψ(ν_t, μ_t)|ν_t) + D_KL(μ_t|Φ(ν_t, μ_t)) + D_KL(Φ(ν_t, μ_t)|μ_t)).
Step 3: Convergence of the MF-BR dynamics in D_KL and NI error:
From the inequality above, we have that
d/dt(D_KL(ν_t|Ψ(ν_t,μ_t)) + D_KL(μ_t|Φ(ν_t,μ_t))) ≤ -α(D_KL(ν_t|Ψ(ν_t, μ_t)) + D_KL(Ψ(ν_t, μ_t)|ν_t)
+ D_KL(μ_t|Φ(ν_t, μ_t)) + D_KL(Φ(ν_t, μ_t)|μ_t))
≤ -α(D_KL(ν_t|Ψ(ν_t, μ_t)) + D_KL(μ_t|Φ(ν_t, μ_t))),
and hence we deduce the conclusion from Gronwall's inequality. The convergence with respect to the NI error with rate σ^2/2e^-α t follows from Lemma <ref>.
Next, we obtain exponential convergence (with rate independent of σ) of the MF-BR flow in terms of the NI error when F is bilinear.
If we set
F(ν, μ) = ∫_𝒴∫_𝒳 f(x,y) ν(dx) μ(dy),
with f:𝒳×𝒴→ℝ bounded, then Assumption <ref>, <ref>, <ref> and <ref> still hold according to Remark <ref>, and moreover we have equality in Lemma <ref>. Therefore, the convergence estimate with respect to the NI error reads
NI(ν_t, μ_t) ≤σ^2/2e^-α t(D_KL(ν_0|Ψ(ν_0, μ_0)) + D_KL(μ_0|Φ(ν_0, μ_0))) = σ^2/2e^-α t2/σ^2NI(ν_0, μ_0)
= e^-α tNI(ν_0, μ_0).
§ PROOF OF THEOREM <REF>
Before we present the proof of Theorem <ref>, we begin with a technical lemma, which extends <cit.> to the min-max games setup, and which is proved in Appendix <ref>. The proof of Theorem <ref> is done in two steps:
* First, we show that the map t ↦D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t) is differentiable when (ν_t, μ_t)_t ≥ 0 is defined as the FR flow (<ref>).
* Second, we show that d/dt(D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t)) is bounded from above by -σ^2/2(D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t)), and then we apply Gronwall's inequality to obtain exponential convergence. Subsequently, we establish linear convergence for t ↦NI(1/t∫_0^t ν_s ds, 1/t∫_0^t μ_s ds).
For V^σ given by (<ref>), if Assumption <ref> holds and (ν^*, μ^*) is a saddle point of V^σ, that is V^σ(ν^*, μ) ≤ V^σ(ν^*, μ^*) ≤ V^σ(ν, μ^*), for all (ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴), then V^σ satisfies the following inequalities for all (ν, μ) ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴):
V^σ(ν^*, μ) - V^σ(ν, μ) ≥∫_𝒳 a(ν, μ, x) (ν^*-ν)(dx) + σ^2/2D_KL(ν^*| ν),
V^σ(ν, μ^*) - V^σ(ν, μ) ≤∫_𝒴 b(ν, μ, y)(μ^*-μ)(dy) - σ^2/2D_KL(μ^*| μ).
Step 1: Differentiability of D_KL with respect to the FR flow (<ref>):
Suppose that Assumption <ref>, <ref> and <ref> hold. In order to show the differentiability of t ↦D_KL(ν^*|ν_t) with respect to (<ref>), we can argue similarly to the proof of Theorem <ref> and use the dominated convergence theorem. Repeating the argument, it suffices to show that there exists an integrable function f: 𝒳→ℝ such that
|∂_t (ν^*(x) logν^*(x)/ν_t(x))| ≤ f(x),
for all t ≥ 0. Indeed, using (<ref>) and (<ref>), we have that
|∂_t (ν^*(x) logν^*(x)/ν_t(x))| = |ν^*(x) ∂_t ν_t(x)/ν_t(x)|
= |ν^*(x)(δ F/δν (ν_t, μ_t, x) + σ^2/2log( ν_t(x)/π(x)) - σ^2/2D_KL(ν_t|π))|
≤(3C_ν + σ^2/2( max{|log r_1, ν|, log R_1, ν} + 2 log R_ν))ν^*(x) =: f(x).
An identical argument gives the differentiability of t ↦D_KL(μ^*|μ_t). Then, we have that the map t ↦D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t) is differentiable.
Step 2: Convergence of the FR flow:
Since t ↦D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t) is differentiable, we have that
d/dt (D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t)) = ∫_𝒳∂_t (ν^*(x) logν^*(x)/ν_t(x)) dx + ∫_𝒴∂_t (μ^*(y) logμ^*(y)/μ_t(y)) dy
= -∫_𝒳(ν^*(x) - ν_t(x)) ∂_t ν_t(x)/ν_t(x)dx - ∫_𝒴(μ^*(y) - μ_t(y)) ∂_t μ_t(y)/μ_t(y)dy
= ∫_𝒳 a(ν_t, μ_t, x) (ν^*-ν_t)(dx) - ∫_𝒴 b(ν_t, μ_t, y) (μ^*-μ_t)(dy),
where in the second equality we used the fact that ∫_𝒳∂_t ν_t(x) dx = ∫_𝒴∂_t μ_t(y) dy = 0.
If σ > 0 and Assumption <ref> holds then, using Lemma <ref>, (<ref>) implies that
d/dt(D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t)) ≤ V^σ(ν^*, μ_t) - V^σ(ν_t, μ_t) + V^σ(ν_t,μ_t) - V^σ(ν_t, μ^*)
- σ^2/2D_KL(ν^*|ν_t) - σ^2/2D_KL(μ^*|μ_t)
= -NI(ν_t, μ_t) - σ^2/2(D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t))
≤ - σ^2/2(D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t)),
where the last inequality follows from the fact that NI(ν_t, μ_t) ≥ 0, for all t ≥ 0. Hence, by Gronwall's inequality, we obtain that
D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t) ≤ e^-σ^2/2t(D_KL(ν^*|ν_0) + D_KL(μ^*|μ_0)).
On the other hand, since D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t) ≥ 0, we have from before that
d/dt (D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t))
≤ -NI(ν_t, μ_t) - σ^2/2(D_KL(ν^*|ν_t) + D_KL(μ^*|μ_t))
≤ -NI(ν_t, μ_t).
Hence, integrating this inequality from 0 to t > 0 and dividing by t, it follows that
1/t∫_0^t NI(ν_s, μ_s) ds ≤1/t(D_KL(ν^*|ν_0) + D_KL(μ^*|μ_0) - D_KL(ν^*|ν_t) - D_KL(μ^*|μ_t))
≤1/t(D_KL(ν^*|ν_0) + D_KL(μ^*|μ_0)).
Recall that NI(ν_t, μ_t) = max_μ' ∈𝒫(𝒴)V^σ(ν_t, μ') - min_ν' ∈𝒫(𝒳)V^σ(ν', μ_t). Then, using the definition of Φ and the fact that ν↦max_μ' ∈𝒫(𝒴)V^σ(ν, μ') is convex, it follows from Jensen's inequality that
1/t∫_0^t max_μ' ∈𝒫(𝒴)V^σ(ν_s, μ') ds = 1/t∫_0^t V^σ(ν_s, Φ(ν_s, μ_s)) ds ≥ V^σ(1/t∫_0^t ν_s ds, Φ(ν_s, μ_s))
= max_μ' ∈𝒫(𝒴) V^σ(1/t∫_0^t ν_s ds, μ').
Similarly, using the definition of Ψ and the fact that μ↦min_ν' ∈𝒫(𝒳)V^σ(ν', μ) is concave, Jensen's inequality gives that
1/t∫_0^t min_ν' ∈𝒫(𝒳)V^σ(ν', μ_s) ds ≤min_ν' ∈𝒫(𝒳) V^σ(ν', 1/t∫_0^t μ_s ds).
Therefore, we obtain that
NI(1/t∫_0^t ν_s ds, 1/t∫_0^t μ_s ds) ≤1/t∫_0^t NI(ν_s, μ_s) ds,
and hence the conclusion follows.
§ ACKNOWLEDGEMENTS
R-AL was supported by the EPSRC Centre for Doctoral Training in Mathematical Modelling, Analysis and Computation (MAC-MIGS) funded by the UK Engineering and Physical Sciences Research Council (grant EP/S023291/1), Heriot-Watt University and the University of Edinburgh. LS acknowledges the support of the UKRI Prosperity Partnership Scheme (FAIR) under EPSRC Grant EP/V056883/1 and the Alan Turing Institute.
§ OUTLINE OF THE APPENDIX
In Section <ref> of the following appendix, we present the proofs of the remaining results formulated in Section <ref> of the paper. In Section <ref>, we recall some important definitions.
§ TECHNICAL RESULTS AND PROOFS
§.§ Auxiliary results
In this subsection, we present the proofs of Lemma <ref>, Corollary <ref>, Lemma <ref>, Lemma <ref> and Lemma <ref>, but first we recall a result based on <cit.>, which will be useful throughout the appendix, regarding the characterization of MNEs via first-order conditions (see also <cit.>).
Assume that F satisfies the conditions in Definition <ref> and that Assumption <ref> holds. Then, the pair (ν^*, μ^*) ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴) is a MNE of (<ref>), i.e. ν^* ∈_ν' ∈𝒫(𝒳) V^σ(ν',μ^*) and μ^* ∈_μ' ∈𝒫(𝒴) V^σ(ν^*,μ'), if and only if it satisfies the following first-order condition for all (x, y) ∈𝒳×𝒴 Lebesgue almost surely:
δ F/δν(ν^*, μ^*, x) + σ^2/2log(ν^*(x)/π(x)) = constant,
δ F/δμ(ν^*, μ^*, y) - σ^2/2log(μ^*(y)/ρ(y)) = constant.
From Assumption <ref>, we have the estimates
exp( -2/σ^2 C_ν - U^π(x) ) ≤exp(- 2/σ^2δ F/δν(ν, μ, x) - U^π(x) ) ≤exp( 2/σ^2 C_ν - U^π(x) ),
exp(-2C_ν/σ^2) ≤ Z(ν, μ) ≤exp(2C_ν/σ^2).
Thus, we obtain (<ref>) with constant K_Ψ = 1/k_Ψ = exp( 4/σ^2 C_ν)> 1. Moreover, by construction,
∫_𝒳Ψ(ν, μ) (dx) = ∫_𝒳Ψ(ν, μ) (x) dx
= 1/Z(ν,μ)∫_𝒳exp(- 2/σ^2δ F/δν(ν, μ, x) - U^π(x)) dx = 1,
and therefore Ψ(ν, μ)∈𝒫(𝒳). One can argue similarly for Φ(ν, μ).
From (<ref>) and (<ref>), we have that, for all t ≥ 0,
k_Ψe^-U^π(x)≤Ψ(ν_t, μ_t)(x), k_Φe^-U^ρ(y)≤Φ(ν_t, μ_t)(y).
By Duhamel's formula we can rewrite equations in (<ref>) as
ν_t(x) = e^-α tν_0(x) + ∫_0^t α e^-α (t-s)Ψ(ν_s, μ_s)(x) ds,
μ_t(y) = e^-α tμ_0(y) + ∫_0^t α e^-α (t-s)Φ(ν_s, μ_s)(y) ds.
Therefore, using (<ref>) and (<ref>), it follows that
ν_t (x) ≥∫_0^t α e^-α (t-s)Ψ(ν_s, μ_s)(x) ds ≥ k_Ψe^-U^π(x)∫_0^t α e^-α (t-s)ds = ( 1 - e^-α t) k_Ψ e^-U^π(x),
μ_t (y) ≥∫_0^t α e^-α (t-s)Φ(ν_s, μ_s)(y) ds
≥ k_Φe^-U^ρ(y)∫_0^t α e^-α (t-s)ds = (1 - e^-α t) k_Φ e^-U^ρ(y).
The proof for the upper bounds is similar.
First, we derive lower and upper bounds on Ψ(ν_t, μ_t)(x)logν_t(x)/e^-U^π(x). Using the bounds given by (<ref>) and (<ref>), we have
Ψ(ν_t, μ_t)(x)logν_t(x)/e^-U^π(x)≥Ψ(ν_t, μ_t)(x)log( 1 - e^-α t) k_Ψ e^-U^π(x)/ e^-U^π(x)
= Ψ(ν_t, μ_t)(x)log((1 - e^-α t) k_Ψ)
≥Ψ(ν_t, μ_t)(x)log( (1 - e^-α s) k_Ψ)
≥log( (1 - e^-α s) k_Ψ) K_Ψ e^-U^π(x) =: g_1(x),
where the second inequality uses that t ↦ 1 - e^-α t is non-decreasing (recall that t ≥ s), and the last inequality follows from the fact that k_Ψ∈ (0,1), so that log( (1 - e^-α s) k_Ψ) < 0.
The upper bound is obtained as follows. From Duhamel's formula (<ref>) and (<ref>), we have that
logν_t(x)/e^-U^π(x) = log( e^-α tν_0 (x)/e^-U^π(x) + ∫_0^t α e^-α (t-s)Ψ(ν_s, μ_s) (x)/e^-U^π(x) ds )
≤log( e^-α tν_0 (x)/e^-U^π(x) + ∫_0^t α e^-α (t-s) K_Ψ ds )
= log( e^-α tν_0 (x)/e^-U^π(x) + (1 - e^-α t) K_Ψ)
≤log((1 - e^-α t) K_Ψ) + ν_0 (x)/e^-U^π(x)e^-α t/(1 - e^-α t) K_Ψ
≤log K_Ψ + ν_0(x)/K_Ψe^-U^π(x)κ_s,
where in the second inequality we used the inequality log(x + y) ≤log x + y/x and in the last inequality we maximize over t ≥ s and take κ_s := sup_t ≥ se^-α t/(1 - e^-α t) = e^-α s/(1 - e^-α s). Finally, the upper bound is given by
Ψ(ν_t, μ_t)(x)logν_t(x)/e^-U^π(x)≤(log K_Ψ + ν_0(x)/K_Ψe^-U^π(x)κ_s) Ψ(ν_t, μ_t) (x)
≤(log K_Ψ + ν_0(x)/K_Ψe^-U^π(x)κ_s) K_Ψ e^-U^π(x) = K_Ψ e^-U^π(x)log K_Ψ + κ_sν_0(x) =: f_1(x).
Now, consider the second term ν_t(x)logν_t(x)/e^-U^π(x). By the convexity of the map z ↦ zlog z and using Duhamel's formula (<ref>), we have by Jensen's inequality that
ν_t(x) logν_t(x)/e^-U^π(x) ≤ e^-α tν_0(x)logν_0(x)/e^-U^π(x) + ∫_0^t α e^-α(t-s)Ψ(ν_s, μ_s)(x) logΨ(ν_s, μ_s)(x)/e^-U^π(x)ds
≤ e^-α tν_0(x)logν_0(x)/e^-U^π(x) + ∫_0^t α e^-α(t-s)Ψ(ν_s,μ_s) (x)log K_Ψds
≤ e^-α tν_0(x)logν_0(x)/e^-U^π(x) + ∫_0^t α e^-α(t-s) K_Ψ e^-U^π(x)log K_Ψds
≤ν_0(x)logν_0(x)/e^-U^π(x) + K_Ψ e^-U^π(x)log K_Ψ =: -g_2 (x),
where the second and third inequalities follow from (<ref>).
For the lower bound, we observe that
ν_t(x) logν_t(x)/e^-U^π(x) = ν_t(x)/e^-U^π(x) e^-U^π(x)logν_t(x)/e^-U^π(x)≥ -1/e e^-U^π(x) =: -f_2 (x),
where we used the fact that the map z ↦ zlog z is continuous with the global minimum at z=1/e. The conclusion follows if we set f(x) := f_1(x) + f_2(x) and g(x) := g_1(x) + g_2(x). One could argue similarly to obtain ĝ and f̂.
Using Proposition <ref>, we observe that (Ψ, Φ) as defined in (<ref>) and (<ref>) also satisfy
δ F/δν(ν_t, μ_t, x) + σ^2/2logΨ(ν_t, μ_t)(x)/π(x) = C_t,
δ F/δμ(ν_t, μ_t, y) - σ^2/2logΦ(ν_t, μ_t)(y)/ρ(y) = C_t',
for all (x,y) ∈𝒳×𝒴 Lebesgue almost surely, where C_t, C_t'∈ℝ. Therefore, we have that
max_μ' ∈𝒫(𝒴)V^σ(ν_t,μ') - V^σ(ν_t,μ_t) = V^σ(ν_t, Φ(ν_t, μ_t)) - V^σ(ν_t, μ_t)
≤∫_𝒴δ F/δμ(ν_t, μ_t, y) (Φ(ν_t, μ_t) - μ_t)(dy) + σ^2/2(D_KL(ν_t|π) - D_KL(Φ(ν_t, μ_t)|ρ) )
- σ^2/2(D_KL(ν_t|π) - D_KL(μ_t|ρ) )
= ∫_𝒴C_t'(Φ(ν_t, μ_t) - μ_t)(dy) + σ^2/2∫_𝒴logΦ(ν_t, μ_t)(y)/ρ(y)(Φ(ν_t, μ_t) - μ_t)(dy)
- σ^2/2D_KL(Φ(ν_t, μ_t)|ρ) + σ^2/2D_KL(μ_t|ρ)
= -σ^2/2∫_𝒴logΦ(ν_t, μ_t)(y)/ρ(y)μ_t(dy) + σ^2/2∫_𝒴logμ_t(y)/ρ(y)μ_t(dy) = σ^2/2D_KL(μ_t|Φ(ν_t, μ_t)),
where the first equality follows from the definition of Φ, the first inequality follows from (<ref>) in Assumption <ref> and the second equality follows from (<ref>). Similarly, using (<ref>) from Assumption <ref> and (<ref>), we have that
min_ν' ∈𝒫(𝒳)V^σ(ν',μ_t) - V^σ(ν_t,μ_t) ≥ -σ^2/2D_KL(ν_t|Ψ(ν_t, μ_t)).
Therefore, we can finish the proof by adding the two inequalities above and recalling that NI(ν_t, μ_t) = max_μ' ∈𝒫(𝒴)V^σ(ν_t,μ') - min_ν' ∈𝒫(𝒳)V^σ(ν',μ_t).
Using (<ref>) from Assumption <ref>, it follows that
V^σ(ν^*, μ) - V^σ(ν, μ) ≥∫_𝒳δ F/δν(ν,μ,x) (ν^*-ν)(dx) + σ^2/2D_KL(ν^*|π) - σ^2/2D_KL(ν|π)
= ∫_𝒳(δ F/δν(ν,μ,x) + σ^2/2log(ν(x)/π(x))) (ν^*-ν)(dx) - σ^2/2∫_𝒳log(ν(x)/π(x))(ν^*-ν)(dx)
+ σ^2/2∫_𝒳log(ν^*(x)/π(x))ν^*(dx) - σ^2/2∫_𝒳log(ν(x)/π(x))ν(dx)
= ∫_𝒳(δ F/δν(ν,μ,x) + σ^2/2log(ν(x)/π(x))) (ν^*-ν)(dx) + σ^2/2D_KL(ν^*|ν)
= ∫_𝒳 a(ν, μ, x) (ν^*-ν)(dx) + σ^2/2D_KL(ν^*|ν).
Similarly, using (<ref>) from Assumption <ref>, it follows that
V^σ(ν, μ^*) - V^σ(ν, μ) ≤∫_𝒴δ F/δμ(ν,μ,y) (μ^*-μ)(dy) - σ^2/2D_KL(μ^*|ρ) + σ^2/2D_KL(μ|ρ)
= ∫_𝒴(δ F/δμ(ν,μ,y) - σ^2/2log(μ(y)/ρ(y)))(μ^*-μ)(dy) + σ^2/2∫_𝒴log(μ(y)/ρ(y))(μ^*-μ)(dy)
- σ^2/2∫_𝒴log(μ^*(y)/ρ(y))μ^*(dy) + σ^2/2∫_𝒴log(μ(y)/ρ(y))μ(dy)
= ∫_𝒴(δ F/δμ(ν,μ,y) - σ^2/2log(μ(y)/ρ(y)))(μ^*-μ)(dy) - σ^2/2D_KL(μ^*|μ)
=∫_𝒴 b(ν, μ, y)(μ^*-μ)(dy) - σ^2/2D_KL(μ^*|μ).
§.§ Existence and uniqueness of the MF-BR flow
In this subsection, we present the proof of our main result concerning the existence and uniqueness of the Mean-Field Best Response (MF-BR) flow, i.e., Proposition <ref>. The proof follows a classical Picard iteration technique. Lemma <ref> shows that a Picard iteration that we use for proving existence of the MF-BR flow is indeed contractive in an appropriate metric, which then helps us to conclude the proof of Proposition <ref>.
Before presenting the proof of Proposition <ref>, we state and prove a useful auxiliary result. The lemma below is an adaptation of the second part of <cit.> to the min-max setting (<ref>). In contrast to <cit.>, we work with the total variation instead of the Wasserstein distance, which helps us to simplify some aspects of the argument, and to avoid imposing an additional assumption of Lipschitz continuity of the flat derivative of F (cf. <cit.>).
Suppose that Assumption <ref> and <ref> hold. Then there exist constants L_Ψ, L_Φ > 0 such that for all (ν, μ), (ν', μ') ∈𝒫(𝒳) ×𝒫(𝒴), it holds that
|Ψ(ν, μ)(x) - Ψ(ν', μ')(x)| ≤ L_Ψe^-U^π(x)(TV(ν,ν') + TV(μ,μ')),
|Φ(ν, μ)(y) - Φ(ν', μ')(y)| ≤ L_Φe^-U^ρ(y)(TV(ν,ν') + TV(μ,μ')),
and hence the maps Ψ : 𝒫(𝒳) ×𝒫(𝒴) →𝒫(𝒳) and Φ : 𝒫(𝒳) ×𝒫(𝒴) →𝒫(𝒴) are TV-Lipschitz in the sense that there exist L,L' > 0 such that
TV(Ψ(ν, μ), Ψ(ν', μ')) ≤ L(TV(ν, ν') + TV(μ, μ')),
TV(Φ(ν, μ), Φ(ν', μ')) ≤ L'(TV(ν, ν') + TV(μ, μ')).
From Assumption <ref>, using (<ref>), (<ref>) and the estimate |e^x - e^y| ≤ e^max{x,y}|x-y|, we have
| exp(- 2/σ^2δ F/δν(ν, μ, x) - U^π(x) ) - exp(- 2/σ^2δ F/δν(ν', μ', x) - U^π(x) ) |
≤2/σ^2exp(2/σ^2C_ν)e^-U^π(x)(TV(ν,ν') + TV(μ,μ')).
Integrating the previous inequality with respect to x, we obtain
|Z(ν, μ) - Z(ν', μ')| ≤2/σ^2exp(2/σ^2C_ν)(TV(ν,ν') + TV(μ,μ')).
Therefore, we have that
|Ψ(ν, μ)(x) - Ψ(ν', μ')(x)| = |1/Z(ν, μ)exp(- 2/σ^2δ F/δν(ν, μ, x) - U^π(x) )
- 1/Z(ν', μ')exp(- 2/σ^2δ F/δν(ν, μ, x) - U^π(x) ) + 1/Z(ν', μ')exp(- 2/σ^2δ F/δν(ν, μ, x) - U^π(x) )
- 1/Z(ν', μ')exp(- 2/σ^2δ F/δν(ν', μ', x) - U^π(x) )|
≤exp(- 2/σ^2δ F/δν(ν, μ, x) - U^π(x) ) |Z(ν', μ') - Z(ν, μ)|/Z(ν, μ)Z(ν', μ')
+ 1/Z(ν', μ')|exp(- 2/σ^2δ F/δν(ν, μ, x) - U^π(x) ) - exp(- 2/σ^2δ F/δν(ν', μ', x) - U^π(x) )|.
Using estimates (<ref>), (<ref>), (<ref>) and (<ref>), we arrive at the Lipschitz property (<ref>) with L_Ψ := 2/σ^2exp(3C_ν/σ^2)(1+exp(3C_ν/σ^2)) >0. Proving (<ref>) follows the same steps as above but with L_Φ := 2/σ^2exp(3C_μ/σ^2)(1+exp(3C_μ/σ^2)).
Now, integrating (<ref>) on 𝒳, and applying <cit.>, that is TV(m,m') = 1/2∫_𝒳 |m(x)-m'(x)|dx, for any m,m' ∈𝒫_ac(𝒳), it follows that
TV(Ψ(ν, μ), Ψ(ν', μ')) ≤L_Ψ/2(TV(ν, ν') + TV(μ, μ')),
and we set L := L_Ψ/2 > 0. One similarly obtains that Φ is TV-Lipschitz with constant L' := L_Φ/2.
Step 1: Existence of gradient flow on [0,T].
By Duhamel's formula we can rewrite equations in (<ref>) as
ν_t(x) = e^-α tν_0(x) + ∫_0^t α e^-α (t-s)Ψ(ν_s, μ_s)(x) ds,
μ_t(y) = e^-α tμ_0(y) + ∫_0^t α e^-α (t-s)Φ(ν_s, μ_s)(y) ds.
Based on these expressions, we will define a Picard iteration scheme as follows. Fix T > 0 and for each n ≥ 1, fix ν_0^(n) = ν_0^(0) = ν_0 and μ_0^(n) = μ_0^(0) = μ_0. Then define (ν_t^(n))_t ∈ [0,T] and (μ_t^(n))_t ∈ [0,T] by
ν^(n)_t(x) = e^-α tν_0(x) + ∫_0^t α e^-α (t-s)Ψ(ν^(n-1)_s, μ^(n-1)_s)(x) ds,
μ^(n)_t(y) = e^-α tμ_0(y) + ∫_0^t α e^-α (t-s)Φ(ν^(n-1)_s, μ^(n-1)_s)(y) ds.
For fixed T > 0, we consider the sequence of flows ( (ν_t^(n), μ_t^(n))_t ∈ [0,T])_n=0^∞ in
(𝒫_ac(𝒳)^[0,T]×𝒫_ac(𝒴)^[0,T], 𝒯𝒱^[0,T]), where, for any (ν_t, μ_t)_t ∈ [0,T]∈𝒫_ac(𝒳)^[0,T]×𝒫_ac(𝒴)^[0,T], the distance 𝒯𝒱^[0,T] is defined by
𝒯𝒱^[0,T]( (ν_t, μ_t)_t ∈ [0,T], (ν_t',μ_t')_t ∈ [0,T]) := ∫_0^T TV(ν_t, ν_t') dt + ∫_0^T TV(μ_t, μ_t') dt.
Since (𝒫(𝒳), TV) is complete, we can apply the argument from <cit.> with p=1 to conclude that (𝒫(𝒳)^[0,T], ∫_0^T TV(ν_t, ν_t') dt) and (𝒫(𝒴)^[0,T], ∫_0^T TV(μ_t, μ_t') dt) are complete. Therefore, one can deduce that (𝒫(𝒳)^[0,T]×𝒫(𝒴)^[0,T], 𝒯𝒱^[0,T]) is also complete.
On the other hand, it is straightforward to check that (𝒫_ac(𝒳), TV) is closed. Indeed, take a sequence (μ_n)_n ≥ 1⊂𝒫_ac(𝒳) such that μ_n →μ in TV for some μ∈𝒫(𝒳).
By Definition <ref>, since μ_n →μ in TV, it follows that μ_n(A) →μ(A) for all sets A ∈ℬ(𝒳), where ℬ(𝒳) is the Borel σ-algebra on 𝒳. Since (μ_n)_n ≥ 1⊂𝒫_ac(𝒳), choosing A with Lebesgue measure 0 implies that μ_n(A) = 0 for all n ≥ 1. Hence, μ(A) = 0, i.e. μ∈𝒫_ac(𝒳). Therefore, (𝒫_ac(𝒳), TV) is closed. Then clearly both (𝒫_ac(𝒳)^[0,T], ∫_0^T TV(ν_t, ν_t') dt) and (𝒫_ac(𝒴)^[0,T], ∫_0^T TV(μ_t, μ_t') dt) are closed and therefore (𝒫_ac(𝒳)^[0,T]×𝒫_ac(𝒴)^[0,T], 𝒯𝒱^[0,T]) is closed. But then since 𝒫_ac(𝒳)^[0,T]×𝒫_ac(𝒴)^[0,T]⊂𝒫(𝒳)^[0,T]×𝒫(𝒴)^[0,T] and the latter is complete in TV-norm, we obtain that (𝒫_ac(𝒳)^[0,T]×𝒫_ac(𝒴)^[0,T], 𝒯𝒱^[0,T]) is complete.
We consider the Picard iteration mapping ϕ((ν_t^(n-1), μ_t^(n-1))_t ∈ [0,T]) := (ν_t^(n), μ_t^(n))_t ∈ [0,T] defined via (<ref>) and (<ref>) and show that ϕ is a contraction in the complete space (𝒫_ac(𝒳)^[0,T]×𝒫_ac(𝒴)^[0,T], 𝒯𝒱^[0,T]). Then the Banach fixed point theorem will give us the existence and uniqueness of each of the solutions to equations in (<ref>).
The mapping ϕ((ν_t^(n-1), μ_t^(n-1))_t ∈ [0,T]) := (ν_t^(n), μ_t^(n))_t ∈ [0,T] defined via (<ref>) and (<ref>) is contractive in (𝒫_ac(𝒳)^[0,T]×𝒫_ac(𝒴)^[0,T], 𝒯𝒱^[0,T]).
From <cit.>, that is TV(m,m') = 1/2∫_𝒳 |m(x)-m'(x)|dx, for any m,m' ∈𝒫_ac(𝒳), and (<ref>), we have that
TV(ν^(n)_t, ν^(n-1)_t) = 1/2∫_𝒳|ν^(n)_t(x)-ν^(n-1)_t(x)|dx
= 1/2∫_𝒳|∫_0^t α e^-α (t-s)Ψ(ν^(n-1)_s, μ^(n-1)_s)(x) ds - ∫_0^t α e^-α (t-s)Ψ(ν^(n-2)_s, μ^(n-2)_s)(x) ds|dx
≤α/2∫_𝒳∫_0^t |Ψ(ν^(n-1)_s, μ^(n-1)_s)(x) - Ψ(ν^(n-2)_s, μ^(n-2)_s)(x)| dsdx
≤α/2∫_𝒳∫_0^t L_Ψe^-U^π(x)(TV(ν^(n-1)_s,ν^(n-2)_s) + TV(μ^(n-1)_s,μ^(n-2)_s))dsdx
= α L_Ψ/2∫_0^t (TV(ν^(n-1)_s,ν^(n-2)_s) + TV(μ^(n-1)_s,μ^(n-2)_s))ds,
where in the first inequality we used the fact that e^-α (t-s)≤ 1, for all s ∈ [0,t], and in the second inequality we used (<ref>).
A similar argument using (<ref>) leads to
TV(μ^(n)_t, μ^(n-1)_t) ≤α L_Φ/2∫_0^t (TV(ν^(n-1)_s,ν^(n-2)_s) + TV(μ^(n-1)_s,μ^(n-2)_s))ds.
Adding inequalities (<ref>) and (<ref>) gives
TV(ν^(n)_t, ν^(n-1)_t) + TV(μ^(n)_t, μ^(n-1)_t)
≤α L_Ψ+α L_Φ/2∫_0^t (TV(ν^(n-1)_s, ν^(n-2)_s) + TV(μ^(n-1)_s, μ^(n-2)_s)) ds
≤(α L_Ψ+α L_Φ/2)^n-1∫_0^t ∫_0^t_1…∫_0^t_n-2(TV(ν^(1)_t_n-1, ν^(0)_t_n-1) + TV(μ^(1)_t_n-1, μ^(0)_t_n-1))
×dt_n-1…dt_2 dt_1
≤(α L_Ψ+α L_Φ/2)^n-1t^n-2/(n-2)!∫_0^t (TV(ν^(1)_t_n-1, ν^(0)_t_n-1) + TV(μ^(1)_t_n-1, μ^(0)_t_n-1)) dt_n-1,
where in the third inequality we used the bound ∫_0^t_n-2dt_n-1≤∫_0^tdt_n-1.
Hence, we obtain
∫_0^T TV(ν^(n)_t, ν^(n-1)_t) + TV(μ^(n)_t, μ^(n-1)_t)dt
≤(α L_Ψ+α L_Φ/2)^n-1T^n-1/(n-2)!∫_0^T (TV(ν^(1)_t_n-1, ν^(0)_t_n-1) + TV(μ^(1)_t_n-1, μ^(0)_t_n-1))dt_n-1.
For sufficiently large n, the constant on the right hand side becomes less than 1 and the proof is complete.
Having proved Lemma <ref>, we return to the proof of Proposition <ref>.
Step 2: Existence of gradient flow on [0,∞).
From Lemma <ref>, for any T > 0, there exists unique flow (ν_t, μ_t)_t ∈ [0,T] satisfying (<ref>). It remains to prove that the existence of this flow could be extended to [0,∞). Let (ν_t, μ_t)_t ∈ [0,T], (ν'_t, μ'_t)_t ∈ [0,T]∈𝒫_ac(𝒳) ×𝒫_ac(𝒴). Then, using the calculations from Lemma <ref>, we have that
TV(ν_t, ν'_t) + TV(μ_t, μ'_t) ≤α L_Ψ+α L_Φ/2∫_0^t (TV(ν_s, ν'_s) + TV(μ_s, μ'_s)) ds,
which shows that (ν_t, μ_t)_t ∈ [0,T] do not blow up in any finite time, and therefore we can extend (ν_t, μ_t)_t ∈ [0,T] globally to (ν_t, μ_t)_t ∈ [0,∞).
By definition (<ref>), Ψ(ν, μ) admits a density of the form
Ψ(ν_t, μ_t)(x) = 1/Z(ν_t, μ_t)exp(-2/σ^2δ F/δν(ν_t, μ_t, x) - U^π(x)),
Z(ν_t, μ_t) = ∫exp(-2/σ^2δ F/δν(ν_t, μ_t, x) - U^π(x)) dx.
Since the maps t ↦ν_t, t ↦μ_t and (ν, μ) ↦δ F/δν(ν, μ, x) are TV-continuous, it follows that t ↦δ F/δν(ν_t, μ_t, x) is also continuous. Moreover, δ F/δν(ν_t, μ_t, x) is bounded for all x ∈𝒳 due to Assumption <ref>. Therefore, both terms 1/Z(ν_t, μ_t) and exp(-2/σ^2δ F/δν(ν_t, μ_t, x) - U^π(x)) are continuous in t and bounded for x ∈𝒳. Hence, we have that Ψ(ν_t, μ_t)(x) is continuous in t and bounded for all x ∈𝒳. The same argument gives that Φ(ν_t, μ_t)(y) is continuous in t and bounded for all y ∈𝒴. But then this implies that the integrands in (<ref>) and (<ref>) are continuous in s and bounded for all (x,y) ∈𝒳×𝒴. Hence, t ↦ν_t(x) and t ↦μ_t(y) are in C^1([0, ∞)) for all (x,y) ∈𝒳×𝒴.
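The constructive Picard scheme above also suggests a direct way to simulate the MF-BR flow numerically. The following minimal Python sketch (the finite state spaces, the bilinear F given by a payoff matrix A and the uniform reference measures are illustrative assumptions, not objects from the text) applies a forward-Euler step to d/dt ν_t = α(Ψ(ν_t, μ_t) - ν_t) and d/dt μ_t = α(Φ(ν_t, μ_t) - μ_t):

# Minimal sketch: forward-Euler discretisation of the MF-BR flow on finite grids.
# Assumptions (not from the text): bilinear F(nu, mu) = nu^T A mu, uniform reference measures.
import numpy as np

rng = np.random.default_rng(1)
nx, ny, sigma, alpha, dt, n_steps = 40, 40, 1.0, 1.0, 1e-2, 5000
A = rng.standard_normal((nx, ny))
pi = np.full(nx, 1.0 / nx)
rho = np.full(ny, 1.0 / ny)

def Psi(mu):
    # entropic best response for the nu-player; here dF/dnu(., x) = (A mu)(x)
    w = pi * np.exp(-2.0 / sigma**2 * (A @ mu))
    return w / w.sum()

def Phi(nu):
    # entropic best response for the mu-player; here dF/dmu(., y) = (A^T nu)(y)
    w = rho * np.exp(+2.0 / sigma**2 * (A.T @ nu))
    return w / w.sum()

nu, mu = pi.copy(), rho.copy()
for _ in range(n_steps):
    # simultaneous update; the right-hand sides use the current (nu, mu)
    nu, mu = nu + dt * alpha * (Psi(mu) - nu), mu + dt * alpha * (Phi(nu) - mu)

print("mass is preserved:", nu.sum(), mu.sum())

Since dt α < 1 and Ψ, Φ return probability vectors, positivity and total mass are preserved exactly by the Euler step.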
§.§ Existence and uniqueness of the FR flow
In this subsection, we present the proof of our main result concerning the existence and uniqueness of the Fisher-Rao (FR) flow, i.e., Theorem <ref>. The proof of Theorem <ref> follows the same principle as the proof of Proposition <ref>. We construct a Picard iteration which is proved to be well-defined in Lemma <ref>. Lemma <ref> shows that the Picard iteration is contractive in an appropriate metric. Then in order to conclude the proof of Theorem <ref> we show the ratio condition (<ref>).
The proof of Theorem <ref> is an adaptation of the proof of <cit.> to the min-max setting (<ref>).
Step 1: Existence of the gradient flow and bound (<ref>) on [0,T].
In order to prove the existence of a solution (ν_t, μ_t)_t ≥ 0 to
∂_t ν_t(x) = - ( δ F/δν (ν_t,μ_t,x) + σ^2/2log( ν_t(x)/π(x)) - σ^2/2D_KL(ν_t|π) ) ν_t(x),
∂_t μ_t(y) = ( δ F/δμ (ν_t,μ_t,y) - σ^2/2log( μ_t(y)/ρ(y)) + σ^2/2D_KL(μ_t|ρ) ) μ_t(y),
we first notice that (<ref>) is equivalent to
∂_t logν_t(x) = - ( δ F/δν (ν_t,μ_t,x) + σ^2/2log( ν_t(x)/π(x)) - σ^2/2D_KL(ν_t|π) ),
∂_t logμ_t(y) = ( δ F/δμ (ν_t,μ_t,y) - σ^2/2log( μ_t(y)/ρ(y)) + σ^2/2D_KL(μ_t|ρ) ).
By Duhamel's formula, (<ref>) is equivalent to
logν_t(x) = e^-σ^2/2tlogν_0(x) - ∫_0^t σ^2/2 e^-σ^2/2(t-s)( 2/σ^2δ F/δν(ν_s,μ_s,x) - logπ(x) - D_KL(ν_s|π) ) ds,
logμ_t(y) = e^-σ^2/2tlogμ_0(y) + ∫_0^t σ^2/2 e^-σ^2/2(t-s)( 2/σ^2δ F/δμ(ν_s,μ_s,y) + logρ(y) + D_KL(μ_s|ρ) ) ds.
Based on these formulas, we will define a Picard iteration scheme. To this end, let us first fix T > 0 and choose a pair of flows of probability measures (ν_t^(0), μ_t^(0))_t ∈ [0,T] such that
∫_0^T D_KL(ν_s^(0)|π) ds < ∞, ∫_0^T D_KL(μ_s^(0)|ρ) ds < ∞.
For each n ≥ 1, we fix ν_0^(n) = ν_0^(0) = ν_0 and μ_0^(n) = μ_0^(0) = μ_0 (with ν_0 and μ_0 satisfying condition (<ref>) from Assumption <ref>) and define (ν_t^(n), μ_t^(n))_t ∈ [0,T] by
logν_t^(n)(x) = e^-σ^2/2tlogν_0(x)
- ∫_0^t σ^2/2 e^-σ^2/2(t-s)( 2/σ^2δ F/δν(ν_s^(n-1), μ_s^(n-1), x) - logπ(x) - D_KL(ν_s^(n-1)|π) ) ds,
logμ_t^(n)(y) = e^-σ^2/2tlogμ_0(y)
+ ∫_0^t σ^2/2 e^-σ^2/2(t-s)( 2/σ^2δ F/δμ(ν_s^(n-1), μ_s^(n-1), y) + logρ(y) + D_KL(μ_s^(n-1)|ρ) ) ds.
We have the following result.
The sequence of flows ( (ν_t^(n), μ_t^(n))_t ∈ [0,T])_n=0^∞ given by (<ref>) and (<ref>) is well-defined and such that for all n ≥ 1 and all t ∈ [0,T] we have
D_KL(ν_t^(n)|π) ≤ 2 log R_ν + 4/σ^2C_ν, D_KL(μ_t^(n)|ρ) ≤ 2 log R_μ + 4/σ^2C_μ.
The proof follows from the same induction argument used to prove <cit.>.
For fixed T > 0, we consider the sequence of flows ( (ν_t^(n), μ_t^(n))_t ∈ [0,T])_n=0^∞ in
(𝒫(𝒳)^[0,T]×𝒫(𝒴)^[0,T], 𝒯𝒱^[0,T]), where, for any (ν_t, μ_t)_t ∈ [0,T]∈𝒫(𝒳)^[0,T]×𝒫(𝒴)^[0,T], the distance 𝒯𝒱^[0,T] is defined by
𝒯𝒱^[0,T]( (ν_t, μ_t)_t ∈ [0,T], (ν_t',μ_t')_t ∈ [0,T]) := ∫_0^T TV(ν_t, ν_t') dt + ∫_0^T TV(μ_t, μ_t') dt .
Since (𝒫(𝒳), TV) is complete, we can apply the argument from <cit.> with p=1 to conclude that (𝒫(𝒳)^[0,T], ∫_0^T TV(ν_t, ν_t') dt) and (𝒫(𝒴)^[0,T], ∫_0^T TV(μ_t, μ_t') dt) are complete. Therefore, one can deduce that (𝒫(𝒳)^[0,T]×𝒫(𝒴)^[0,T], 𝒯𝒱^[0,T]) is also complete. We consider the Picard iteration mapping ϕ((ν_t^(n-1), μ_t^(n-1))_t ∈ [0,T]) := (ν_t^(n), μ_t^(n))_t ∈ [0,T] defined via (<ref>) and (<ref>) and show that ϕ is contractive in (𝒫(𝒳)^[0,T]×𝒫(𝒴)^[0,T], 𝒯𝒱^[0,T]). Then the Banach fixed point theorem will give us the existence and uniqueness of the solution to (<ref>).
The mapping ϕ((ν_t^(n-1), μ_t^(n-1))_t ∈ [0,T]) := (ν_t^(n), μ_t^(n))_t ∈ [0,T] defined via (<ref>) and (<ref>) is contractive in (𝒫(𝒳)^[0,T]×𝒫(𝒴)^[0,T], 𝒯𝒱^[0,T]).
From (<ref>) we have
logν_t^(n)(x) - logν_t^(n-1)(x) = - ∫_0^t σ^2/2 e^-σ^2/2(t-s)×
×[ 2/σ^2( δ F/δν(ν_s^(n-1),μ_s^(n-1),x) - δ F/δν(ν_s^(n-2),μ_s^(n-2),x) ) - D_KL(ν_s^(n-1)|π) + D_KL(ν_s^(n-2)|π) ] ds .
Multiplying both sides by ν_t^(n)(x) and integrating with respect to x, we obtain
D_KL(ν_t^(n)|ν_t^(n-1)) = - ∫_0^t σ^2/2 e^-σ^2/2(t-s)×
×[ 2/σ^2∫_𝒳( δ F/δν(ν_s^(n-1),μ_s^(n-1),x) - δ F/δν(ν_s^(n-2),μ_s^(n-2),x) )ν_t^(n)(dx)
- D_KL(ν_s^(n-1)|π) + D_KL(ν_s^(n-2)|π) ] ds .
Moreover, note that
∫_𝒳 ( δ F/δν(ν_s^(n-1),μ_s^(n-1),x) - δ F/δν(ν_s^(n-2),μ_s^(n-2),x) )ν_t^(n)(dx)
=∫_𝒳( δ F/δν(ν_s^(n-1),μ_s^(n-1),x) - δ F/δν(ν_s^(n-1),μ_s^(n-2),x)
+ δ F/δν(ν_s^(n-1),μ_s^(n-2),x) -δ F/δν(ν_s^(n-2),μ_s^(n-2),x) )ν_t^(n)(dx)
= ∫_𝒳∫_𝒴∫_0^1 δ^2 F/δμδν(ν_s^(n-1), μ_s^(n-2) + λ( μ_s^(n-1) - μ_s^(n-2)),x,w )
×dλ(μ_s^(n-1) - μ_s^(n-2))(dw) ν_t^(n)(dx)
+ ∫_𝒳∫_𝒳∫_0^1 δ^2 F/δν^2(ν_s^(n-2) + λ( ν_s^(n-1) - ν_s^(n-2)),μ_s^(n-2),x,z )
×dλ(ν_s^(n-1) - ν_s^(n-2))(dz) ν_t^(n)(dx).
Similarly, again from (<ref>) we have
logν_t^(n-1)(x) - logν_t^(n)(x) = - ∫_0^t σ^2/2 e^-σ^2/2(t-s)×
×[ 2/σ^2( δ F/δν(ν_s^(n-2),μ_s^(n-2),x) - δ F/δν(ν_s^(n-1),μ_s^(n-1),x) )
- D_KL(ν_s^(n-2)|π) + D_KL(ν_s^(n-1)|π) ] ds .
Multiplying both sides by ν_t^(n-1)(x) and integrating with respect to x, we obtain
D_KL(ν_t^(n-1)|ν_t^(n)) = - ∫_0^t σ^2/2 e^-σ^2/2(t-s)×
×[ 2/σ^2∫_𝒳( δ F/δν(ν_s^(n-2),μ_s^(n-2),x) - δ F/δν(ν_s^(n-1),μ_s^(n-1),x) )ν_t^(n-1)(dx)
- D_KL(ν_s^(n-2)|π) + D_KL(ν_s^(n-1)|π) ] ds.
Similarly as before, we note that
∫_𝒳 ( δ F/δν(ν_s^(n-2),μ_s^(n-2),x) - δ F/δν(ν_s^(n-1),μ_s^(n-1),x) )ν_t^(n-1)(dx)
=-∫_𝒳( δ F/δν(ν_s^(n-1),μ_s^(n-1),x) - δ F/δν(ν_s^(n-1),μ_s^(n-2),x)
+ δ F/δν(ν_s^(n-1),μ_s^(n-2),x) -δ F/δν(ν_s^(n-2),μ_s^(n-2),x) )ν_t^(n-1)(dx)
= -∫_𝒳∫_𝒴∫_0^1 δ^2 F/δμδν(ν_s^(n-1), μ_s^(n-2) + λ( μ_s^(n-1) - μ_s^(n-2)),x,w )
×dλ(μ_s^(n-1) - μ_s^(n-2))(dw) ν_t^(n-1)(dx)
- ∫_𝒳∫_𝒳∫_0^1 δ^2 F/δν^2(ν_s^(n-2) + λ( ν_s^(n-1) - ν_s^(n-2)),μ_s^(n-2),x,z )
×dλ(ν_s^(n-1) - ν_s^(n-2))(dz) ν_t^(n-1)(dx).
Combining (<ref>) and (<ref>), we obtain
D_KL(ν_t^(n)|ν_t^(n-1)) + D_KL(ν_t^(n-1)|ν_t^(n)) = - ∫_0^t e^-σ^2/2(t-s)×[
∫_𝒳∫_𝒴∫_0^1 δ^2 F/δμδν(ν_s^(n-1), μ_s^(n-2) + λ( μ_s^(n-1) - μ_s^(n-2)),x,w ) dλ(μ_s^(n-1) - μ_s^(n-2))(dw)×
×(ν_t^(n) - ν_t^(n-1))(dx)
+ ∫_𝒳∫_𝒳∫_0^1 δ^2 F/δν^2(ν_s^(n-2) + λ( ν_s^(n-1) - ν_s^(n-2)),μ_s^(n-2),x,z ) dλ(ν_s^(n-1) - ν_s^(n-2))(dz) ×
×(ν_t^(n) - ν_t^(n-1))(dx)] ds.
Hence, due to (<ref>) from Assumption <ref>, we get
D_KL(ν_t^(n)|ν_t^(n-1)) + D_KL(ν_t^(n-1)|ν_t^(n))
≤TV(ν_t^(n),ν_t^(n-1)) ∫_0^t e^-σ^2/2(t-s)(C_μ,νTV(μ_s^(n-1),μ_s^(n-2)) + C_ν, νTV(ν_s^(n-1),ν_s^(n-2)))ds
≤max{C_μ,ν, C_ν, ν}TV(ν_t^(n),ν_t^(n-1))
×∫_0^t e^-σ^2/2(t-s)(TV(μ_s^(n-1),μ_s^(n-2)) + TV(ν_s^(n-1),ν_s^(n-2)))ds.
By the Pinsker-Csiszár inequality, TV^2(ν_t^(n),ν_t^(n-1)) ≤1/2D_KL(ν_t^(n)|ν_t^(n-1)) and hence
4 TV^2(ν_t^(n),ν_t^(n-1)) ≤max{C_μ,ν, C_ν, ν}TV(ν_t^(n),ν_t^(n-1)) ∫_0^t e^-σ^2/2(t-s)(TV(μ_s^(n-1),μ_s^(n-2))
+ TV(ν_s^(n-1),ν_s^(n-2)))ds,
which gives
TV (ν_t^(n),ν_t^(n-1))
≤1/4max{C_μ,ν, C_ν, ν}∫_0^t e^-σ^2/2(t-s)(TV(μ_s^(n-1),μ_s^(n-2)) + TV(ν_s^(n-1),ν_s^(n-2)))ds.
An almost identical argument leads to
TV (μ_t^(n),μ_t^(n-1))
≤1/4max{C_ν,μ, C_μ, μ}∫_0^t e^-σ^2/2(t-s)(TV(μ_s^(n-1),μ_s^(n-2)) + TV(ν_s^(n-1),ν_s^(n-2)))ds.
If we set C_max := max{C_μ,ν, C_ν, ν} + max{C_ν,μ, C_μ, μ} and add the previous two inequalities, we obtain
TV (ν_t^(n),ν_t^(n-1)) + TV(μ_t^(n),μ_t^(n-1))
≤C_max/4∫_0^t e^-σ^2/2(t-s)(TV(μ_s^(n-1),μ_s^(n-2)) + TV(ν_s^(n-1),ν_s^(n-2)))ds.
≤( C_max/4)^n-1 e^-σ^2/2t∫_0^t ∫_0^t_1…∫_0^t_n-2 e^σ^2/2t_n-1(TV(ν_t_n-1^(1),ν_t_n-1^(0)) + TV(μ_t_n-1^(1),μ_t_n-1^(0)))
×dt_n-1…dt_2 dt_1
≤( C_max/4)^n-1 e^-σ^2/2tt^n-2/(n-2)!∫_0^t e^σ^2/2t_n-1(TV(ν_t_n-1^(1),ν_t_n-1^(0)) + TV(μ_t_n-1^(1),μ_t_n-1^(0))) dt_n-1
≤( C_max/4)^n-1t^n-2/(n-2)!∫_0^t (TV(ν_t_n-1^(1),ν_t_n-1^(0)) + TV(μ_t_n-1^(1),μ_t_n-1^(0))) dt_n-1,
where in the third inequality we bounded ∫_0^t_n-2 dt_n-1≤∫_0^t dt_n-1 and in the fourth inequality we bounded e^σ^2/2t_n-1≤ e^σ^2/2t. Hence, we obtain
∫_0^T TV(ν_t^(n),ν_t^(n-1))dt + ∫_0^T TV(μ_t^(n),μ_t^(n-1)) dt
≤( C_max/4)^n-1T^n-1/(n-1)!(∫_0^T TV(ν_t_n-1^(1),ν_t_n-1^(0)) dt_n-1
+ ∫_0^T TV(μ_t_n-1^(1),μ_t_n-1^(0)) dt_n-1).
For sufficiently large n, the constant on the right hand side becomes less than 1 and the proof is complete.
By Lemma <ref>, for any T > 0 we obtain
the existence of a pair of flows (ν_t, μ_t)_t ∈ [0,T] satisfying (<ref>). Moreover, for Lebesgue-almost all t ∈ [0,T] we have
TV(ν_t^(n),ν_t) → 0, TV(μ_t^(n),μ_t) → 0 as n →∞ ,
which implies
ν_t^(n)→ν_t, μ_t^(n)→μ_t weakly as n →∞ .
Hence, using the lower semi-continuity of the entropy,
we obtain
D_KL(ν_t|π) ≤lim inf_n →∞D_KL(ν_t^(n)|π) ≤ 2 log R_ν + 4/σ^2C_ν ,
D_KL(μ_t|ρ) ≤lim inf_n →∞D_KL(μ_t^(n)|ρ) ≤ 2 log R_μ + 4/σ^2C_μ ,
where both second inequalities follow from Lemma <ref>. In order to ensure that the solution (ν_t, μ_t)_t ∈ [0,T] can be extended to all t ≥ 0, we first need to prove the bound on the ratios ν_t/π and μ_t/ρ in (<ref>).
Step 2: Ratio condition (<ref>).
Using (<ref>) and (<ref>), we see that for any t ∈ [0,T] we have
logν_t^(n)(x)/π(x) = e^-σ^2/2tlogν_0(x)/π(x)
- ∫_0^t σ^2/2 e^-σ^2/2(t-s)( 2/σ^2δ F/δν(ν_s^(n-1), μ_s^(n-1), x) - D_KL(ν_s^(n-1)|π) ) ds,
logμ_t^(n)(y)/ρ(y) = e^-σ^2/2tlogμ_0(y)/ρ(y)
+ ∫_0^t σ^2/2 e^-σ^2/2(t-s)( 2/σ^2δ F/δμ(ν_s^(n-1), μ_s^(n-1), y) + D_KL(μ_s^(n-1)|ρ) ) ds.
Using (<ref>), (<ref>), (<ref>) and (<ref>) we obtain
logν_t(x)/π(x)≤log R_ν + C_ν + σ^2/2( 2 log R_ν + 4/σ^2C_ν),
logμ_t(y)/ρ(y)≤log R_μ + C_μ + σ^2/2( 2 log R_μ + 4/σ^2C_μ).
Hence we can choose R_1, ν := 1 + exp(log R_ν + C_ν + σ^2/2( 2 log R_ν + 4/σ^2C_ν)) and R_1, μ := 1 + exp(log R_μ + C_μ + σ^2/2( 2 log R_μ + 4/σ^2C_μ)). Note R_1, ν, R_1, μ > 1 are conveniently chosen so that log R_1, ν, log R_1, μ > 0 in our subsequent calculations.
Obtaining a lower bound on ν_t(x)/π(x) and μ_t(y)/ρ(y) follows similarly, by using (<ref>) instead of (<ref>).
Step 3: Existence of the gradient flow on [0,∞).
In order to complete our proof, note that the unique solution (ν_t, μ_t)_t ∈ [0,T] to (<ref>) can also be expressed as
ν_t(x) = ν_0(x) exp( - ∫_0^t ( δ F/δν (ν_s, μ_s, x) + σ^2/2log( ν_s(x)/π(x)) - σ^2/2D_KL(ν_s|π) ) ds ),
μ_t(y) = μ_0(y) exp( ∫_0^t ( δ F/δμ (ν_s, μ_s, y) - σ^2/2log( μ_s(y)/ρ(y)) + σ^2/2D_KL(μ_s|ρ) ) ds ) .
From (<ref>), (<ref>), (<ref>) and (<ref>), we obtain for any t ∈ [0,T]
|δ F/δν (ν_t, μ_t, x) + σ^2/2log( ν_t(x)/π(x)) - σ^2/2D_KL(ν_t|π)|
≤ 3C_ν + σ^2/2( max{|log r_1, ν|, log R_1, ν} + 2 log R_ν) =: C_V, ν,
|δ F/δμ (ν_t, μ_t, y) - σ^2/2log( μ_t(y)/ρ(y)) + σ^2/2D_KL(μ_t|ρ)|
≤ 3C_μ + σ^2/2( max{|log r_1, μ|, log R_1, μ} + 2 log R_μ) =: C_V, μ.
This gives ‖ν_t‖_TV≤‖ν_0‖_TV e^C_V, ν t and ‖μ_t‖_TV≤‖μ_0‖_TV e^C_V, μ t, and shows that ν_t and μ_t do not explode in any finite time, hence we obtain a global solution (ν_t, μ_t)_t ∈ [0,∞). In particular, the bounds in (<ref>), (<ref>), (<ref>) and (<ref>) hold for all t ≥ 0.
§.§ Additional results
We finally present two results concerning the existence and uniqueness of MNEs for games of the form (<ref>).
Assume that F satisfies the conditions in Definition <ref> and that the following hold:
* The sets _ν∈𝒫(𝒳) V^σ(ν,μ) and _μ∈𝒫(𝒴){-V^σ(ν,μ)} are non-empty and convex,
* The maps ν↦ F(ν, μ) and μ↦ F(ν, μ) are TV-continuous, and the maps (ν, μ) ↦δ F/δν(ν,μ,x) and (ν, μ) ↦δ F/δμ(ν,μ,y) are jointly TV-continuous for all (x,y) ∈𝒳×𝒴,
* There exist a ≥ a' > 0, b ≥ b' > 0 and C_a, C_a', C_b, C_b' ∈ℝ such that for all (ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴),
C_a'|x|^a' - C_a ≤δ F/δν(ν, μ, x)≤ C_a'|x|^a + C_a,
C_b'|y|^b' - C_b ≤δ F/δμ(ν, μ, y) ≤ C_b'|y|^b + C_b.
Then there exists at least one MNE (ν^*, μ^*) of the game (<ref>).
The proof closely follows the one of <cit.>. For given (ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴), define ℛ^ν(ν, μ) := _ν' ∈𝒫(𝒳) V^σ(ν',μ), ℛ^μ(ν, μ) := _μ' ∈𝒫(𝒴){-V^σ(ν,μ')} and ℛ(ν, μ) := {(ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴): ν∈ℛ^ν(ν, μ), μ∈ℛ^μ(ν, μ) }.
Step 1. We show that ℛ(ν, μ) is TV-compact. Due to Proposition <ref>, any ν∈ℛ^ν(ν, μ) satisfies the first-order condition
ν(x) = 1/Z(ν, μ)e^-2/σ^2δ F/δν(ν, μ, x) - U^π(x),
where Z(ν, μ) > 0 is a normalization constant so that ν∈𝒫(𝒳). Integrating the equation above and using condition (<ref>), we obtain that Z(ν, μ) is uniformly bounded. Let p' > 0. Then, from condition (<ref>), it follows that
∫_𝒳 |x|^p'ν(dx) = 1/Z(ν,μ)∫_𝒳 |x|^p'e^-2/σ^2δ F/δν(ν, μ, x) - U^π(x)dx
≤1/Z(ν,μ)∫_𝒳 |x|^p'e^-2/σ^2(C_a'|x|^a' - C_a) - U^π(x)dx < ∞.
Therefore, we obtain that
C^ν := sup_ν∈ℛ^ν(ν, μ)∫_𝒳 |x|^p'ν(dx) < ∞.
A similar argument but using (<ref>) gives that
C^μ := sup_μ∈ℛ^μ(ν, μ)∫_𝒴 |y|^p'μ(dy) < ∞.
Let 𝒮^ν := {ν∈𝒫(𝒳): ∫_𝒳 |x|^p'ν(dx) ≤C^ν} and 𝒮^μ := {μ∈𝒫(𝒴): ∫_𝒴 |y|^p'μ(dy) ≤C^μ} with 𝒮 := {(ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴): ν∈𝒮^ν, μ∈𝒮^μ}. Then we have that ℛ(ν, μ) ⊂𝒮 and 𝒮 is TV-compact.
Take a sequence (ν_n, μ_n)_n ≥ 1⊂ℛ(ν, μ) such that ν_n →ν and μ_n →μ in TV for some (ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴). Then, as previously, for each n ≥ 1, we have that ν_n ∈ℛ^ν(ν_n, μ_n) and μ_n ∈ℛ^μ(ν_n, μ_n), and thus they satisfy
ν_n(x) = 1/Z(ν_n, μ_n)e^-2/σ^2δ F/δν(ν_n, μ_n, x) - U^π(x),
μ_n(y) = 1/Z'(ν_n, μ_n)e^2/σ^2δ F/δμ(ν_n, μ_n, y) - U^ρ(y),
where Z(ν_n, μ_n) = ∫_𝒳 e^-2/σ^2δ F/δν(ν_n, μ_n, x) - U^π(x)dx and Z'(ν_n, μ_n) = ∫_𝒴 e^2/σ^2δ F/δμ(ν_n, μ_n, y) - U^ρ(y)dy. From condition (<ref>), (ν, μ) ↦δ F/δν(ν,μ,x) and (ν, μ) ↦δ F/δμ(ν,μ,y) are jointly TV-continuous for all (x,y) ∈𝒳×𝒴. Then, together with condition (<ref>), we can apply the dominated convergence theorem and obtain that, for all (x,y) ∈𝒳×𝒴,
ν_n(x) →ν(x) = 1/Z(ν, μ)e^-2/σ^2δ F/δν(ν, μ, x) - U^π(x)
μ_n(y) →μ(y) = 1/Z'(ν, μ)e^2/σ^2δ F/δμ(ν, μ, y) - U^ρ(y)
in TV. Therefore, ν∈ℛ^ν(ν, μ) and μ∈ℛ^μ(ν, μ) and hence (ν, μ) ∈ℛ(ν, μ), i.e. ℛ(ν, μ) is TV-closed. Since ℛ(ν, μ) ⊂𝒮 and 𝒮 is TV-compact, it follows that ℛ(ν, μ) is TV-compact.
Step 2. We show that the graph of the map 𝒮∋ (ν,μ) ↦ℛ(ν,μ) ⊂𝒮 is TV-closed, i.e. given (ν_∞, μ_∞), (ν'_∞, μ'_∞) ∈𝒮, for any (ν_n, μ_n) → (ν_∞, μ_∞) and (ν'_n, μ'_n) → (ν'_∞, μ'_∞) in TV and (ν_n', μ_n') ∈ℛ(ν_n, μ_n), it follows that (ν'_∞, μ'_∞) ∈ℛ(ν_∞, μ_∞).
Since 𝒮 is TV-compact, we can extract two subsequences still denoted by (ν_n, μ_n) and (ν_n', μ_n') such that (ν_n, μ_n) → (ν_∞, μ_∞) and (ν_n', μ_n') → (ν'_∞, μ'_∞) in TV. Then, by the lower semi-continuity of (ν, μ) ↦ V^σ(ν, μ) + σ^2/2D_KL(μ|ρ), we have that
V^σ(ν'_∞, μ_∞) + σ^2/2D_KL(μ_∞|ρ) ≤lim inf_n →∞(V^σ(ν_n', μ_n) + σ^2/2D_KL(μ_n|ρ))
≤lim inf_n →∞(V^σ(ν, μ_n) + σ^2/2D_KL(μ_n|ρ))
= lim inf_n →∞(F(ν, μ_n) + σ^2/2D_KL(ν|π))
= F(ν, μ_∞) + σ^2/2D_KL(ν|π)
= V^σ(ν,μ_∞) + σ^2/2D_KL(μ_∞|ρ),
for any ν∈𝒮^ν. The second inequality follows from the fact that ν_n' ∈ℛ^ν(ν_n, μ_n) and the second equality follows from condition (<ref>). Therefore, ν'_∞∈ℛ^ν(ν_∞, μ_∞).
Similarly, by the upper semi-continuity of (ν, μ) ↦ V^σ(ν, μ) - σ^2/2D_KL(ν|π), we have that
V^σ(ν_∞, μ'_∞) - σ^2/2D_KL(ν_∞|π) ≥lim sup_n →∞(V^σ(ν_n, μ'_n) - σ^2/2D_KL(ν_n|π))
≥lim sup_n →∞(V^σ(ν_n, μ) - σ^2/2D_KL(ν_n|π))
= lim sup_n →∞(F(ν_n, μ) - σ^2/2D_KL(μ|ρ))
= F(ν_∞, μ) - σ^2/2D_KL(μ|ρ)
= V^σ(ν_∞, μ) - σ^2/2D_KL(ν_∞|π),
for any μ∈𝒮^μ. The second inequality follows from the fact that μ_n' ∈ℛ^μ(ν_n, μ_n) and the second equality follows from condition (<ref>). Therefore, μ'_∞∈ℛ^μ(ν_∞, μ_∞). Hence, we obtain that (ν'_∞, μ'_∞) ∈ℛ(ν_∞, μ_∞) as required.
Step 3.
From condition (<ref>) in the assumption and Step 1, we have that the set ℛ(ν, μ) is non-empty, convex and TV-compact. From Step 2, we have that the graph of the map 𝒮∋ (ν,μ) ↦ℛ(ν,μ) ⊂𝒮 is TV-closed. Therefore, from the Kakutani fixed point theorem, it follows that the map 𝒮∋ (ν,μ) ↦ℛ(ν,μ) ⊂𝒮 has a fixed point, say, (ν^*, μ^*). This implies that ν^* ∈_ν' ∈𝒫(𝒳) V^σ(ν',μ^*) and μ^* ∈_μ' ∈𝒫(𝒴){-V^σ(ν^*,μ')} and hence V(ν^*, μ) ≤ V(ν^*, μ^*) ≤ V(ν, μ^*), for all (ν,μ) ∈𝒫(𝒳) ×𝒫(𝒳), i.e. (ν^*, μ^*) is a MNE of the game (<ref>).
For V^σ given by (<ref>), if Assumption <ref> holds and (ν^*, μ^*) ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴) is a saddle point of V^σ, that is V^σ(ν^*, μ) ≤ V^σ(ν^*, μ^*) ≤ V^σ(ν, μ^*), for all (ν, μ) ∈𝒫(𝒳) ×𝒫(𝒴), then it is unique.
Suppose to the contrary that (ν^*, μ^*), (ν̂^*, μ̂^*) ∈𝒫_ac(𝒳) ×𝒫_ac(𝒴) are two saddle points of V^σ. Then, from Proposition <ref>, we have that
δ F/δν(ν^*, μ^*, x) + σ^2/2log(ν^*(x)/π(x)) = constant,
δ F/δμ(ν^*, μ^*, y) - σ^2/2log(μ^*(y)/ρ(y)) = constant,
for all (x, y) ∈𝒳×𝒴 Lebesgue almost surely and the same two equations also hold for (ν̂^*, μ̂^*). Then, using Lemma <ref>, we have that
V^σ(ν̂^*, μ^*) - V^σ(ν^*, μ^*) ≥∫_𝒳(δ F/δν(ν^*,μ^*,x) + σ^2/2log(ν^*(x)/π(x))) (ν̂^*-ν^*)(dx)
+ σ^2/2D_KL(ν̂^*|ν^*) = σ^2/2D_KL(ν̂^*|ν^*),
V^σ(ν^*, μ̂^*) - V^σ(ν^*, μ^*) ≤∫_𝒴(δ F/δμ(ν^*, μ^*, y) - σ^2/2log(μ^*(y)/ρ(y))) (μ̂^*-μ^*)(dy)
- σ^2/2D_KL(μ̂^*|μ^*) = - σ^2/2D_KL(μ̂^*|μ^*),
where the equalities follow from the first order condition. Swapping ν^* and μ^* with ν̂^* and μ̂^* in the inequalities above, we get the analogous
V^σ(ν^*, μ̂^*) - V^σ(ν̂^*, μ̂^*) ≥∫_𝒳(δ F/δν(ν̂^*,μ̂^*,x) + σ^2/2log(ν̂^*(x)/π(x))) (ν^*-ν̂^*)(dx)
+ σ^2/2D_KL(ν^*|ν̂^*) = σ^2/2D_KL(ν^*|ν̂^*),
V^σ(ν̂^*, μ^*) - V^σ(ν̂^*, μ̂^*) ≤∫_𝒴(δ F/δμ(ν̂^*, μ̂^*, y) - σ^2/2log(μ̂^*(y)/ρ(y)))(μ^*-μ̂^*)(dy)
- σ^2/2D_KL(μ^*|μ̂^*) = - σ^2/2D_KL(μ^*|μ̂^*).
Multiplying the second and the fourth inequalities by -1 and adding all inequalities gives
0 ≥σ^2/2D_KL(ν̂^*|ν^*) + σ^2/2D_KL(μ̂^*|μ^*) + σ^2/2D_KL(ν^*|ν̂^*) + σ^2/2D_KL(μ^*|μ̂^*).
Since D_KL(m|m') ≥ 0 for all m, m' ∈𝒫(ℳ), where ℳ⊆ℝ^d, with equality if and only if m=m', it follows that
ν^* = ν̂^* and μ^* = μ̂^*,
and hence V^σ has a unique saddle point.
§ NOTATION AND DEFINITIONS
Let ℳ⊆ℝ^d and fix p ∈ [0, ∞). A functional F:𝒫_p(ℳ) →ℝ admits a first-order flat derivative, if there exists a functional δ F/δ m: 𝒫_p(ℳ) ×ℳ→ℝ, such that
* For all x∈ℳ, 𝒫_p(ℳ) ∋ m ↦δ F/δ m(m,x) is TV-continuous.
* For any ν∈𝒫_p(ℳ), there exists C>0 such that for all x∈ℳ we have
|δ F/δ m(ν,x)|≤ C(1+|x|^p) .
* For all m, m'∈𝒫_p (ℳ),
F(m')-F(m)=∫_0^1∫_ℳδ F/δ m(m + λ (m'-m),x)(m'- m)(dx) dλ.
The functional δ F/δ m is then called the flat derivative of F on 𝒫_p(ℳ). We note that δ F/δ m exists up to an additive constant, and thus we make the normalizing convention ∫_ℳδ F/δ m(m,x) m(dx) = 0.
Observe that F admits a second order flat derivative if there exists a functional δ^2 F/δ m^2: 𝒫_p(ℳ) ×ℳ×ℳ→ℝ satisfying Definition <ref>. If this holds, then for all x ∈ℳ and for all m, m'∈𝒫_p (ℳ), we have that
δ F/δ m(m',x)-δ F/δ m(m,x) = ∫_0^1∫_ℳδ^2 F/δ m^2(m + λ (m'-m),x, x')(m'- m)(dx') dλ.
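As a simple illustration (an example added here for concreteness, not used elsewhere in the text), consider the bilinear functional F(ν, μ) = ∫_𝒳∫_𝒴 f(x,y) ν(dx)μ(dy) with f bounded and measurable, viewed as a functional of ν for fixed μ. Then δ F/δν(ν, μ, x) = ∫_𝒴 f(x,y) μ(dy) - F(ν, μ), where the subtracted constant enforces the normalizing convention above; the defining identity holds because the constant integrates to zero against m' - m. The flat derivative with respect to μ is obtained symmetrically, and the second-order flat derivatives are bounded whenever f is bounded.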
Let (ℳ, 𝒜) be a measurable space and let P and Q be probability measures on (ℳ, 𝒜). Assume that μ is a σ-finite measure on (ℳ, 𝒜) such that P and Q are absolutely continuous with respect to μ and let p and q denote their probability density functions, respectively. The total variation distance between P and Q is defined as:
TV(P,Q) sup_A ∈𝒜|P(A) - Q(A)| = sup_A ∈𝒜|∫_A (p-q) dμ|.
For any m_0, m_1 ∈𝒫_ac(ℳ),
FR(m_0,m_1) := inf{∫_0^1 ∫_ℳ |r_s|^2 ρ_s(dx)ds} ,
where the infimum is taken over all curves [0,1] ∋ t ↦ (ρ_t,r_t) ∈𝒫_ac(ℳ) × L^2(ℳ;ρ_t) solving ∂_t ρ_t = r_t ρ_t in the distributional sense, such that t ↦ρ_t is weakly continuous with endpoints ρ_0 = m_0 and ρ_1 = m_1.
Institute for Theoretical Physics, Center for Extreme Matter and Emergent Phenomena, Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands
Institute for Theoretical Physics, Center for Extreme Matter and Emergent Phenomena, Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands
Colloidal gels are elasto-plastic materials composed of an out-of-equilibrium, self-assembled network of micron-sized (solid) particles suspended in a fluid. Recent work has shown that far-field hydrodynamic interactions do not change gel structure, only the rate at which the network forms and ages. However, during gel formation, the interplay between short-ranged attractions leading to gelation and equally short-ranged hydrodynamic lubrication interactions remains poorly understood. Here, we therefore study gelation using a range of hydrodynamic descriptions: from single-body (Brownian Dynamics), to pairwise (Rotne-Prager-Yamakawa), to (non-)lubrication-corrected many-body (Stokesian Dynamics). We confirm the current understanding informed by simulations accurate in the far-field. Yet, we find that accounting for lubrication can strongly impact structure at low colloid volume fraction. Counterintuitively, strongly dissipative lubrication interactions also accelerate the aging of a gel, irrespective of colloid volume fraction. Both elements can be explained by lubrication forces facilitating collective dynamics and therefore phase-separation. Our findings indicate that despite the computational cost, lubricated hydrodynamic modeling with many-body far-field interactions is needed to accurately capture the evolution of the gel structure.
Hydrodynamic Lubrication in Colloidal Gels
K. W. Torre and J. de Graaf
July 31, 2023
==========================================
§ INTRODUCTION
Colloidal particles dispersed in a fluid medium can form a gel, when there are short-ranged attractions between the colloids with an interaction strength that significantly exceeds the thermal energy k_B T <cit.>; here, k_B is the Boltzmann constant and T the temperature. These interactions can be induced by, e.g., the presence of polymers <cit.> (depletion) or van-der-Waals interactions <cit.> (solvation). The depth and short-ranged nature of the attractive potential well interfere with thermal rearrangement of clustered colloids. This arrests the system’s natural tendency to fully phase separate <cit.>. Diffusion-based aggregation together with this arrested cluster dynamics leads to the formation of an open, space-spanning network structure <cit.>. The resulting material possesses useful properties, e.g., it can support the gel’s buoyant weight against gravity for a finite time <cit.>, behaving solid-like during this period, yet flow under moderate applied stresses <cit.>, behaving liquid-like. This ability to yield easily and reform <cit.> has led to the widespread use of particle gels in industrial, medical, and academic settings, for example, care products, printing inks, foodstuffs, crop protection, and pharmaceutical suspension formulations <cit.>.
Gels are intrinsically out of equilibrium and therefore age after they form <cit.>. A gel's structure and its mechanical response are intimately linked <cit.>. This necessitates understanding of formation and aging in order to begin to predict and ultimately control the properties of colloidal gels. Both processes have been suggested to depend (strongly) on the nature of the fluid medium, in which the colloids are suspended <cit.>. That is, unlike equilibrium systems, for which the particle dynamics are known not to affect the average arrangement of colloids, hydrodynamic interactions (HIs) can affect gel structure by impacting the kinetics of formation. HIs result from momentum conservation in the fluid, which gives rise to long-ranged forces — decaying with the inverse of the center-to-center separation 1/r — between moving suspended particles <cit.>. The forces inducing HIs can be externally applied, stem from the interaction potential, or are thermal in nature. The latter two play a prominent role in gel formation (without gravity), thus it is relevant to quantify their effect on structure when accounting for the suspending fluid.
Over the past two decades, various research groups have performed numerical studies of colloidal gels that account for HIs <cit.>. Introducing HIs into particle simulations poses a significant challenge. In the friction-dominated regime appropriate to colloidal hydrodynamics, a Newtonian fluid satisfies the Stokes equations. Exactly solving the full set of Stokes equations, for which the particles act as boundary conditions, is practically infeasible for all but the simplest systems <cit.>. Additionally, HIs decay as 1/r, which makes the dynamics in a particle suspension an intrinsically many-body problem <cit.>. A number of approximating techniques have been proposed over the years to make progress <cit.>. These techniques weight the accuracy, by which HIs are approximated, against computational efficiency. Prominent examples of methods that have been used in studying colloidal gels include: Fluid Particle Dynamics (FPD), Lattice-Boltzmann (LB), and Stokesian Dynamics (SD). In view of the approximating nature of gel simulations with HIs, it is not surprising that many different (seemingly conflicting) results on the impact of HIs have been reported.
De Graaf et al. <cit.> recently used the lattice-Boltzmann (LB) method <cit.> to account for the many-body, far-field interactions between gelling colloids. These authors revealed two distinct regimes in their simulations with and without HIs, and a crossover at a colloid volume fraction of ϕ≈ 0.16. For ϕ < 0.16, HIs were found to accelerate gelation, while for greater ϕ, the effect was seen to be reversed. The apparent substantial differences in the gel structure, when comparing gels with and without HIs at equivalent time, particularly at low ϕ, were found to become negligible when comparing the systems at equivalent “structural times” <cit.>. The findings of De Graaf et al. appear to reconcile the contradictory observations reported in previous simulation studies with HIs, requiring only limited reference to the various approximations made by other authors.
However, similar to the vast majority of simulation studies, the authors of Ref. <cit.> ignored hydrodynamic lubrication forces (HLFs). These forces arise when colloidal particles are separated by less than ≈ 10% of their particle diameter and are highly dissipative <cit.>. Motion parallel to the line connecting the centers of two spherical colloids is associated with a hydrodynamic friction that diverges as 1/h, where h measures the surface-to-surface separation. This divergence results from the force required to squeeze the fluid out of the small gap between the colloids. Such an interaction has the potential to interfere with gelation, as the attractions induced by depletion interactions also typically fall in this separation regime. Analogous divergences exist for the other modes of relative displacement, e.g., there is a logarithmic divergence for motion orthogonal to this connecting line.
In the context of colloidal gels, the work of Bybee and Higdon <cit.> is often referenced to justify neglecting HLFs. These authors approximated HIs by considering exclusively HLFs between nearly touching particles. Their study revealed that the percolation line and microstructure of the gel did not exhibit noticeable changes, when compared to the results obtained using non-lubricated Brownian Dynamics (BD). Thus, these authors argued that near-field HIs contribute only minimally on the time scale of gelation. However, the recent work by Townsend et al. <cit.> on HIs in colloidal suspensions, demonstrated that neglecting far-field HIs can lead to inaccuracies, even when colloidal particles are in close proximity to each other.
An additional argument used to neglect HLFs is that in many experimental systems it is not clear that the fluid is Newtonian. That is, macromolecules (polymers) may have been added to induce gelation via depletion, which could affect the medium's hydrodynamic response. Moreover, even very well synthesized colloidal spheres are not perfectly spherical or hard <cit.>. This has led to debate on the relevance of contact interactions <cit.>. Both of these effects limit the applicability of analytic lubrication expressions, which are derived for perfect spheres in a Newtonian solvent. Nonetheless, it is important to fundamentally understand what effect HLFs can have on gelation, gels structure, and aging, before discounting these in favor of other mechanisms. This leads us to revisit colloidal gelation in this work.
It should be noted that LB — and many other approximations for HIs — possesses a near-field increase of the hydrodynamic friction between approaching particles <cit.>. In the case of embedded particles that couple to the lattice fluid on a sub-grid scale, the increase is non-divergent and a mix between far-field flows and compressibility / approximation-level artifacts. For Ladd-type boundaries, there are also issues, but these can be explicitly lubrication corrected <cit.>, i.e., the spurious near-field friction is subtracted and replaced by accurate analytical lubrication approximations. However, it is not clear how to effectively do this when studying the dynamics of suspended sub-grid particles experiencing thermal fluctuations. Lubrication-corrected Ladd-type particles typically require too many lattice sites to effectively study gelation, as the number of particles is limited by the computational requirements. A different approach is thus needed.
Here, we thus consider the effect of truncating the HIs at four distinct levels, which allows us to chart the specific effect of each approximation. The four levels of approximation are: (i) Free-draining spheres (FD) — effectively captured by BD simulations — will serve as our reference point. This model considers forces between the fluid and colloids only at the one-body level, i.e., the colloids experience Stokes drag but have no long-ranged or short-ranged HIs. (ii) Rotne-Prager-Yamakawa (RPY) hydrodynamics <cit.>, which approximate HIs using far-field Green's functions at a pair level. That is, RPY HIs ignore the intrinsic many-body effects present between suspended spheres, which is typically considered reasonable for low ϕ. Here, we do not make use of lubrication corrections. (iii) Stokesian Dynamics (SD) <cit.> without lubrication corrections (SD^ff). This approach accounts for the many-body effects, but has limited accuracy for spheres that approach each other closely, as is the case in colloidal gels. Of the approaches, it is most closely comparable to the work by De Graaf et al. <cit.> — the LB method is many-body in nature and we will contrast Ref. <cit.>'s results to those obtained here. SD is based on matrix inversion and is typically more costly than LB, limiting the number of particles that we studied to about 7,000. (iv) Lastly, we consider lubrication-corrected SD (simply SD throughout). This is the most accurate of all techniques in describing the interactions between spheres suspended in a Stokes fluid. However, this comes at the price of being the most computationally expensive. Even full SD remains approximate in the mid-field, i.e., when transitioning from lubrication to the far-field expressions.
Using the above four methods, we find that the dynamics of the gel are highly sensitive to the type of hydrodynamic model used, in line with previous studies. However, the (quasi-)steady-state structure remains relatively unaffected between these, provided HLFs are not present. Interestingly, when HLFs are introduced, significant differences are observed, particularly for colloid volume fractions ϕ < 0.138. In addition to altering the structure of the gel, resulting in smaller voids and clusters at the percolation point, lubrication also accelerates gel aging. This may seem counterintuitive, as lubrication is associated with the strongest hydrodynamic dissipation. The explanation for both these effects is that lubrication interactions facilitate phase separation, by interfering with bonding and preventing the rupture of clusters. Additionally, HLFs suppress non-collective modes in the gel arms, leading to more vigorous dynamics.
The remainder of this paper is organized as follows. In Section <ref>, we first provide an overview of the numerical methods used in this work. Section <ref> contains specific details of the hydrodynamic models considered, while in Section <ref> we present the quantities used to characterize our gel systems. Next, in Section <ref>, we show the results of our simulations and describe the system's dynamics and structure. Here, we also provide the first insights into the effects of HIs and HLFs. In Section <ref>, we connect our results with the existing literature and provide an explanation for the deviations observed in SD simulations. We also discuss some of the advantages and limitations of the SD method. Finally, Section <ref> concludes with a summary of our findings and an outlook on future directions for particle-based simulations of colloidal gels.
§ SIMULATION SETUP
We want to study the influence of hydrodynamic interactions on the structural formation and evolution of a colloidal gel. We do so by performing particle-based simulations of neutrally buoyant spherical particles of diameter σ, suspended in a fluid with viscosity η. We simulate only the colloids and account for the presence of the polymers that cause depletion attraction via a generalized “high-exponent” LJ potential
V_LJ^he = ϵ [ ( σ/r )^96 - 2 ( σ/r )^48 ],
where the interaction strength ϵ is set to 10 k_B T. This is a smooth approximation to the well-known Asakura-Oosawa-Vrij interaction <cit.> when combined with steric repulsion that prevents overlap.
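For reference, a minimal Python sketch of this pair interaction (a hypothetical helper for illustration, not the actual HOOMD-blue pair-potential object used in our simulations) reads:

# Minimal sketch of the high-exponent LJ pair interaction used for the depletion attraction.
import numpy as np

def v_he_lj(r, eps, sigma_d):
    """Potential V(r) = eps * [ (sigma/r)^96 - 2 (sigma/r)^48 ]; minimum V(sigma) = -eps."""
    s = (sigma_d / r) ** 48
    return eps * (s * s - 2.0 * s)

def f_he_lj(r, eps, sigma_d):
    """Radial force magnitude F(r) = -dV/dr; positive values are repulsive."""
    s = (sigma_d / r) ** 48
    return 96.0 * eps * (s * s - s) / r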
We can write the equations of motion for the entire system of N colloids using three 6N-dimensional generalised force vectors
0 = ℱ^𝒫 + ℱ^ℋ + ℱ^ℬ ,
which group forces and torques together. The first term in the right-hand side of Eq. (<ref>), ℱ^𝒫, is a 6N-dimensional vector with the first 3N components containing the inter-particle forces and the last 3N entries are zero (our interaction potentials are torque free). The second term, ℱ^ℋ contains hydrodynamic drag forces. In a steady Stokes' flow, this term can be written as the product of a hydrodynamic resistance matrix, which depends only on the relative distances between particles, and a 6N-dimensional vector 𝒰 containing translational- and rotational-velocities:
ℱ^ℋ = - 𝐑_FU·𝒰.
The last term in Eq. (<ref>), ℱ^ℬ, contains forces and torques that account for Brownian motion. These are instantaneously correlated through the HIs and their variance is given by the fluctuation–dissipation theorem <cit.>. In a simulation with time-step size Δ t, we can express this correlation as <cit.>:
⟨ℱ^ℬ(t_i) ℱ^ℬ(t_j) ⟩ - ⟨ℱ^ℬ(t_i)⟩⟨ℱ^ℬ(t_j)⟩ = 2 k_B T/Δ t𝐑_FUδ_ij,
where δ_ij represents the Kronecker delta and the angled brackets indicate a time average.
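For a dense resistance matrix, one way to realise forces with this covariance is through a Cholesky factor of 𝐑_FU. The sketch below is purely illustrative; methods such as the PSE plugin used for our SD simulations are designed to avoid dense factorizations of this kind.

# Minimal sketch: Brownian forces with covariance 2 k_B T / dt * R_FU via a Cholesky factor.
import numpy as np

def brownian_forces(R_FU, kBT, dt, rng):
    """R_FU: (6N, 6N) symmetric positive-definite resistance matrix."""
    L = np.linalg.cholesky(R_FU)                  # R_FU = L L^T
    psi = rng.standard_normal(R_FU.shape[0])      # zero mean, unit variance
    return np.sqrt(2.0 * kBT / dt) * L @ psi      # E[F F^T] = 2 kBT / dt * R_FU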
To compute the Brownian contributions, we must specify by which convention we evaluate them <cit.>. For numerical studies, a common choice is the Itô convention. At each (discrete) time t_i, we compute 𝐑_FU using the relative particle positions 𝐫(t_i). Following this convention, the Brownian forces acquire a non-zero average
⟨ℱ^ℬ(t_i) ⟩ = k_B T 𝐑_FU·∇·𝐑^-1_FU ,
which is usually called “Brownian drift”. Physically, its presence is required to ensure stationarity under the Gibbs–Boltzmann distribution and generate particle configurations with the correct statistics at equilibrium <cit.>.
Combining the above, the equation for particle displacement reads
Δ𝐫/Δ t = 𝐑_FU^-1·ℱ^𝒫 + √(2k_B T/Δ t)𝐑_FU^-1/2·ψ + k_B T ∇·𝐑_FU^-1,
with ψ a 6N-dimensional vector containing uniformly distributed random variables with zero mean and unit variance. We must evaluate 𝐑_FU to integrate the colloid trajectories in time. In general, computing an exact hydrodynamic resistance or mobility matrix is a task with a high computational expense, involving the approximate solution of the Stokes equations in the fluid phase surrounding the particles. Section <ref> provides details on the four approximations to 𝐑_FU, which we used to make it feasible to perform dynamical simulations.
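As a concrete illustration of the time stepping, the sketch below (illustrative only, not the HOOMD-blue integrator used in this work) performs a single update in the simplest free-draining case, where 𝐑_FU is diagonal and the divergence term in the displacement equation vanishes:

# Minimal sketch: one Euler-Maruyama step for free-draining (FD) spheres, where
# R_FU = 3 pi eta sigma * I and the Brownian drift term is identically zero.
import numpy as np

def fd_step(r, forces, dt, kBT, eta, sigma_d, rng):
    """r, forces: (N, 3) arrays of positions and inter-particle forces."""
    gamma = 3.0 * np.pi * eta * sigma_d                  # Stokes drag coefficient
    noise = rng.standard_normal(r.shape)                 # zero mean, unit variance
    return r + dt * forces / gamma + np.sqrt(2.0 * kBT * dt / gamma) * noise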
We used periodic, cubic boxes with an edge length L such that the volume fractions of the systems were ϕ∈ [0.075, 0.138, 0.225]. The number of colloids in the boxes was chosen to be N=1,000 and ≈ 7,000, with L suitably modified to achieve a given ϕ. Unless otherwise explicitly indicated, all simulation results presented use the larger particle number. Following <cit.>, we first equilibrated each configuration for 50 τ_B using a purely repulsive inter-particle potential (r_cut = σ), where τ_B = σ^2/(4D) is the Brownian time of the colloids with single-particle translational diffusion coefficient D. Subsequently, the gels were formed via an instantaneous deep quench from a purely repulsive potential to one with the aforementioned 10 k_BT attraction strength, and left to form and age for [20, 50, 80, 1000]τ_B.
The FD, SD^ff, and SD simulations were performed using HOOMD-blue, a GPU-compatible Python package developed in the Glotzer Lab <cit.>. For the SD^ff and SD simulations we used Fiore's external plugin “Positively-split Ewald” (PSEv3) to implement the Stokesian Dynamics algorithm <cit.>. The RPY simulations were performed using UAMMD, a GPU-accelerated software infrastructure for complex fluids simulations <cit.>. For each data point we ran 3 independent simulations (10 for the smaller system sizes). Each typically took several hours to several days to run on a desktop (i9-10900) with modern GPU (NVidia RTX 2060 Super).
§ HYDRODYNAMIC MODELS
In this section, we provide some details on the hydrodynamic approximations used in our work, aimed at contextualizing our results. For each approximation, we provide references to help the interested reader to learn more about that approach. Figure <ref> gives a visual summary of these approaches and introduces a color coding for the methods that we use throughout.
FD spheres are arguably the simplest model for HIs. Here, HIs are accounted for only at a one-body level. This means that each particle experiences the same drag force (given a velocity), irrespective of the presence of other particles in the simulation volume. In this case, the resistance matrix takes the simple diagonal form R^(ij)_FU = 3 πησδ_ij and the thermalization is similarly reduced, see Fig. <ref>a. This makes the FD approximation the fastest method at our disposal; it is fully equivalent to regular BD. The efficiency comes at the cost of neglecting HIs at any relative distance, which means that this approximation produces unrealistic dynamics for suspended colloids. This is not an issue for studying equilibrium systems, though it strongly affects the dynamics of formation <cit.> and collapse <cit.> in colloidal gels.
A more realistic approximation involves a direct construction of the inverse of the resistance matrix, the mobility matrix. This can be done using pairwise Green's functions for the interactions between spheres, which are called RPY tensors, as depicted in Fig. <ref>b. At a two-body level, these follow from the combination of Faxén's laws for spherical particles and a multi-pole expansion of the solution to Stokes' equation <cit.>. For two particles in an unbounded solvent, the tensor takes the form
𝐌^(i,j)_RPY = 1/3πησ
( 3σ/8r + σ^3/16r^3 ) 𝐈 + ( 3σ/8r - 3σ^3/16r^3 ) 𝐫̂𝐫̂, r > σ,
( 1 - 9r/16σ) 𝐈 + ( 3r/16σ ) 𝐫̂𝐫̂, r ≤σ,
𝐈 , i = j,
where 𝐈 is the 3-dimensional identity matrix, r=|𝐫_i - 𝐫_j|, and 𝐫̂ is the center-to-center unit vector. It is implied that 𝐫̂𝐫̂ represents the Kronecker product of the two vectors and acts as a 3
× 3 matrix. Invoking pair-wise addition, the tensor for a system of N colloids can be written as
𝐌_RPY = ( 𝐌^(1,1)_RPY 𝐌^(1,2)_RPY
... 𝐌^(1,N)_RPY
𝐌^(2,1)_RPY 𝐌^(2,2)_RPY 𝐌^(2,N)_RPY
⋮ ⋱ ⋮
𝐌^(N,1)_RPY 𝐌^(N,2)_RPY
... 𝐌^(N,N)_RPY ) = 𝐑^-1_FU.
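As an illustration of how these blocks are assembled, the following Python sketch builds the dense translational RPY mobility matrix directly from the piecewise expression above, for an open (non-periodic) system with distinct particle positions; the periodic, Fourier-space construction used in our simulations is more involved.

# Minimal sketch: dense 3N x 3N translational RPY mobility matrix (open boundaries).
import numpy as np

def rpy_mobility(positions, sigma_d, eta):
    N = len(positions)
    M = np.zeros((3 * N, 3 * N))
    mob0 = 1.0 / (3.0 * np.pi * eta * sigma_d)        # single-particle mobility
    I3 = np.eye(3)
    for i in range(N):
        M[3*i:3*i+3, 3*i:3*i+3] = mob0 * I3           # self-mobility block
        for j in range(i + 1, N):
            d = positions[i] - positions[j]
            r = np.linalg.norm(d)
            rr = np.outer(d, d) / r**2                # dyadic product r_hat r_hat
            if r > sigma_d:
                blk = mob0 * ((3*sigma_d/(8*r) + sigma_d**3/(16*r**3)) * I3
                              + (3*sigma_d/(8*r) - 3*sigma_d**3/(16*r**3)) * rr)
            else:                                     # overlapping branch
                blk = mob0 * ((1 - 9*r/(16*sigma_d)) * I3 + (3*r/(16*sigma_d)) * rr)
            M[3*i:3*i+3, 3*j:3*j+3] = blk
            M[3*j:3*j+3, 3*i:3*i+3] = blk
    return M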
In periodic domains, as is the case in our simulations, a compact form of the tensor can be computed in Fourier space, resulting in a positive-definite matrix for any particle configuration <cit.>. The downside of using an RPY-based resistance matrix is that its pair-wise nature overestimates hydrodynamic forces when many particles are relatively close. RPY cannot reproduce screening effects at large volume fractions <cit.>, which are known to impact the dynamics in dense suspensions. Gels are partially dense and partially open, which raises the question whether RPY is sufficiently accurate. The Swan group <cit.> showed that RPY is able to produce distributions of the particle contact number that are representative of those observed in experiment. As we will discuss in Section <ref>, the contact number distribution may not always be a telling quantifier.
The SD method takes into account far-field, many-body HIs by computing the resistance matrix from a grand-mobility matrix ℳ_ff, via matrix inversion. The latter is constructed similarly to 𝐌_RPY, but the number of hydrodynamic moments included in the multi-pole expansion is higher than in the RPY formulation <cit.>. As a result, the grand-mobility matrix is an 11N × 11N matrix, and 𝐑_FU is equal to the upper left 6N × 6N block of ℳ^-1_ff. The operation of matrix inversion gives rise to many-body hydrodynamic interactions <cit.> (see Fig. <ref>c), which would be absent otherwise, since ℳ_ff is computed pair-wise, as with the RPY tensor.
The HIs discussed thus far are all far-field approximations, since the multipole expansion is truncated at a finite order. As a consequence, short-range hydrodynamic forces are poorly reproduced. In particular, the divergent part of such interactions — following from lubrication — is completely neglected. In the framework of SD, such interactions are instead computed from the analytical expressions derived by Jeffrey and Onishi <cit.> for two spheres at close proximity. These expressions are used to compute a two-sphere resistance matrix accurate at small separation, which is then used as a building block to assemble the lubrication grand-resistance matrix ℛ_nf, as illustrated in Fig. <ref>d. However, part of the two-body resistance interactions has already been included upon the inversion of ℳ_ff and, therefore, must be subtracted from the total grand-resistance matrix. The matrix containing these interactions is found by simply inverting a two-body mobility matrix. Thus, the total grand-resistance matrix, which contains both near-field lubrication and far-field many-body interactions, is
ℛ = ℳ^-1_ff + ℛ_nf - ℛ^2B_ff,
where the last term contains the aforementioned corrections. 𝐑_FU is then equal to the upper-left block of ℛ.
It should be noted that here we applied a 1% positive shift in the colloid diameter σ used in the inter-particle potential of Eq. (<ref>), compared to the one used to construct the resistance matrix 𝐑_FU. We did this to prevent particle-particle overlaps, which would make the numerical scheme unstable. Even though lubrication effects are already dominant at surface-to-surface separations ≈ 10% of the colloids diameter, the imposed shift might have weakened the contribution of lubrication to both gel dynamics and structure. Reducing this shift should make the effect of lubrication more prominent.
§ SYSTEM CHARACTERIZATION
We used various characterization techniques to understand the dynamics in our colloidal gels and quantify their structure. Here, we predominantly made use of post-processing with the Python packages “freud” <cit.> and “NetworkX” <cit.>. In terms of identifying structures, we consider colloids to be bonded when their center-to-center distance Δ r ≤ 1.05 σ, as in Ref. <cit.>.
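As a concrete illustration of this bookkeeping, a minimal sketch of the bond and cluster identification is given below. It is not the exact freud-based pipeline used for our analysis; it assumes a cubic periodic box of edge length box_length and uses scipy and NetworkX, with all names and defaults being illustrative assumptions.

import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def bond_graph(positions, box_length, sigma, cutoff_factor=1.05):
    """Bond network: two colloids are bonded when their center-to-center
    distance is <= cutoff_factor * sigma (1.05 sigma in the main text)."""
    tree = cKDTree(positions % box_length, boxsize=box_length)  # periodic box
    graph = nx.Graph()
    graph.add_nodes_from(range(len(positions)))
    graph.add_edges_from(tree.query_pairs(r=cutoff_factor * sigma))
    return graph

def clusters(graph):
    """Clusters as lists of particle indices, largest first."""
    return sorted(nx.connected_components(graph), key=len, reverse=True)

The first entry returned by clusters() then provides N_c^max entering the fraction n_c^max discussed below.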
In order to highlight the effects of HIs, we study the shape of the clusters using the relative shape anisotropy. This is defined as
⟨κ^2 ⟩ = 3/2⟨∑_iλ^4_i /( ∑_iλ^2_i )^2⟩ - 1/2,
where the average is performed over the total number of clusters in the system, and λ_i are the eigenvalues of each gyration tensor. Values of κ^2 close to zero identify spherically symmetric clusters, whereas values approaching unity occur only when all the colloids in the cluster lie on a straight line.
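A per-cluster evaluation of this quantity, following the formula above literally (eigenvalues of the gyration tensor, coordinates assumed already unwrapped across periodic boundaries), could look as follows; the reported ⟨κ^2⟩ is then the average over all clusters.

import numpy as np

def relative_shape_anisotropy(cluster_positions):
    """kappa^2 of one cluster from the eigenvalues lambda_i of its gyration
    tensor; 0 for a spherically symmetric cluster, 1 for a straight line."""
    r = cluster_positions - cluster_positions.mean(axis=0)
    gyration_tensor = r.T @ r / len(r)
    lam = np.linalg.eigvalsh(gyration_tensor)
    return 1.5 * np.sum(lam**4) / np.sum(lam**2)**2 - 0.5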
To quantify the degree of phase-separation in the system, we make use of the void volume (VV). Here, the VV is computed by considering the volume of a sphere centered at a point in the system that is just in contact with the surface of the nearest colloid <cit.>. From such single-point sphere volumes, we can compute the average void volume ⟨ VV ⟩ via Monte-Carlo integration. The VV allows us to identify structural dissimilarities on the scale of the gel arms, rather than at the level of the particles (characterized by the coordination number).
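A Monte-Carlo estimate of ⟨VV⟩ in this spirit is sketched below; the precise convention (sample points inside colloids contribute zero volume, and the sphere radius is the distance to the nearest colloid surface) is our assumption for this illustration.

import numpy as np
from scipy.spatial import cKDTree

def mean_void_volume(positions, box_length, sigma, n_samples=100_000, seed=0):
    """<VV> from random sample points: each point contributes the volume of
    the sphere centred there that just touches the nearest colloid surface."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(positions % box_length, boxsize=box_length)
    samples = rng.uniform(0.0, box_length, size=(n_samples, 3))
    dist, _ = tree.query(samples)                    # distance to nearest centre
    radius = np.clip(dist - 0.5 * sigma, 0.0, None)  # zero inside a colloid
    return np.mean(4.0 / 3.0 * np.pi * radius**3)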
Another quantity capable of indicating the level of coarsening in the gel is the tortuosity parameter ξ. This is defined as
ξ = 2/( N_c(N_c - 1) ) ∑^N_c_i=1∑^N_c_j>i[ 1 - D^E(r_i,r_j)/D^N(r_i,r_j) ],
with N_c the number of colloids in the considered cluster, and D^E and D^N the Euclidean and network distances, respectively. That is, ξ∈ [0,1] indicates how tortuous the gel network is, with low values of ξ representing systems close to completing the phase separation, and values close to one for networks with a non-trivial topology and, thus, far from completing the phase-separation process. As we will show in Section <ref>, the large differences between the hydrodynamic models studied manifest in ξ rather than in VV, with the latter showing smaller deviations.
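The corresponding computation of ξ for a single cluster is sketched below, reusing the bond graph from above. We take the network distance D^N as the shortest path along bonds with Euclidean bond lengths as edge weights, and assume unwrapped coordinates; both choices are assumptions of this illustration rather than a statement of our exact implementation.

import numpy as np
import networkx as nx

def tortuosity(graph, positions):
    """xi of one (connected) cluster: pairwise comparison of Euclidean and
    network (shortest-path) distances, as in the definition in the text."""
    for i, j in graph.edges:
        graph[i][j]["weight"] = float(np.linalg.norm(positions[i] - positions[j]))
    network = dict(nx.all_pairs_dijkstra_path_length(graph, weight="weight"))
    nodes = list(graph.nodes)
    total, pairs = 0.0, 0
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            i, j = nodes[a], nodes[b]
            d_euclid = np.linalg.norm(positions[i] - positions[j])
            total += 1.0 - d_euclid / network[i][j]
            pairs += 1
    return total / pairs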
§ RESULTS
In this section, we provide the main results of our study. We start by showing the analogy to the LB-based work by De Graaf <cit.> to demonstrate the various methods provide similar far-field results. Here, we focus mostly on the cluster formation and percolation. Next, we turn to the aging of the system and the way it phase separates. Finally, we consider the state-space trajectories and demonstrate that lubricated interactions lead to a divergent structural pathway.
§.§ Gel Network Formation
We first focus on the short-time dynamics of our systems. Figure <ref> provides the fraction of particles n_c^max = N_c^max / N that belong to the largest cluster. We observe that the overall trends are comparable to those of De Graaf et al. <cit.>. For the lowest ϕ, simulations performed with FD exhibit a slower growth than any other hydrodynamic model, while for larger ϕ, FD predicts the fastest cluster growth; a crossover regime is observed for ϕ = 0.138. For completeness, we indicate the power-law scaling that was obtained by De Graaf et al., which appears to match the trends that we find well. Note that particle incorporation in the largest cluster is substantially faster, for our lubricated SD simulations at low ϕ. We will come back to this result in the following.
We also determined the onset of percolation as a function of ϕ. Figure <ref> shows the percolation time t_p, defined as the moment at which there exists a cluster that percolates in at least one spatial dimension. In line with Ref. <cit.>, we find that in all cases, HIs speed up the percolation process at low ϕ, and slow it down in denser systems. Note that the dramatic effect of lubrication interactions at low ϕ is borne out by the percolation transition as well. Contrasting SD and SD^ff allows us to assess the role of squeeze flows. Our results show that these play a role, but that the effect at intermediate to high volume fractions (for colloidal gelation) is limited. Interestingly, for all regimes, there is limited distinction between RPY and SD^ff dynamics. This suggests that many-body hydrodynamic effects are of limited importance to gelation.
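One standard way to detect such spanning, and the one we assume in the following sketch (it may differ in detail from our actual analysis), is to unwrap a cluster along a spanning tree of its bond graph and flag percolation when a bond closes a loop around the periodic box.

import numpy as np
import networkx as nx

def percolates(graph, cluster, positions, box_length, tol=1e-6):
    """True if the cluster connects to its own periodic image, i.e. it wraps
    the box in at least one direction."""
    sub = graph.subgraph(cluster)
    start = next(iter(cluster))
    unwrapped = {start: positions[start].astype(float).copy()}
    for i, j in nx.bfs_edges(sub, start):            # unwrap along a BFS tree
        d = positions[j] - positions[i]
        d -= box_length * np.round(d / box_length)   # minimum-image bond vector
        unwrapped[j] = unwrapped[i] + d
    for i, j in sub.edges:                           # check the remaining bonds
        d = positions[j] - positions[i]
        d -= box_length * np.round(d / box_length)
        if np.any(np.abs(unwrapped[i] + d - unwrapped[j]) > tol):
            return True                              # inconsistent image: wraps
    return False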
However, following the largest cluster may not reveal all features of the evolution of the system. We discriminate the effect of various hydrodynamic contributions in the formation of a space-spanning network, by analyzing the shape and size of the clusters that are present in the sample at a given time. Figure <ref>a reports the mean relative shape anisotropy, ⟨κ^2⟩. It is clear that this generally decreases as clusters form. This is understood as follows: dimers have κ^2 = 1 and, as the clusters grow, the branching structures become more isotropic, leading to a decrease of the overall value of ⟨κ^2⟩.
At short times, newly formed clusters have heterogeneous shapes in any hydrodynamic model considered, and, as they diffuse, these clusters will also reconfigure into more compact shapes in order to minimize their energy. The characteristic time related to the latter process is in competition with percolation, as compact clusters form space-spanning network in a less efficient way compared to more elongated ones. Systems with ϕ≳ 0.138 show a clear minimum in ⟨κ^2⟩ at the percolation transition. At larger times, the growth of the mean shape anisotropy could be related to the network compaction, as pointed out by Tsurusawa et al. <cit.> in their experimental study. However, the values of κ^2 are certainly influenced by the periodic boundaries, once a cluster forms that spans the entire simulation box. The long-time increase in ⟨κ^2 ⟩ could be an artifact of this. We deem it irrelevant to our study, as we focus on short-time dynamics (t ≤ t_p) in analyzing the relative shape anisotropy.
Figure <ref>b shows the mean number of clustered particles ⟨ n_c ⟩ = ⟨ N_c ⟩ / N as a function of time. It is clear that for intermediate to high ϕ, the early time cluster dynamics is difficult to distinguish between the various approaches. Systematically, the values for the FD simulations appear lower, but recall that a fair comparison requires an analysis in structure space <cit.>. Interestingly, the situation is distinct for low ϕ, as all curves are initially different, though the FD, RPY, and SD^ff results start to track each other beyond t = t_p. The behavior of our SD simulations always remains separate from these curves, suggesting that lubrication has the ability to modify the structure as well as the dynamics.
In general, the effect of HIs is to both decrease the bonding rate of colloids and increase the time needed to rearrange clusters into lower energy configurations. Thus, on average, HIs impede cluster growth (see Fig. <ref>b). For large volume fractions, the time it takes to form a large percolated cluster, which depends on both ϕ and the colloid bonding rate, is small compared to the time required by clusters to relax their shape. This becomes clear from Fig. <ref>a, where large ϕ curves overlap once scaled by their respective percolation times. Thus, in that regime of ϕ, the only noticeable effect of HIs on the gel formation is to decrease the bonding rate of colloids, thereby slowing down short-time gel dynamics.
In contrast, for the lowest ϕ, the characteristic time associated with cluster reshaping is comparable with t_p, as shown in Fig. <ref>a. In this figure, FD systems reach a plateau in the mean relative shape anisotropy κ^2 before percolation occurs, while the other hydrodynamic models maintain on average more elongated shapes. This promotes the emergence of large structures composed of strand-like clusters of colloids, which speeds up space exploration, the capturing of smaller clusters, and ultimately percolation <cit.>. On the other hand, clusters of particles can rapidly rearrange into more compact configurations in FD simulations, thus disfavouring the formation of a percolated network-like structure.
§.§ Aging and Phase-Separation
Next, we shift our focus to the long-time dynamics in the gelled system. Figure <ref> shows the time evolution of the tortuosity ξ for our three ϕ of interest. Note that the curves peak approximately at the respective percolation points. Any mismatch is likely explained by our limited system size and our choice of defining percolation as spanning in one direction. After the systems have percolated, the resulting network-like structures will keep spinodally decomposing into colloid-rich and colloid-poor phases, though in an arrested manner. Systems with HIs exhibit a markedly faster decay in ξ for all values of ϕ examined, especially when the SD approximation is used. All of the intermediate- to high-ϕ systems show a plateauing of ξ, suggesting that these display strongly arrested dynamics. Low-ϕ systems appear to remain more dynamical over the range that we could simulate. We will argue that this effect is caused by the long-range hydrodynamic interactions that propagate among the gel arms.
We also studied the mean squared displacement ⟨ r^2 ⟩ (MSD), averaged over all colloids in the simulation volume. Figure <ref> shows that, at short times, all MSDs display a diffusive (linear) regime, as expected for systems that have not yet significantly clustered. In the long-time limit, all systems become subdiffusive, which is representative of the caging colloids experience in the network structure <cit.>. For our FD system, we were able to simulate sufficiently long to characterize the trend, which appears to follow a power law: ⟨ r^2 ⟩∝ t^1/4.
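For reference, the MSD reported here can be obtained from a trajectory of unwrapped coordinates with a few lines of numpy; averaging over multiple time origins (not done in this minimal sketch) would improve the statistics.

import numpy as np

def mean_squared_displacement(trajectory):
    """<r^2>(t) averaged over all colloids; `trajectory` has shape
    (n_frames, n_particles, 3) and must hold unwrapped coordinates."""
    displacement = trajectory - trajectory[0]
    return np.mean(np.sum(displacement**2, axis=-1), axis=-1)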
Surprisingly, for the lowest ϕ, SD-approximated HIs give rise to a transient superdiffusive regime (⟨ r^2 ⟩∝ t^1.35), right after the system percolates. This can be related to the smaller mean cluster size ⟨ n_c ⟩ that SD has before and at percolation, see Fig. <ref>b.
At percolation, the size of the largest cluster is roughly the same in every hydrodynamic model, meaning that in SD simulations the percolated network is surrounded by smaller “free” clusters on average. Thermal fluctuations cause displacements of the network arms, which in turn propagate long-range hydrodynamic forces onto the remaining clusters (except in FD systems). Smaller clusters will experience larger displacements due to these interactions, increasing on average the MSD in the SD systems. This is what we alluded to at the end of the opening paragraph to this section. Lastly, all long-time MSDs with HIs are larger than those of the FD systems. The data is suggestive of a collective speeding up of the aging dynamics, which we again relate to HIs generated by the arms of the colloidal gel.
§.§ State-Space Trajectories
In the previous section, we have shown that HIs strongly affect the dynamics of the system, which was expected, barring the unexpectedly large impact of HLFs at low ϕ. Here, we investigate whether this strong change in dynamics is indicative of an actual change in gel structure. Figure <ref>a shows the systems' trajectory in a “state space” formed by the average void volume ⟨ VV ⟩ and the average number of nearest neighbors ⟨ z ⟩. In this representation, time is parametric and its direction is indicated by the arrow. The results are similar to those reported by De Graaf et al. <cit.>. All hydrodynamic models roughly follow the same trajectory in state space, except for systems that include hydrodynamic lubrication. These show (small) deviations that become larger as the ϕ is decreased.
We highlight the differences displayed by lubricated systems in Fig. <ref>b, which shows the gel trajectories in a state space spanned by ⟨ VV ⟩ and the tortuosity of the largest cluster ξ^max. The qualitative picture remains roughly unchanged for large and intermediate ϕ, with deviations from the simple FD predictions becoming more significant as the hydrodynamic model considered becomes more accurate. For the lowest ϕ considered here, SD systems pursue a clearly distinct path in state space from those obtained using FD, RPY, and SD^ff. We conclude that tortuosity is a better characterizer of differences in colloidal gels than void volumes, and that HLFs can strongly impact structure at low ϕ.
§ DISCUSSION
We showed that far-field HIs change the dynamics of colloidal gel formation, as already found in other numerical studies of gelation <cit.>. However, unless HLFs are included, this has little impact on the structural paths that these systems explore as they form, in line with De Graaf et al. <cit.>. Nonetheless, HLFs can strongly impact the dynamics and structure of a gel, in contradiction to the results of Bybee and Higdon <cit.>. The important difference here is that we include far-field effects. The strong effect of HLFs on structure are thus likely a consequence of an interplay with far-field HIs. Speculating, it could be that the fluid particle dynamics simulations by Furukawa and Tanaka <cit.> picked up on this effect, as these authors used relatively low volume fractions, weak gels, and numerically refined particles.
Let us now explain the counterintuitive result obtained by combining HLFs and far-field HIs during aging. The effect of far-field-only HIs on the aging process is clear: a less static network will coarsen faster over time. Adding HLFs to this accelerates aging further, for two reasons: (i) Divergent dissipative forces hinder both the creation and rupture of colloidal bonds. That is, it is harder to push the colloids in and out of contact, meaning that once arms have formed, they are less likely to break. (ii) Thermal fluctuations are more collective in nature when including HLFs. To understand this, note that Brownian velocities can be decomposed into a near- and a far-field contribution <cit.>: ℱ^ℬ = ℱ^ℬ_nf + ℱ^ℬ_ff. The former are fundamentally different when lubrication is included. In this case, the effect of lubrication is to filter out relative thermal displacements and rotations of colloids that are in close proximity. Thus, the spectrum of fluctuations is mainly populated by collective modes, which displace coherently the colloids that belong to the same gel arm and facilitate the coarsening process. As a result, the flux of particles migrating from the colloid-rich to the colloid-poor regions is negatively affected by HLFs, thus favouring phase-separation. It would be worthwhile to revisit the work on the hopping dynamics of colloids in a gel arm <cit.> in view of this.
In terms of computational cost, SD simulations are clearly the most expensive. This cost stems mainly from the matrix inversion needed to reproduce many-body far-field HIs, rather than the inclusion of near-field lubrication. Fortunately, HLFs slow down bonding dynamics, allowing the use of larger Δ t, which makes SD effectively more efficient than SD^ff. Nonetheless, full SD remains a costly numerical scheme in systems with strong short-ranged interactions such as colloidal gels, practically limiting the system sizes that can be studied to a few thousand particles. Considering that RPY and SD^ff have very similar behaviors, it could be worthwhile to investigate how accurate a lubricated RPY algorithm is in reproducing full SD results. If this approach is successful, much larger system sizes can be considered.
§ CONCLUSIONS AND OUTLOOK
Summarizing, we have studied the effect of hydrodynamic interactions in the formation and aging of colloidal gels. We considered single-body (Brownian Dynamics), pair-wise (Rotne-Prager-Yamakawa), many-body (Stokesian Dynamics), and lubrication-corrected many-body interactions. As expected, we found that the dynamics of gel formation is sensitive to the exact nature of the approximation that is used. However, the steady-state structure is relatively unaffected when considering far-field hydrodynamic interactions and appropriate rescaling of time, in line with a previous analysis based on lattice Boltzmann simulations.
Intriguingly, introducing hydrodynamic lubrication corrections results in significant departures from the rescaled trend, especially for colloid volume fractions ϕ≲ 0.138. Not only is the structure altered, with on average smaller voids and clusters at the percolation point, but lubrication also accelerates the aging of the gel. This result seems counterintuitive, as the lubricated regime is typically associated with the strongest hydrodynamic dissipation, and is not in line with previous understanding in the literature. Both the dynamic and the structural effect can, however, be explained by the fact that lubrication interactions hinder the bonding and rupturing of clusters, as well as suppress non-collective Brownian modes in the gel arms. These aspects combine to facilitate phase-separation. The key point is that both far-field hydrodynamics and lubrication forces must be present to realize this enhanced separation, which is where our study improves upon the existing literature. Studying colloidal gelation thus requires a relatively costly, but accurate representation of hydrodynamic interactions.
Moving away from the idealized Stokes-flow (approximate) solutions that we considered here, our findings strongly suggest that near-contact dynamics should be given due consideration in the modeling of experimental colloidal gels. This work provides a solid foundation for future studies that make the comparison between experiment and simulation. In particular, it will be relevant to see how the lubricated dynamics impacts gels subjected to externally imposed stimuli, such as shear.
§ ACKNOWLEDGEMENTS
The authors acknowledge NWO for funding through OCENW.KLEIN.354. We are grateful to the late Prof. James Swan for initial discussions on the possible differences between LB and SD simulations of colloidal gels. We thank Dr. Andrew Fiore and Dr. Madhu Majji for their help with getting PSE up and running with HOOMD-blue, as well as Dr. Gwynn J. Elfring and Dr. Zhouyang Ge for sharing an accelerated variant of PSE and discussing minor bugs in the resistance tensor. An open data package containing the means to reproduce the results of the simulations is available at: [DOI]
The Dirichlet problem for the minimal surface equation on unbounded helicoidal
domains of ℝ^m
Ari Aiolfi, Caroline Assmann, Jaime Ripoll
Accepted 16 June 2023
=============================================================================================
We consider a helicoidal group G in ℝ^n+1 and unbounded
G-invariant C^2,α-domains Ω⊂ℝ^n+1 whose
helicoidal projections are exterior domains in ℝ^n, n≥2. We
show that for all s∈ℝ, there exists a G-invariant solution
u_s∈ C^2,α( Ω) of the Dirichlet
problem for the minimal surface equation with zero boundary data which
satisfies sup_∂Ω|gradu_s|
=| s|. Additionally, we provide further information on
the behavior of these solutions at infinity.
§ INTRODUCTION
The Dirichlet problem for the minimal surface equation (mse) in ℝ
^m, m≥2, namely,
{[ ℳ( u) :=div( grad u/√(1+|grad u| ^2)) =0 in Ω, u∈ C^2( Ω) ∩ C^0( Ω); u|_∂Ω=φ ].
where Ω⊂ℝ^m is a C^2-domain and φ∈
C^0( ∂Ω) is given a priori, has been intensively
explored in the last decades. One of the most general answers to the Dirichlet
problem (<ref>) for bounded domain was given by H. Jenkins and J. Serrin in
<cit.>. They showed that (<ref>) is solvable for arbitrary φ∈
C^0( ∂Ω) if and only if Ω is mean convex.
Moreover, they noted that if φ∈ C^2( ∂Ω), a bound on the oscillation of φ in terms of the second order norm
of φ should be enough to ensure the solvability of (<ref>) on
arbitrary bounded domains (Theorem 2 of <cit.>).
The study of the Dirichlet problem for the mse on unbounded domains began with
J. C. C. Nitsche in the so called exterior domains that is, when
ℝ^m-Ω is relatively compact (Section 4 of <cit.>). Since
then several authors continue the investigation of the Dirichlet problem for
the mse in exterior domains (<cit.>, <cit.>, <cit.>, <cit.>
,<cit.> <cit.>, <cit.> <cit.>).
The Dirichlet problem (<ref>) for more general unbounded domains reduces, to the authors' knowledge, to few works: when m=2, Rosenberg and Sa Earp (<cit.>) proved that (<ref>) has a solution for any continuous boundary data φ if Ω⊂ℝ^2 is a convex subset distinct from a half plane. In the case that φ is bounded, Collin and Krust (<cit.>) proved that if Ω is a convex set distinct from a half plane, then the solution is unique and, if Ω is a half plane, then there is a unique solution with linear growth. For an arbitrary dimension m, Z. Jin, in <cit.>, proved that (<ref>) has a solution if Ω is a mean convex domain contained in some special parabola-shaped region or in the complement of a cone in ℝ^m. More recently, N. Edelen and Z. Wang proved that if Ω⊊ℝ^n is an open convex domain (e.g. a half-space) and φ∈ C^0( ∂Ω) is a linear function, then any solution of (<ref>) must also be linear.
In our work we obtain an extension of the exterior Dirichlet problem for the
minimal surface equation in ℝ^m, m≥3, in the following
sense: we say that a domain is k-bounded, 0≤ k≤ m, if it is bounded
in k directions of the space ℝ^m (by a direction we mean an equivalence class of parallel lines of ℝ^m). Thus, a domain of ℝ^m is relatively compact if and only if it is m-bounded. In our main results we study the Dirichlet problem for the mse on certain domains Ω of ℝ^m, m≥3, whose complement ℝ^m-Ω is (m-1)-bounded, and for zero boundary data. We recall that
Theorem 3.5 of <cit.> proves the existence of solutions of the Dirichlet
problem for the mse on a strip, a special 1-bounded domain of ℝ
^2, for arbitrary continuous bounded boundary data.
To state precisely our theorems we need to recall a result of the third author
and F. Tomi (Theorem 2 of <cit.>) which asserts that if Ω
is G-invariant C^2,α-domain for m≥3, where G is a subgroup
of ISO( ℝ^m) that acts freely and properly on
ℝ^m, such that P( Ω) is a bounded and mean
convex domain, then (<ref>) has an unique G-invariant solution for any
G-invariant boundary data φ∈ C^2,α( ∂Ω), where P:ℝ^m⟶ℝ^m/G is
the projection through the orbits of G and ℝ^m/G is endowed
with a metric such that P becomes a Riemannian submersion.
Related to the above result, we would like to mention that the use of a Lie
group of symmetries to study minimal surfaces was first considered by Wu-yi
Hsiang and Blaine Lawson in <cit.>. Although proving distinct facts, we can
say that Proposition 3 of <cit.> has the same spirit of Theorem 2 of
<cit.>. Also related to these results, there is the idea of using Lie
groups of symmetries to the study of minimal graphs (constant mean curvature
graphs and more general PDE's too), as Killing graphs in warped products. This
technique was first considered by Marcos Dajczer and the third author of this
work in <cit.> and, since then, many works have been done extending and
generalizing the results of <cit.>, as <cit.>, <cit.> and
<cit.>.
Let λ∈ℝ, a∈ℝ-{0} and i,j,k∈{
1,...,n+1} be given with any two i,j and k distinct. Consider
the helicoidal like group G≡ G_λ,a^i,j,k in ℝ
^n+1, n≥2, determined by the one parameter subgroup of isometries
G={φ_t} _t∈ℝ, where φ
_t:ℝ^n+1⟶ℝ^n+1 is given by
φ_t( p) =α( p) e_i+β(
p) e_j+γ( p) e_k+
∑_l≠ i,j,k
x_le_l,
where p=( x_1,...,x_n+1), { e_i}
_i=1^n+1 is usual orthonormal basis of ℝ^n+1,
α( p) =x_icosλ t+x_jsinλ t,
β( p) =x_jcosλ t-x_isinλ t
and γ( p) =x_k+at.
Let π:ℝ^n+1⟶{ x_k=0}≡ℝ^n be the helicoidal projection determined by G,
that is,
π( p) =α( p) e_i+β( p) e_j+
∑_l≠ i,j,k
x_le_l,
where
α( p) =x_icosλ x_k/a-x_j
sinλ x_k/a, β( p)
=x_jcosλ x_k/a+x_isinλ x_k/a.
Set
M:=(ℝ^n,⟨ ,⟩ _G),
where ⟨,⟩ _G is the metric such
that π becomes a Riemannian submersion (clearly G acts freely and
properly in ℝ^n+1 and { x_k=0}≡ℝ^n is a slice relatively to the orbits Gp={φ
_t( p) , t∈ℝ}). One may see that
the map ψ:ℝ^n+1/G→ℝ^n given by ψ
=π∘ P^-1 is well defined and is an isometry with the metrics
mentioned above. From the isometric submersion theory, it follows that M is complete.
Let Ω⊂ℝ^n+1 be a G-invariant domain of class
C^2,α and set Λ=π( Ω). Let d_E(
p) =d_E( p,∂Ω), p∈Ω, be the
(Euclidean) distance in ℝ^n+1 to ∂Ω and let
d( q) =d_M( q,∂Λ), q∈Λ,
be the distance in M to ∂Λ. Given u∈ C^2(
Ω) ∩ C^0( Ω) and φ∈
C^0( ∂Ω) , G-invariant functions, that is,
u=v∘π for some v∈ C^2( Λ) ∩ C^0(
Λ) and φ=ψ∘π for some ψ∈
C^0( ∂Λ), it follows from Proposition 3 of
<cit.> that u is solution of (<ref>) (relatively to m=n+1) if, and
only if,
{[ 𝔐( v) :=div_M( ∇ v/√(1+|∇ v| ^2)) -1/√(1+|∇ v| ^2)⟨∇ v,J⟩ _M=0 on Λ; v|_∂Λ=ψ ].
where ∇ and div_M are the gradient and divergence in
M, respectively, and
J( π( p) ) =dπ_p( H_Gp( p) ) ,p∈ℝ^n+1,
where H_Gp is the mean curvature vector of the
1-dimensional submanifold Gp of ℝ^n+1. Moreover, |∇u| =|∇ v|∘π, where
∇ denotes the gradient in ℝ^n+1.
(The white region in Figure 1 is a G-invariant domain Ω⊂ℝ^3 (with λ=a=1), whose boundary is the surface Ψ:[ 0,π] ×ℝ⟶ℝ^3 given by
Ψ( t,z) =( cos t cos z+( 5+sin t) sin z,( 5+sin t) cos z-cos t sin z,z) .
Note that ℝ^3-Ω is an example of a 2-bounded domain in ℝ^3.) When Λ is a bounded and mean convex domain, it is proved in <cit.>, as mentioned above, that there is a unique
G-invariant solution u∈ C^2,α( Ω)
of (<ref>) for all G-invariant φ∈ C^2,α(
∂Ω). In this note we work with the case where
Λ=π( Ω) is an exterior domain in ℝ^n
and the boundary data is zero. We observe that Λ is an exterior domain
in M if and only if Ω is n-bounded.
Regarding the condition of zero boundary data, we recall an old but, for our work, quite suggestive result of Osserman, which proves that in ℝ^2 with the Euclidean metric there is a boundary data on a disk D for which there is no solution to the exterior Dirichlet problem in ℝ^2-D. This strongly suggests, since K_M>0, that the zero boundary data condition cannot be dropped in our case either. However, we do not have a counterexample.
In order to state our main results, we remember that, relatively to an
exterior domain ℝ^n-𝔅_ρ(
p_0) in ℝ^n, n≥2, where 𝔅_ρ( p_0) is an open ball of ℝ^n centered at
p_0∈ℝ^n and of radius ρ>0, the function
v_ρ( p) :=ρ∫_1^τ/ρdt/√(t^2( n-1) -1), τ=| p-p_0
|, p∈ℝ^n-𝔅_ρ(
p_0) ,
is the solution relatively to the Dirichlet problem (<ref>) which
satisfies
lim_p→∂𝔅_ρ( p_0)|∇v_ρ( p) | =∞, lim_| p|→∞|∇v_ρ( p) | =0
(a half-catenoid). If n≥3, v_ρ is bounded and its height at
infinity, which we denote by h( n,ρ), is
h( n,ρ) =ρ h( n,1) =ρ∫_1^∞dt/√(t^2( n-1) -1).
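For completeness, the scaling h( n,ρ) =ρ h( n,1) and the convergence of this integral follow from the substitution t=τ/ρ (standard calculus, recorded here only as a reading aid):
h( n,ρ) =lim_| p-p_0|→∞v_ρ( p) =ρ∫_1^∞dt/√(t^2( n-1) -1), with integrand ∼ t^-( n-1) as t→∞,
so the integral converges exactly when n-1≥2, that is n≥3; for n=2 it diverges logarithmically, consistent with the unboundedness of the half-catenoid in the plane.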
In all of the results from now on, G≡ G_λ,a^i,j,k is as
defined in (<ref>), with λ,a,i,j,k fixed.
We prove:
Let Ω ⊂ℝ^n+1, n≥2, be a G-invariant
and C^2,α-domain such that π( Ω) is an
exterior domain of ℝ^n={ x_k=0}, where G and
π are as defined in (<ref>) and (<ref>), respectively. Let
ϱ>0 be the radius of the smallest geodesic ball of ℝ^n which contains ∂π( Ω), centered at the origin of ℝ^n if λ≠0. Then, given s∈ℝ, there is a
G-invariant solution u_s∈ C^2,α( Ω) of the Dirichlet problem (<ref>) with u_s|_∂Ω=0, such
that:
i) sup_Ω|∇u_s| =sup_∂Ω|∇u_s| =| s|;
ii) u_s is unbounded if n=2 and s≠0 and, if n≥3, either sup| u_s|≤ h( n,ϱ) or there is a complete, non-compact, properly embedded n-dimensional submanifold N⊂Ω, such that
lim_d_E( p) →+∞u_s|_N( p) =h( n,ϱ) ,
where h( n,ϱ) is given by (<ref>);
iii) u_s satisfies
lim_d_E( p) →∞|∇u_s( p) | =0
if λ=0 or s=0 or, if λ≠0, 3≤ n≤6 and u_s is bounded.
Additional informations relatively to set of G-invariant solutions of the
Dirichlet problem (<ref>) also are obtained under the assumption that
M-π( Ω) satisfies the interior sphere
condition of radius r>0, that is, for each q∈∂π( Ω), there is a geodesic sphere S_q of M of
radius r contained in M-π( Ω) such that S_q is
tangent to ∂π( Ω) at q and r is maximal with
this property.
Given a∈ℝ- { 0}, λ∈ℝ,
n≥3 and r>0 set
C=C( r,n,λ,a) :=2| a|(
n-1) +|λ| r/2| a| r.
Let ς>C be the solution of the equation
cosh( μ/√(μ^2-C^2)) =μ/C,
μ>C,
and set
ℒ=ℒ( r,n,λ,a) :={[ 1/√(ς^2-C^2) if λ≠0; h( n,r) if λ=0 ]. ,
where h( n,r) is given by (<ref>).
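Although not needed for the proofs, the constant ℒ can easily be evaluated numerically. The sketch below (our own illustration, not part of the paper's arguments) solves the equation above for ς with scipy, using the equivalent form cosh^-1( μ/C) =μ/√(μ^2-C^2) to avoid numerical overflow near μ=C; it assumes the root is unique, as stated in the text.

import numpy as np
from scipy.optimize import brentq

def script_L(r, n, lam, a):
    """Evaluate L(r, n, lambda, a) for lambda != 0: solve
    cosh(mu / sqrt(mu^2 - C^2)) = mu / C for its root varsigma > C and
    return 1 / sqrt(varsigma^2 - C^2)."""
    C = (2.0 * abs(a) * (n - 1) + abs(lam) * r) / (2.0 * abs(a) * r)
    # equivalent form mu/sqrt(mu^2 - C^2) = arccosh(mu/C), safer near mu = C
    g = lambda mu: mu / np.sqrt(mu**2 - C**2) - np.arccosh(mu / C)
    # g > 0 just above C, and g(10C) = 10/sqrt(99) - arccosh(10) < 0,
    # so the root is bracketed in (C, 10C)
    varsigma = brentq(g, C * (1.0 + 1e-9), 10.0 * C)
    return 1.0 / np.sqrt(varsigma**2 - C**2)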
Let Ω ⊂ℝ^n+1, n≥3, be a
G-invariant and C^2,α-domain such that π( Ω)
is an exterior domain of ℝ^n={ x_k=0}, where
G and π are as defined in (<ref>) and (<ref>), respectively. Assume
that M-π( Ω) satisfies the interior sphere condition of
radius r>0, where M is the n-dimensional Riemannian manifold given by
(<ref>). Let ℒ=ℒ( r,n,λ,a) be as defined in (<ref>), where λ and a are given in the definition of G. Then, given c∈[ 0,ℒ], there is a G-invariant solution u_c∈ C^2( Ω) ∩ C^0( Ω) of (<ref>) with u_c|_∂Ω=0, such that
lim_d_E( p) →∞u_c( p) =c.
In particular, if c∈[ 0,ℒ) then u_c∈ C^2,α( Ω).
We note that our approach is not applicable for general boundary data.
Moreover, we were not able to prove that the solutions u_s obtained in
Theorem <ref> have a limit at infinity; this could be the subject of future research.
§ PRELIMINARIES
We first observe that, relative to the PDE given in (<ref>), the maximum and comparison principles apply (see Section 3 of <cit.>).
In this section, we give further information on M and we provide the basic
results to construct barriers relative to the Dirichlet problem (<ref>)
when the boundary data is zero. We shall use the meaning of the indexes i
and j as defined in (<ref>).
Let G be as defined in (<ref>). Given p=( x_1
,...,x_n+1) ∈ℝ^n+1, the orbit Gp has constant
curvature
H_Gp=λ^2√(x_i^2+x_j^2)/λ^2(x_i
^2+x_j^2)+a^2.
In particular,
sup_p∈ℝ^n+1H_Gp=|λ|/2| a|,
and the supremum is attained, if λ≠0, at those orbits through the points p in ℝ^n+1 such that |( x_i,x_j) | =| a| /|λ|.
An arch length parametrization of Gp is given by
γ_p(s)=F( p) e_i+G( p) e_j+H(
p) e_k+∑_l≠ i,j,kx_le_l.
where
F( p) =x_icos(A( p) s)+x_jsin(A(
p) s), G( p) =x_jcos(A( p)
s)-x_isin(A( p) s)
and
H( p) =x_k+aA( p) s/λ,
with
A(p)=λ/√(λ^2(x_i^2+x_j^2)+a^2).
The mean curvature vector of Gp is then
H_Gp(γ_p(s))=γ_p^''
(s)=-A^2( p) [ F( p) e_i+G(
p) e_j] ,
and, consequently, the mean curvature of Gp in ℝ^n+1 is given
by (<ref>). Since the mean curvature of the orbit Gp only depends on the
Euclidean distance of x_ie_i+x_je_j to the origin, setting
σ( p) =|( x_i,x_j) |, we have that ξ:[0,∞)⟶ℝ given
by
ξ( σ) =λ^2σ/λ^2σ
^2+a^2,
has an absolute maximum at σ_0=| a| /|λ| if λ≠0, and the result follows.
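For the reader's convenience, the elementary computation behind the last step is:
ξ^'( σ) =λ^2( a^2-λ^2σ^2) /( λ^2σ^2+a^2) ^2=0 ⟺ σ_0=| a| /|λ|, and ξ( σ_0) =λ^2σ_0/( 2a^2) =|λ| /2| a| ,
which is the supremum stated in the Lemma.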
Since π:ℝ^n+1⟶ M is a Riemannian submersion,
given two orthogonal vector fields X,Y∈χ( M) and their
respective horizontal lift X,Y, we know that
K_M( X,Y) =K_ℝ^n+1( X
,Y) +3/4|[ X,Y] ^v| _ℝ^n+1^2
where K and [ X,Y] ^v means,
respectively, the sectional curvature and the vertical component of [
X,Y], that is, the component which is tangent
to the orbits Gp, p∈ℝ^n+1. As K_ℝ^n+1(
X,Y) =0, it follows that K_M(
X,Y) ≥0 and, therefore, Ric_M≥0 (straightforward, but quite
extensive calculations, give us that, in fact, K_M>0 with K_M
→0 at infinity).
Let Λ be an exterior domain in M. Denote by ν the
horizontal lift of ∇ d, where d=d_M( .,∂Λ). Then
⟨∇ d,J⟩ _M∘π=⟨ν
,H_G⟩
where ⟨ ,⟩ is the Euclidean metric and J is given
by (<ref>). In particular
-H_Gp( p) ≤⟨∇ d,J⟩ _M(
π( p) ) ,
for all p∈π^-1( Λ), where H_Gp is given by
(<ref>).
As H_G and ν are the horizontal lift of J and
∇ d respectively, we have J( π( p) )
=dπ_p( H_Gp( p) ) and
∇ d( π( p) ) =dπ_p( ν(
p) ). Since π is a Riemannian submersion,
dπ_p|_[ T_pℝ^n+1] ^h:[ T_p
ℝ^n+1] ^h⟶ T_π( p) M
is an isometry, where [ T_pℝ^n+1] ^h means the
horizontal vector space relatively to Gp at p and, from this, we have
(<ref>). In particular, ⟨ν,H_Gp
⟩ is constant along Gp. Note that |ν|
=1 since |∇ d| =1. Thus
-H_Gp( p) =-|H_Gp( p)
|≤⟨ν,H_Gp⟩(
p)
and the result follows.
Let G, π and M=(ℝ^n,⟨
,⟩ _G) as defined in (<ref>), (<ref>) and (<ref>),
respectively, and assume n≥3 and λ≠0. Let o be an arbitrary
but fixed point of M and let Λ=M-B_r(o), where
B_r(o) is the open geodesic ball of M of radius r centered at o. Let
b∈ℝ satisfying b>C, where C is given by (<ref>). Consider
ψ:[0,∞)⟶ℝ given by
ψ( t) =1/bcosh^-1( 1+bt)
and Λ_0:={ q∈Λ;d( q) ≤ t_0}, where
t_0=b-C/bC.
Then w=ψ∘ d:Λ_0→ℝ is such that w∈
C^2( Λ_0) ∩ C^0( Λ
_0), w|_∂Λ=0, w>0 on Λ_0,
lim_d( q) →0|∇ w( q) | =+∞
and 𝔐( w) ≤0 on Λ_0, where
𝔐 is the operator defined in (<ref>).
Let φ∈ C^2(( 0,∞)) ∩C^0(
[0,∞)) to be determined a posteriori and consider the
function w:Λ⊂ M⟶ℝ given by w(
q) =( φ∘ d) ( q). Straightforward
calculations give us that, in Λ,
div_M( ∇ w/√(1+|∇ w| ^2)
) =g( d) Δ d+g^'( d)
and
1/√(1+|∇ w| ^2)⟨∇
w,J⟩ _M=g( d) ⟨∇ d,J⟩
_M,
where
g( d) :=φ^'( d) /√(1+[
φ^'( d) ] ^2),
Δ is the Laplacian in M and "^'" means ∂/∂ d. Thus, 𝔐( w) ≤0 in Λ if
and only if
g( d) Δ d+g^'( d) -g( d)
⟨∇ d,J⟩ _M≤0 in Λ.
From the Laplacian's Comparison Theorem, since Ric_M≥0 and M=n,
we have
Δ d( q) ≤n-1/d( q) +r≤n-1/r, q∈Λ.
Now, we assume that our φ satisfies φ( 0) =0 and
φ^'( d) >0 for d>0 (consequently, φ( d) >0 for d>0). From (<ref>), it follows that g(
d) >0 for d>0 and, from (<ref>), we conclude that if
( n-1) g( d) /r+g^'( d)
-g( d) ⟨∇ d,J⟩ _M≤0 in
Λ
then we have (<ref>). From Lemma <ref> and (<ref>), we see that
-|λ|/2| a|≤
-H_Gp( p) ≤⟨∇ d,J⟩ _M∘π( p)
and, then, if
g^'( d) /g( d) ≤-C in
Λ,
where C is given by (<ref>), then we have (<ref>). From (<ref>), we
see that (<ref>) is equivalent to
φ^''( d) /φ^'( d) ≤-C( 1+[ φ^'( d) ] ^2) in Λ.
We will assume from now on that our φ satisfies
lim_d→0φ^'( d) =+∞.
A function φ which satisfies all the requirements demanded until now
is given by
φ( t) =αcosh^-1( 1+bt)
t≥0, with α, b positive constants to be determinate, where,
here, t=d( q), q∈Λ. Assuming such one φ, we
see that (<ref>) is equivalent to
-b( 1+bt) /( ( 1+bt) ^2-1+α
^2b^2) ≤-C.
We assume α,b such that α b=1. Thus, we have (<ref>) if
t≤b-C/bC, C<b.
Thus, assuming b>C, setting
t_0:=b-C/bC,
considering the neighborhood of ∂Λ in Λ given by
Λ_0:={ q∈Λ;d( q) ≤ t_0} ,
and the function
ψ( t) =1/bcosh^-1( 1+bt) ,
t≥0,
we have that
w( q) =ψ∘ d( q) , q∈Λ_0,
satisfies
𝔐( w) ≤0
in Λ_0 (note that exp_∂Λ:∂Λ×0,t_0]⟶Λ_0, exp_∂Λ( p,t) =exp_ptη( p), where η is the
unit vector field normal to ∂Λ that points to Λ, is a
diffeomorphism, since Λ is the exterior of a geodesic ball in M).
The other conclusions follow directly from the definition of ψ.
Assume the same hypotheses of Proposition <ref>. Let
ς>C be the solution of the equation (<ref>), where C=C(
r,n,λ,a) is given by (<ref>). The function w given in
Proposition <ref> with the largest height at ∂Λ_0-∂Λ is obtained by taking b=ς. In particular, for such w,
sup_Λ_0w=sup_∂Λ_0-∂Λw=ℒ,
where ℒ is given by (<ref>) and, setting W:Λ⊂
M⟶ℝ given by
W( q) ={[ w( q) , if q∈Λ_0; ℒ
if q∈Λ-Λ_0 ]. ,
we have W∈ C^0( Λ), radial with respect to the point o, 𝔐( W) =0 on Λ-∂Λ_0, W|_∂Λ=0, with
lim_d( q) →0|∇ W( q) | =+∞.
Given μ>C, take b as in Proposition <ref> given by b=μ. The
correspondent t_0 is
t_0=μ-C/μ C
and, at t_0, from (<ref>), we have ψ( t_0)
=μ^-1cosh^-1( μ C^-1). The function
f( μ) =1/μcosh^-1( μ/C)
, μ>C,
clearly satisfies f( μ) →0 when μ→ C
and f( μ) →0 when μ→+∞, and the absolute maximum of f in ( C,∞) is reached at the
point ς which is solution of (<ref>) and f(
ς) =( ς^2-C^2) ^-1/2. The other
conclusions follow from the definition of W and from the fact that Λ
is the exterior of a geodesic ball of M center at o.
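The maximality claim is a one-line calculus exercise, which we record here for convenience:
f^'( μ) =-1/μ^2cosh^-1( μ/C) +1/( μ√(μ^2-C^2)) =0 ⟺ cosh^-1( μ/C) =μ/√(μ^2-C^2) ⟺ cosh( μ/√(μ^2-C^2)) =μ/C,
which is exactly equation (<ref>), and then f( ς) =ς^-1cosh^-1( ς/C) =( ς^2-C^2) ^-1/2.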
We observe that if λ=0, then G is the group of translations in the e_k-direction. In this case, we have M≡ℝ^n
<ref>, is the exterior of the geodesic ball of ℝ^n of
radius r. Then v_r given by (<ref>) is a solution of (<ref>) if
the boundary data is zero. In particular, if n≥3, its height at infinity
is h( n,r), where h( n,r) is given by
(<ref>).
Let G, π and M=(ℝ^n,⟨ ,⟩
_G), n≥2, as defined in (<ref>), (<ref>) and (<ref>),
respectively. Let Λ_ρ:=M-𝔅_ρ(
0), where 𝔅_ρ( 0) is the open
geodesic ball of ℝ^n of radius ρ centered at origin of
ℝ^n. Then v_ρ∈ C^2( Λ_ρ) ∩
C^0( Λ_ρ) given by (<ref>) is a non-negative
solution of the Dirichlet (<ref>) relatively to Λ_ρ with
v_ρ|_∂Λ_ρ=0, which is unbounded if n=2 and
satisfies |∇ v_ρ|∘π=|∇v_ρ|,
lim_d( q) →0|∇ v_ρ( q) | =∞, lim_d( q) →∞|∇ v_ρ( q) | =0
and
lim_d( q) →∞v_ρ( q) =h( n,ρ) if n≥3
where d=d_M( .,∂Λ_ρ), h(
n,ρ) is given by (<ref>).
Let v_ρ:ℝ^n-𝔅_ρ( 0)
→ℝ be the function given by (<ref>), ℝ
^n≡{ x_k=0}. As the graph of v_ρ
is a minimal graph in ℝ^n+1 it follows that the graph of
u_ρ( x_1,...,x_n+1) :=v_ρ( x) ,x=
∑_i≠ k
x_ie_i∈ℝ^n-𝔅_ρ( 0) ,
is a minimal graph in ℝ^n+2. In particular |∇u_ρ( x_1,...,x_n+1) |
=|∇v_ρ( x) |. Note
that, setting Ω_ρ⊂ℝ^n+1 the domain of u_ρ,
we have Ω_ρ a G-invariant domain such that its helicoidal
projection π( Ω) on ℝ^n coincides with the
image of the its orthogonal projection on ℝ^n in this case. From
(<ref>) we see that | x| =|π(
x_1,...,x_n+1) |. As v_ρ is radial, it follows
that
u_ρ( x_1,...,x_n+1) =( v_ρ∘π)
( x_1,...,x_n+1)
and, then, u_ρ is a G-invariant function with u_ρ
|_∂Ω_ρ=0. It follows from Proposition 3 of <cit.> that
v_ρ∈ C^2( Λ_ρ) ∩ C^0(
Λ_ρ) is a solution of the Dirichlet problem (<ref>)
relatively to Λ_ρ with v_ρ|_∂Λ_ρ=0
and, moreover,
|∇ v_ρ|∘π=|∇u_ρ| .
The other conclusions follow immediately from the definition of u_ρ and
from the properties satisfied by v_ρ, taking into account that
d( q) →0 (d( q) →∞) if and only if d_E( q,∂Λ_ρ) →0
(d_E( q,∂Λ_ρ) →∞).
§ PROOF OF THE MAIN RESULTS
The proof of Theorem <ref> and Corollary <ref> follow directly from the
results of this section, by using Proposition 3 of <cit.>.
Let G, π and M=(ℝ^n,⟨ ,⟩
_G) as defined in (<ref>), (<ref>) and (<ref>), respectively. Let
U be an exterior C^2,α-domain in M, U=π( Ω), where Ω⊂ℝ^n+1 is a G-invariant C^2,α-domain and let d=d_M( .,∂ U). Let ϱ>0 be
the radius of the smallest geodesic ball of ℝ^n which contain
M-U, centered at origin of ℝ^n if λ≠0. Given
s≥0, there is a non-negative solution ϑ_s∈ C^2,α( U) of the Dirichlet problem (<ref>) with
ϑ_s|_∂ U=0, such that
sup_U|∇ϑ_s| =sup_∂
U|∇ϑ_s| =s,
ϑ_s is unbounded if n=2 and s≠0 and, if n≥3, either
sup_Uϑ_s≤ h( n,ϱ)
or there is a complete, non-compact, properly embedded ( n-1)-dimensional submanifold Σ of M, Σ⊂ U, such that
lim_d( q) →+∞ϑ_s|_Σ( q) =h( n,ϱ) ,
where h( n,ϱ) is given by (<ref>). Moreover,
lim_d( q) →∞|∇ϑ_s( q) | =0
if λ=0 or s=0 or, if λ≠0, 3≤ n≤6 and
ϑ_s is bounded.
As mentioned in Remark <ref>, if λ=0 then M≡ℝ^n
and the result is already contemplated in Theorem 1 of <cit.> for n≥3
and in Theorem 1 of <cit.> if n=2. The case s=0 is trivial. Assume
then s>0 and λ≠0.Let ρ>0 be such that ∂
U⊂𝔅_ρ( 0), where 𝔅_ρ:=𝔅_ρ( 0) is the open geodesic ball of
ℝ^n, centered at origin of ℝ^n and of radius ρ.
Let Λ_ρ=M-𝔅_ρ( 0). From
Proposition <ref> , there is v_ρ∈ C^∞( Λ_ρ) solution of (<ref>) relatively to Λ_ρ, with
v_ρ|_∂Λ_ρ=0, satisfying what is stated in
(<ref>). Since
lim_d_M( q,∂Λ_ρ) →∞|∇ v_ρ( q) | =0,
we can choose k>ρ such that
|∇ v_ρ| _∂𝔅_k(
0) ≤s/2,
where 𝔅_k( 0) is the open geodesic ball of
ℝ^n centered at origin and of radius k. Let U_k
=𝔅_k( 0) ∩ U and define
T_k:={[ t≥0 ;∃ w_t∈ C^2,α( U_k)
;𝔐( w_t) =0; sup_U_k|∇ w_t|≤ s, w_t
|_∂ U=0,w_t|_∂𝔅_k( 0) =t ]} .
Note that the constant function w_0≡0 on U_k satisfies
all the condition in (<ref>), then T_k≠∅. Moreover, sup
T_k<∞ since
sup_U_k|∇ w_t|≤ s
for all t∈ T_k. Now, since the maximum principle and comparison
principle are applicable relatively to the operator 𝔐, we can use
the same approach used in the proof of Theorem 1 of <cit.> to show that
t_k:=sup T_k∈ T_k,
sup_U_k‖∇ w_t_k‖ =s,
sup_∂𝔅_k|∇ w_t_k|≤
s/2
and, since ∂ U_k=∂ U∪∂𝔅_k(
0), to conclude that
sup_U_k|∇ w_t_k| =sup_∂
U|∇ w_t_k| =s.
The proof of these facts follows essentially the same steps of the aforementioned theorem (see p. 3067 and 3068 of <cit.>), and so we will not repeat it here. Now, taking k→∞ and using a diagonal argument, we
obtain a subsequence of ( w_t_k) which converges uniformly
in the C^2 norm in compact subsets of U to a function
ϑ_s∈ C^2,α( U) satisfying
𝔐( ϑ_s) =0 in U, ϑ
_s|_∂ U=0, which is non-negative and such that sup_U|∇ϑ_s| =sup_∂ U|∇ϑ_s| =s. In particular, from regularity elliptic
PDE theory (<cit.>), we have ϑ_s∈ C^∞( U).We will show now that if n=2 then ϑ_s is unbounded
and, if n≥3, we have either (<ref>) or (<ref>).Let
ϱ>0 be the radius of the smallest open geodesic ball of ℝ
^n which contain M-U, centered at origin of ℝ^n and denote
such ball by 𝔅_ϱ( 0). We have ∂
U⊂𝔅_ϱ( 0) and we can
conclude that ∂ U∩∂𝔅_ϱ≠∅. Let
Λ_ϱ=M-𝔅_ϱ( 0).
From Proposition <ref> , there is v_ϱ∈ C^∞(
Λ_ϱ), v_ρ|_∂Λ_ϱ=0,
solution of (<ref>), satisfying what is stated in
(<ref>) and (<ref>) relatively to Λ_ϱ if n≥3.
Otherwise, if n=2, v_ϱ is unbounded and satisfies the equalities
in (<ref>).Let q_0∈∂ U∩∂𝔅
_ϱ. Since
lim_d_M(q,∂Λ_ϱ)→0|∇ v_ϱ( q) | =+∞,
sup_U|∇ϑ_s| =sup_∂ U|∇ϑ_s| =s<+∞
and v_ϱ( q_0) =ϑ_s( q_0) =0,
it follows that there is an open set V_q_0 in U∩Λ_ϱ,
with q_0∈∂ V_q_0, such that ϑ_s<v_ϱ in
V_q_0. We claim that V_q_0 is unbounded. Suppose that V_q_0
is bounded. Since ϑ_s|_∂ V_q_0=v_ϱ|_∂
V_q_0, it follows that ϑ_s|_V_q_0 and
v_ϱ|_V_q_0 are distinct solutions to the Dirichlet
problem
{[ 𝔐(f)=0 in V_q_0, f∈ C^2( V_q_0
) ∩ C^0(V_q_0); f|_∂ V_q_0=ϑ_s|_∂ V_q_0 ]. ,
a contradiction, since a solution of (<ref>) is unique if the domain is
bounded. It follows that V_q_0 is unbounded. Note that we can have two
possibility for ∂ V_q_0: either ∂ V_q_0 is bounded
(in this case V_q_0=Λ_ϱ) or ∂ V_q_0 is
unbounded (in this case, setting Σ=∂ V_q_0, we have
Σ⊂U∩Λ_ϱ a complete
( n-1)-dimensional manifold of M).Assume first that
n≥3. In this case v_ϱ is bounded. If ∂ V_q_0 is
bounded, as in this case V_q_0=Λ_ϱ, this means that
ϑ_s<v_ϱ on Λ_ϱ and so we have (<ref>).
If ∂ V_q_0 is unbounded, as ϑ_s|_∂ V_q_0
=v_ϱ|_∂ V_q_0, we conclude that
lim_d( q) →+∞ϑ_s|_∂ V_q_0( q) =h( n,ϱ) .
Assume now that n=2. Note first that on Λ_ϱ⊂ℝ^2 it is well know that, for all s>0, there is a half catenoid
v_ϱ,s∈ C^2( Λ_ϱ) which is
unbounded (logarithmic growth), which satisfies v_ϱ,s|_∂Λ_ϱ=0 and |∇v_ϱ
,s| =s on ∂Λ_ϱ (=∂𝔅
_ϱ). The same arguments used in Proposition <ref> give us that
v_ϱ,s is solution for the Dirichlet problem (<ref>) for zero
boundary data relatively to Λ_ϱ and satisfies |∇ v_ϱ,s| =s on ∂Λ_ϱ, since
𝔅_ϱ is centered at origin of ℝ^2. As
sup_∂ U|∇ϑ_s| =s and ∂
U∩∂𝔅_ϱ≠∅, there is 0<s^'<s
such that v_ϱ,s^'∈ C^2( Λ_ϱ) as described above satisfies v_ϱ
,s^'<ϑ_s in some open set V⊂ U∩Λ_ϱ. The same arguments used before to prove that V_q_0 is unbounded give
us that V is unbounded and we see that if ∂ V is bounded then
V=Λ_ϱ. Thus, if ∂ V is bounded, we have
v_ϱ,s^'<ϑ_s on Λ_ϱ and then,
ϑ_s is unbounded. If ∂ V is unbounded, as v_ϱ
,s^'|_∂ V=ϑ_s|_∂ V, we have
lim_d( q) →+∞ϑ_s|_∂ V( q) =+∞
since
lim_d( q) →+∞v_ϱ,s^'|_∂ V( q) =+∞.
and, therefore, ϑ_s is unbounded.Now we will prove the
last affirmation of the proposition.As ϑ_s∈ C^2,α( U) is a solution of the Dirichlet problem
(<ref>) with ϑ_s|_∂ U=0, it follows from Proposition
3 of <cit.> that u=ϑ_s∘π∈ C^2,α(
Ω) satisfies ℳ( u) =0 in
Ω with u|_∂Ω=0 and |∇u( p) | =|∇(
ϑ_s∘π) ( p) |, p∈Ω,
where ℳ is the operator defined in (<ref>). Note that we have
necessarily Ω unbounded with d_E( p) →∞
if only if d( π( p) ) →∞, where
d_E is the Euclidean distance in ℝ^n+1 to ∂Ω.
Suppose that
d_E( p) →∞lim|∇u( p) |≠0.
Then there is ε>0 and a sequence ( p_n) in
Ω, with d_E( p_n) →∞ when
n→∞ such that |∇u(
p_n) |≥ε for all n large enough, n≥
n_0. For each n∈ℕ, define
Ω_n={ p∈ℝ^n+1;p+p_n∈Ω}
and consider the sequence of functions u_n:Ω_n⊂ℝ
^n+1⟶ℝ given by u_n( p) =u(
p+p_n). Note that 0∈Ω_n for all n since (
p_n) ⊂Ω. Also
ℝ^n+1=⋃_n∈ℕΩ_n.
Indeed, given w∈ℝ^n+1, if the sequence (w+p_n)_n were
contained in ℝ^n+1-Ω, since π( ℝ
^n+1-Ω) is compact, we would have d_E( w+p_n
) ≤ R for all n, for some R>0, a contradiction, since
d_E( p_n) →∞. It follows that, as
u_n( 0) =u( p_n), for all n≥ n_0 we
have |∇u_n( 0) |≥ε. Note that ( u_n) is uniformly bounded
since, by hypothesis, n≥3 and ϑ_s is bounded. Then (
u_n) has a subsequence ( u_n_k) which converges
uniformly on compact subsets of ℝ^n+1 to a bounded function
u defined on the whole ℝ^n+1 which satisfies
ℳ( u) =0. Assume that dimension of M
satisfies 3≤ n≤6. Then Ω⊂ℝ^m, 4≤ m≤7.
From Bersntein Theorem extended to ℝ^m, 2≤ m≤7, by Simons
(<cit.>) - which is false for m≥8 (<cit.>) - it follows that
u has to be constant. Therefore, we cannot have |∇u_n_k( 0) |≥ε for
all n_k≥ n_0, a contradiction. Hence
d_E( p) →∞lim|∇u( p) | =0.
From Proposition 3 of <cit.> we have |∇u| =|∇ϑ_s|∘π and,
therefore
d( q) →∞lim|∇ϑ_s( q) | =0.
If λ=0 and n=2, it was proved in <cit.> that, setting
ϑ_∞( p) :=s→∞limϑ_s( p) ,p∈ U,
ϑ_∞∈ C^2( U) ∩ C^0( U) and is an unbounded solution of the Dirichlet problem
(<ref>) with ϑ_∞|_∂ U=0, which satisfies
d_E( p) →∞lim|∇u( p) | =0.
If λ=0 and n≥3, it was proved in <cit.> that ϑ
_∞, as defined in (<ref>), is in C^2( U), is a
bounded solution of the Dirichlet problem (<ref>) and its graph is
contained in a C^1,1-manifold Υ⊂U×ℝ such that ∂Υ=∂Ω.
Let G, π and M=(ℝ^n,⟨ ,⟩
_G), n≥3, as defined in (<ref>), (<ref>) and (<ref>),
respectively. Let U be an exterior C^2,α-domain in M,
U=π( Ω), where Ω⊂ℝ^n+1 is a G-invariant C^2,α-domain and let
d=d_M( .,∂ U). Assume that M-U satisfies the
interior sphere condition of radius r>0. Let ℒ
=
ℒ
( r,n,λ a) be as given in (<ref>), where λ and
a are given in the definition of G. Then, given c∈0,
ℒ
], there is w_c∈ C^2( U) ∩ C^0( U) solution of the Dirichlet problem (<ref>) relatively to U,
with w_c|_∂ U=0, which satisfies
lim_d( q) →∞w_c( q) =c.
In particular, if c∈0,
ℒ
), then w_c∈ C^2,α( U).
If λ=0 the result is already contemplated in Theorem 1 of <cit.>.
Assume λ≠0. Given c∈[ 0,ℒ), define
ϝ={[ f∈ C^0( U) ;f is subsolution
relative to 𝔐; f|_∂ U=0 and d( q) →∞limsup f( q) ≤ c ]} .
Note that ϝ≠∅ since f_0≡0∈ ϝ. From
comparison principle we have f≤ c for all f∈ϝ. From Perron
method applied relatively to operator 𝔐 (<cit.> - Section 2.8,
<cit.> - Section 3), we conclude that
w_c( q) :=sup{ f( q) ;f∈ϝ}, q∈U,
is in C^∞( U) and satisfies𝔐
( w_c) =0 in U. We will show now that
lim_d( q) →∞w_c( q) =c.
Consider α>0 such that 𝔅_α( 0), the
geodesic ball of ℝ^n center at origin of ℝ^n, of
radius α>0, contain M-U and be such that v_α(
∞) >c, where v_α is as defined in (<ref>) (note
that v_α( ∞) =α h( n,1) and
h( n,1) >0). Define now f∈ C^0( U) by
f( q) ={[ 0 if q∈U∩𝔅_α( 0); max{0,v_α( q) -( v_α( ∞)
-c) }, if q∈ M-𝔅_α( 0) . ].
From Proposition <ref> we have f a non-negative (generalized) subsolution
to the Dirichlet problem (<ref>) (for zero boundary data) relatively to
U, which satisfies
d( q) →∞limf( q) =c.
It follows that f∈ϝ and that f≤ w_c≤ c. Then we have
(<ref>).Given q_0∈∂ U, by hypothesis there is a
geodesic open ball of M, say B_r, of radius r>0, contained in M-U
and such that ∂ B_r is tangent to ∂ U (=∂(
M-U)) at q_0. From Corollary <ref>, there is W∈
C^0( M-B_r) a (generalized) supersolution
relatively to the operator 𝔐 on M-B_r such that c≤
W( ∞) =
ℒ, with W|_∂ B_r=0, which is C^1 in a neighborhood of
∂ B_r in M-B_r and such that
d_M( q,∂ B_r) →0lim|∇ W( q) | =+∞.
From the comparison principle, since U⊂ M-B_r, it follows
that on U, we have 0≤ w_c≤ W. As q_0 is arbitrary,
we conclude that w_c∈ C^0( U) with
w_c|_∂ U=0.Assume that 0≤ c<
ℒ. Let δ=( c+
ℒ
) /2. Let L( σ) :=
ℒ
( σ,n,λ,a), σ∈(0,r], where ℒ is given by (<ref>). Since L∈ C^0(0,r], either there is σ
_0∈(0,r) such that L( σ_0) =δ, or
δ<L(σ) for all σ∈( 0,r). Take r^'=σ_0 in the first case and r^' any point in (0,r) in the
second case. Let B_r^' be the open geodesic ball of M,
B_r^'⊂ B_r, with the same center of B_r. Consider the
correspondent t_0>0 and the function w∈ C^2( Λ
_0) ∩ C^0( Λ_0), w|_∂
B_r^'=0, where Λ_0={ q∈ M-B_r^'
;d_M( q,∂ B_r^') ≤ t_0} is such
that 𝔐( w) ≤0, as given in Corollary <ref>.
If the second case occurs, the height of this w is greater than δ at
its correspondent distance t_0 to ∂ B_r^' and, as w is
radial with respect to the center of B_r^', there is t_0
^'<t_0 such that, at the distance t_0^' of ∂
B_r^', the height of w is δ. In any case, there is
0<t_0^'≤ t_0 such that, setting
W_r^'( q) ={[ w( q) if q∈Λ_0^'; δ if q∈ M-Λ_0^' ].
where
Λ_0^':={ q∈Λ;d( q) ≤
t_0^'} .
satisfies the same properties of the function W as given in Corollary
<ref>. Note that it is possible to translate the graph of W_r^'
in the ∂/∂ t-direction (e_k-direction) in a way that its
height at infinity is in [c,δ) and such that Γ, the intersection
of the hypersurface resulting of this displacement with {
t≥0} is such that
∂Γ=Γ∩ B_r=∂ B_r^'',
with B_r^''⊂ B_r a geodesic open ball of M with the
same center of B_r and radius r^'', being r^'<r^''<r. Now, move Γ keeping ∂Γ on M and
the center of ∂Γ on the geodesic of M linking the center of
B_r to q_0∈∂ U, until ∂Γ touches ∂ U
at q_0 and call Γ this final hypersurface. Observe that
such displacement is an isometry in M×ℝ. Denote by
B_r^'' the geodesic ball contained in B_r of
radius r^'' such that ∂B_r^''
=∂Γ. We have then that W_r^''
:M-B_r^''⟶ℝ is a
(generalized) supersolution relatively to 𝔐. Moreover, since our
translation in ∂/∂ t direction is small enough, W_r^'' satisfies 𝔐( W_r^'') =0 in
Λ_0^'', where Λ_0^'' is
a neighborhood in U such that q_0∈∂Λ_0^''.
In particular, W_r^''∈ C^∞( Λ_0^''). From comparison principle, since 0≤
w_c|_∂ U≤ W_r^''|_∂ U and c≤
W_r^''( ∞), we conclude that 0≤
w_c≤ W_r^'' on U. As w_c(
q_0) =W_r^''( q_0)and W_r^''∈ C^∞( Λ_0^'') it follows that w_c∈ C^1( U) and, from elliptic PDE regularity theory ( <cit.>), it follows that
w_c∈ C^2,α( U) ∩ C^∞(
U).
99
ABRA. Aiolfi, D. Bustos, J. Ripoll: On the existence of foliations
by solutions to the exterior Dirichlet problem for the minimal surface
equation, Proceedings of the AMS vol. 150, no.7, p. 3063-3073 (2022).
BGGE. Bombieri, E. De Giorgi, E. Giusti: Minimal cones and the
Bernstein problem. Invent. Math. 7, p. 243–268 (1969).
CKP. Collin, R. Krust: Le problème de Dirichlet pour
l'équation des surfaces minimales sur des domaines non bornés Bull.
Soc. Math. France, 119, no. 4, p. 443-462 (1991).
DRM. Dajczer, J. Ripoll: An extension of a theorem of Serrin to
graphs in warped products, J. Geom. Anal., 15 193–205 (2005).
DHLM. Dajczer, P. Hinojosa, J.H de Lira: Killing graphs with
prescribed mean curvature, Calc. Var. Partial Differ. Eq. 33, p. 231–248 (2008).
DLM. Dajczer, J.H de Lira: Killing graphs with prescribed mean
curvature and Riemannian submersions, Ann. Inst. H. Poincaré (Anal. Non
Linéaire), vol. 26, no 3, p. 763-775 (2009).
ERoR. Sa Earp, H, Rosenberg: The Dirichlet problem for thé
minimal surface equation on unbounded planar domains, J. Math. Pures Appl.,
68, p. 163-183 (1989).
HLWu-yi Hsiang, B. Lawson, Jr: Minimal submanifolds of low
cohomogeneity, Journal of Differential Geometry, p. 1-38 (1971).
CHHLJB Casteras, E. Heinonen, I. Holopainen, J. Lira: Asymptotic
Dirichlet problems in warped products, Math. Z. 295, 211–248 (2020).
JSH. Jenkins, J. Serrin: The Dirichlet problem for the minimal
surface equation in higher dimensions. J. Reine Angew. Math. 229, p. 170–187 (1968).
JZ. Jin: Growth rate and existence of solutions to Dirichlet
problems for prescribed mean curvature equation on unbounded domains,
Electronic J. of Diff. Equations, 24, p. 1–15 (2008)
GTD. Gilbarg, N. Trudinger: Elliptic Partial Differential Equations
of Second Order, Springer, Berlin, 1998.
KE. Kuwert: On solutions of the exterior Dirichlet problem for the
minimal surface equation, Ann. Inst. H. Poincaré (Anal. Non
Liéaire), vol. 10, no. 4, p. 445-451 (1993).
KrR. Krust: Remarques sur le problème extérieur de Plateau,
Duke Math. J., Vol. 59, pp. 161-173 (1989).
KTN. Kutev and F. Tomi: Existence and nonexistence for the exterior
Dirichlet problem for the minimal surface equation in the plane, Differential
Integral Equations, 11, no. 6, p. 917–928 (1998).
NJ. C. C. Nitsche: Vorlesungen über Minimalflächen,
Grundlehren der mathematischen Wissenschaften 199, Springer-Verlag, Berlin (1975).
OR. Osserman: A Survey of Minimal Surfaces, Van Nostrand Reinhold
Math. Studies 25, New York, 1969.
RJ. Ripoll: Some characterization, uniqueness and existence results
for euclidean graphs of constant mean curvature with planar boundary,
Pacific Journal of Mathematics, Vol. 198, N. 1, 175-196 (2001).
RTaJ. Ripoll, F. Tomi: On solutions to the exterior Dirichlet
problem for the minimal surface equation with catenoidal ends, Adv. Calc. Var.
7, no. 2, p. 205–226 (2014).
RTJ. Ripoll, F. Tomi: Group invariant solutions of certain partial
differential equations, Pacific J. of Math 315 (1), p. 235-254 (2021).
SSimons, J.: Minimal varieties in Riemannian manifolds. Ann. of
Math., 88, p. 62–105 (1968).
Ari João Aiolfi
Departamento de Matemática
Universidade Federal de Santa Maria
Santa Maria RS/Brazil
[email protected]
................................
Caroline Maria Assmann
Instituto de Matemática
Universidade Federal do Rio Grande do Sul
Porto Alegre RS/Brazil
[email protected]
................................
Jaime Bruck Ripoll
Instituto de Matemática
Universidade Federal do Rio Grande do Sul
Porto Alegre RS/Brazil
[email protected]
[email protected]
Graduate School of Science, Kyushu University, Motooka, Nishi-ku, Fukuoka 819-0395, Japan
[email protected]
Graduate School of Arts and Sciences, University of Tokyo,
Komaba, Meguro-ku, Tokyo 153-8902, Japan
[email protected]
Faculty of Science, Kyushu University, Motooka, Nishi-ku, Fukuoka 819-0395, Japan
A novel quantum algorithm for solving the Boltzmann-Maxwell equations of the 6D collisionless plasma is proposed.
The equation describes the kinetic behavior of plasma particles in electromagnetic fields and is known for the classical first-principles equations in various domains, from space to laboratory plasmas.
We have constructed a quantum algorithm for a future large-scale quantum computer to accelerate its costly computation.
This algorithm consists mainly of two routines: the Boltzmann solver and the Maxwell solver.
Quantum algorithms undertake these dual procedures, while classical algorithms facilitate their interplay.
Each solver has a similar structure consisting of three steps: Encoding, Propagation, and Integration.
We conducted a preliminary implementation of the quantum algorithm and performed a parallel validation against a comparable classical approach.
IBM Qiskit was used to implement all quantum circuits.
Quantum Calculation of Classical Kinetic Equations: A Novel Approach for Numerical Analysis of 6D Boltzmann-Maxwell Equations in Collisionless Plasmas Using Quantum Computing
A. Yoshikawa
July 31, 2023
==============================================================================================================================================================================
§ INTRODUCTION
The space plasma environment, extending from the Sun to the magnetosphere-ionosphere-atmosphere, includes regions of frozen conditions, zones of anomalous resistance caused by electromagnetic turbulence, interconnected regions characterized by weakly ionized gas systems in strong magnetic fields, coupled neutral-atmosphere chemical processes, and pure neutral-atmosphere collision systems.
Owing to their complex interactions, an inclusive understanding and forecasting of the space environment remains an elusive goal, even with the advancements in high-performance instrumentation and in-situ observation of satellites.
Therefore, it is imperative to develop space plasma simulations capable of providing comprehensive insights, ranging from local spatial domains to the global picture. Historically, the development of space plasma simulations has been constrained by computational time, memory capacity, and data storage limitations, resolving complex phenomena with restricted physics at local space scales.
In light of these constraints, space plasma simulations can be divided into two principal scale hierarchies. One approach endeavors to reproduce Macroscopic phenomena using a coarse approximation, whereas the other aims to recreate Microscopic phenomena derived from first principles.
Examples of the former include magnetohydrodynamics (MHD), while the latter include techniques such as particle-in-cell (PIC) or the Vlasov equation (hereafter referred to as the collisionless Boltzmann equation).
The choice between global simulation and comprehensive simulation of physical processes depends on the required space and time scales.
However, several thematic concerns have emerged that require simulation via coupling between scale hierarchies. For example, we describe the plasma instability of the current sheet and the initiation mechanism of magnetic reconnection.
The importance of kinetic effects resulting from ion-electron dynamics during the onset of magnetic reconnection has been demonstrated <cit.>.
To emulate this, a multi-hierarchical simulation with inter-domain coupling of MHD and PIC has been developed, which allows analysis of the influence of macroscopic dynamics on the microscopic physics of magnetic reconnection <cit.>. In contrast, the collisionless Boltzmann equation requires advanced numerical computation of the 6D distribution function in both space (3D) and velocity (3D) of the particles, and has traditionally been limited to the analysis of low-dimensional, low-resolution, or microscopic phenomena.
Given the susceptibility of direct methods to numerical diffusion, the more accurate electromagnetic Vlasov method has been designed and implemented<cit.>.
The considerable progress in its research has allowed the elucidation of numerous authentic physical phenomena through the use of full electromagnetic Vlasov simulation, notwithstanding certain limitations regarding dimensionality and lattice number, which depend on the availability of computational resources<cit.>.
Theoretically, the integration of a collision term into the Boltzmann-Maxwell equations provides a comprehensive representation of the collision effects present in the complex coupled magnetosphere-ionosphere-atmosphere system of the Earth.
However, the current state of simulation technology is such that the fluid equations incorporating these collision effects have not yet been successfully modeled. The effects resulting from ionospheric collisions affect a variety of facets, ranging from auroras to magnetospheric dynamics (e.g. <cit.>), and further lead to the manifestation of complex phenomena (e.g. <cit.>). Consequently, the collisional Boltzmann-Maxwell equations encompass a plethora of significant phenomena within their domain of interest that are relevant to space-earth electromagnetics. In an idealized scenario, the entirety of these phenomena could be computed using the collisional Boltzmann-Maxwell equations, eliminating the need for coupled simulations across scale hierarchies and the reliance on a variety of assumptions. However, performing high-order numerical computations for the first-principles collisional Boltzmann-Maxwell equation requires the establishment of extremely precise numerical methods, coupled with an enormous computational burden O(L^6) (where L is the number of lattices per spatial degree of freedom), which is currently unattainable even with the computational power of today's supercomputers.
In recent years, advances in quantum computing, both software and hardware, have demonstrated numerous advantages of quantum algorithms, such as those represented by <cit.>. Following Google's achievement of quantum supremacy in 2019 <cit.>, the pragmatic implementation of quantum computing in plasma simulation, weather forecasting, fluid simulation, and various other fields is attracting interest.
In numerical computation, the first paper on solving linear equations with a quantum computer, describing the so-called HHL algorithm <cit.>, was published.
Subsequently, quantum algorithms for linear ordinary differential equations (ODEs) <cit.> and for partial differential equations (PDEs) <cit.>, as well as many for fluid simulations, have been reported in recent years <cit.>.
The employed methodologies vary considerably. Some use quantum computational versions of the lattice gas model <cit.> or the lattice Boltzmann method <cit.>, some use quantum Fourier transforms to solve the Poisson equation, some combine the HHL algorithm with Hamiltonian simulations, others reduce PDEs to ODEs and solve the resulting nonlinear ODEs, and so on.
Among them, the quantum lattice Boltzmann method is constructed by considering the streaming operation as Quantum Walk <cit.><cit.>.
Similarly, a quantum algorithm for the Dirac equation was proposed <cit.>, using the similarity of a sequence of time-evolving operations to Quantum Walk.
And Todorova et al. developed a quantum algorithm for the collisionless Boltzmann equation that performs discrete real and discrete velocity space propagation by Quantum Walk using a discrete-velocity method <cit.>.
We consider that this method has an advantage over other quantum differential equation solving methods in that it is easier to introduce first-principles collision terms.
* Collisionless Boltzmann-Maxwell equations with constant velocity u and the electromagnetic fields E, B under vacuum conditions acting one way:
∂ f/∂ t + u_const·∂ f/∂x + q/m(E+u_const×B)·∂ f/∂v = 0,
∇^2 E-1/c^2∂^2 E/∂ t^2=0,
∇^2 B - 1/c^2∂^2 B/∂ t^2=0.
We developed a quantum algorithm for the 6D Boltzmann-Maxwell equations for collisionless plasmas under the above conditions based on the efficient quantum walk circuit<cit.>. In this process, we calculated the time evolution problem of the 6D distribution function with the addition of velocity space, referring to the quantum algorithms for the discrete velocity method in the Boltzmann equation<cit.> and the Macro step in the Navier-Stokes equations<cit.>. Thus, the implementation of the collision term, which is the final goal of our project, is much easier and can be developed step by step.
Furthermore, according to our quantum algorithm, it is simpler and computationally less expensive to solve all regions with the collisionless Boltzmann-Maxwell equations than with macro-micro hierarchically coupled simulators. Exploiting the quantum computer's most important advantage, the lattice information in the spatial directions is parallelized into a single state function by amplitude embedding. The results show that the Quantum Volume, as the scale of the quantum circuit, is of order O(N_t(log_2(L))^2), which is an improvement over the computational cost O(N_tL^6) of a similar classical algorithm. In the future, we will develop a quantum algorithm for the collisional Boltzmann-Maxwell equations and apply it to the plasma region from the Sun to the Earth's magnetosphere-ionosphere-atmosphere. Thus, this will provide a framework for understanding and fully predicting the space plasma environment. At that time, we expect the device to be a future fault-tolerant large-scale quantum computer. This paper develops the first quantum algorithm for this purpose and summarizes the methodology and verification results. This paper is organized as follows: Sections <ref> and <ref> describe the numerical model, Section <ref> describes our quantum algorithm for the Boltzmann solver, and Section <ref> compares and verifies the results of the quantum algorithm against a similar classical algorithm. In Section <ref>, we discuss current issues and future solutions.
§.§ Governing equations
We employ the collisionless plasma Boltzmann and Maxwell equations within an electromagnetic field as governing equations.
Specifically, these equations are given by
* The collisionless plasma Boltzmann equation with an electromagnetic field:
∂ f/∂ t + u_const·∂ f/∂x + q/m(E+u_const×B)·∂ f/∂v = 0,
* Wave equation for the electric field E in vacuum:
∇^2 E-1/c^2∂^2 E/∂ t^2=0,
* Wave equation for the magnetic field B in vacuum:
∇^2 B - 1/c^2∂^2 B/∂ t^2=0.
where f is the distribution function of the plasma particles, u is the fluid velocity of the plasma, which we assume to be constant, q/m is the charge-to-mass ratio of the particles, and E and B are the electromagnetic fields. The Maxwell equations can be rewritten in the form of wave equations for the electric and magnetic fields, respectively, as above, in order to implement the quantum algorithms more efficiently.
§.§ Numerical simulation method
For the execution of nonlinear partial differential equations (<ref>,<ref>,<ref>) on quantum computers, these equations require discretization by methods such as the finite difference technique or the finite element method. In the following discourse, the finite difference approach is adopted for the Boltzmann-Maxwell equation, resulting in difference equations that are implementable on quantum circuits.
Proceeding with the application of the Forward Time Centered Space (FTCS) scheme, we discretize the Boltzmann equation for collisionless plasma and derive its difference representation. The difference equation for the governing equation (<ref>) is given by
f(x,y,z,v_x,v_y,v_z;t+Δ t) = f - u_xΔ t/2Δ x(f_x+Δ x -f_x-Δ x) - u_yΔ t/2Δ y (f_y+Δ y -f_y-Δ y ) -u_zΔ t/2Δ z(f_z+Δ z - f_z-Δ z)
- q(E+u_const×B)_xΔ t/2mΔ v_x(f_v_x+Δ v_x- f_v_x-Δ v_x )
-q(E+u_const×B)_yΔ t/2mΔ v_y( f_v_y+Δ v_y- f_v_y-Δ v_y)
- q(E+u_const×B)_zΔ t/2mΔ v_z(f_v_z+Δ v_z-f_v_z-Δ v_z),
where the value of f(x,y,z,v_x,v_y,v_z;t), namely the distribution function at the reference point x,y,z,v_x,v_y,v_z and time t, is simply denoted as f, and the same at the point deviating by one unit distance in each direction is denoted with subscripts: (e.g.) f_x+Δ x := f(x+Δ x,y,z,v_x,v_y,v_z;t). We simplify the difference Boltzmann equation with the following assumption:
u_xΔ t/2Δ x = u_yΔ t/2Δ y = u_zΔ t/2Δ z = 1.
Similarly, the difference equations for the electric and magnetic fields are given as
E(x,y,z;t+Δ t) = (2-2Δ t^2/μ_0 ϵ_0(1/Δ x^2+1/Δ y^2+1/Δ z^2))E-E_t-Δ t
+Δ t^2/μ_0 ϵ_0(1/Δ x^2(E_x+Δ x+ E_x-Δ x)+1/Δ y^2(E_y+Δ y+ E_y-Δ y)+1/Δ z^2(E_z+Δ z+ E_z-Δ z)),
B(x,y,z;t+Δ t) = (2-2Δ t^2/μ_0 ϵ_0(1/Δ x^2+1/Δ y^2+1/Δ z^2)) B-B_t-Δ t
+Δ t^2/μ_0 ϵ_0(1/Δ x^2(B_x+Δ x+ B_x-Δ x)+1/Δ y^2(B_y+Δ y+ B_y-Δ y)+1/Δ z^2(B_z+Δ z+ B_z-Δ z)),
where quantities such as E and B are defined in the same manner as f above.
Furthermore, for simplicity of notation, we set hereafter as the Lorentz force term as
F := u×B .
Also, the speed of light c in equation (<ref>,<ref>) is rewritten here using the permittivity and the permeability (ϵ_0 and μ_0) in the vacuum.
Similar to the Boltzmann equation example, we make the following assumption:
Δ t^2/μ_0 ϵ_0 Δ x^2 = Δ t^2/μ_0 ϵ_0 Δ y^2 = Δ t^2/μ_0 ϵ_0 Δ z^2 = 1 .
Under the postulates of this manuscript, no velocity is obtained from the first-order velocity moment of the distribution function. Given the use of uniform velocities in both the temporal and spatial domains, the discretized magnetic field equation transforms into the propagation equation of the Lorentz force term.
As a result, we obtain the discretized Boltzmann-Maxwell equations to be implemented as follows:
f(x,y,z,v_x,v_y,v_z;t+Δ t) = f - (f_x+Δ x -f_x-Δ x) - (f_y+Δ y -f_y-Δ y ) -(f_z+Δ z - f_z-Δ z)
- q(E+F)_xΔ t/2mΔ v_x(f_v_x+Δ v_x- f_v_x-Δ v_x ) -q(E+F)_yΔ t/2mΔ v_y( f_v_y+Δ v_y- f_v_y-Δ v_y)
- q(E+F)_zΔ t/2mΔ v_z(f_v_z+Δ v_z-f_v_z-Δ v_z),
E(x,y,z;t+Δ t) = -4E-E_t-Δ t+ E_x+Δ x + E_x-Δ x+ E_y+Δ y + E_y-Δ y + E_z+Δ z + E_z-Δ z,
F(x,y,z;t+Δ t) = -4F-F_t-Δ t+ F_x+Δ x + F_x-Δ x+ F_y+Δ y + F_y-Δ y + F_z+Δ z + F_z-Δ z.
This allows us to evolve the values of f and (E, B) independently. We call the quantum routines that perform this evolution the Boltzmann solver and the Maxwell solver, respectively.
For the evolution of f (Boltzmann solver), we need the values of E and F at each time step as they appear in the right-hand side of the equation (<ref>), so we use the values obtained by the Maxwell solver.
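For reference, a minimal classical sketch of this one-step update in NumPy is given below, assuming the unit-coefficient convention uΔt/(2Δx) = 1 for the spatial terms and periodic boundaries; the array shapes, acceleration field, and parameter values are illustrative placeholders, not the configuration used in the paper.

```python
import numpy as np

# Minimal classical sketch of the discretized collisionless Boltzmann update.
# Axis order of f: (x, y, z, vx, vy, vz); all names and values are illustrative.
L = 8
f = np.ones((L,) * 6)

def boltzmann_step(f, accel, dt, dv):
    """One FTCS step with periodic boundaries.
    accel[k] = (q/m)*(E + u x B)_k on the spatial grid, shape (L, L, L)."""
    out = f.copy()
    for axis in range(3):                       # spatial advection, coefficient fixed to 1
        out -= np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)
    for k, axis in enumerate(range(3, 6)):      # velocity-space advection (Lorentz force)
        coef = accel[k][..., None, None, None] * dt / (2.0 * dv)
        out -= coef * (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis))
    return out

accel = np.zeros((3, L, L, L))                  # vanishing field, as a smoke test
f = boltzmann_step(f, accel, dt=1e-7, dv=1e5)
```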
§ QUANTUM ALGORITHM
In this section, a quantum algorithm based on the discretized Boltzmann-Maxwell equations (<ref>,<ref>,<ref>) is constructed and implemented on quantum circuits. This quantum algorithm can be divided into two independent routines: the Boltzmann solver and the Maxwell solver.
They take an initial function of f and (E, B) as input, respectively.
Both routines fix time and output physical quantities that evolve in one time step according to difference equations (<ref>,<ref>).
By iterating this one-step evolution many times, we can obtain the value of a physical quantity that has evolved for an arbitrary number of time steps. The electric and magnetic fields derived by the Maxwell solver are incorporated into the Propagation circuit of the Boltzmann solver as shown in FIG. <ref>, thereby coupling the two routines.
The quantum calculations in this paper are carried out exactly in a way that deals with state vectors using a classical simulator provided by IBM Qiskit.
It is straightforward to construct an authentic quantum algorithm based on measurements.
§.§ Boltzmann
Our Boltzmann solver can be segmented into three principal steps: Encoding, Propagation and Integration.
§.§.§ Encoding
First of all, it is necessary to encode the classical information of the physical quantities into the amplitudes of quantum states.
Fixing the number of lattice sites in all spatial and velocity directions to be L, f will have V := L^6 degrees of freedom. In the encoding step, we associate each of these degrees of freedom with one computational basis state and encode the value of f in the amplitude of the corresponding quantum state. Thus, a total of V basis states must be prepared, requiring ⌈log_2 V ⌉ qubits. This method of encoding classical information into quantum amplitudes is commonly referred to as the amplitude embedding technique. To elucidate the relationship between physical quantities and probability amplitudes, the following conversion from a function f(x,v;t) to a vector f_i , (0 ≤ i ≤ V-1) is implemented.
The subscripts i specify a point in the 6D lattice space. For example, i = 0 corresponds to the origin point (x,v) = (0,0,0,0,0,0), and i = 1 represents the value of the distribution function moved by one lattice point in the x direction: (x,v) = (Δ x,0,0,0,0,0). Namely, the amount of f_i follows
(e.g.) f_0 = f(0,0,0,0,0,0;t=t_r),
f_1 = f(Δ x,0,0,0,0,0;t=t_r).
Note that the quantum state does not contain any information about time, since the propagation takes place with fixed time.
We will assume L = 2^N_L in the following. As evidenced in Section <ref>, our actual numerical calculations are executed with N_L = 3 (L=8).
The first important algorithm in the Encoding step is, given a distribution function at a fixed t = t_r, to prepare a quantum state, which we name |ϕ_0⟩, with these values in its amplitudes:
|ϕ_0⟩ = ∑_i=0^V-1f̃_i |i⟩,
where f̃ is the normalized distribution function as follows:
f̃_i = C f_i , C = (∑_i=0^V-1 |f_i|^2 )^-1/2.
At the initial time step of t = 0, an arbitrary distribution can be designated as an initial function. Post the second step, the distribution function generated by the Boltzmann solver in the prior step ought to be provided as input. This iterative process allows for the computation of the distribution function at any desired time step.
This procedure of state preparation can be executed in alignment with Appendix <ref>. It should be noted that, within the context of this manuscript, we have formulated the algorithm in a manner that measures f after each step and re-encodes it in the subsequent step, in order to circumvent excessive enlargement of the quantum circuit's depth. This design necessitates O(V) measurements at every time step, forfeiting the advantage of the quantum algorithm. However, it is straightforward to connect the time steps seamlessly; namely, no measurements are required between time steps, implying that such a design will be beneficial when managing large-scale quantum devices in the future. Further discussion on quantum advantage will be given in later sections. The qubits prepared within this context are termed the physical qubits, denoted as |phys⟩.
Looking more closely, |phys⟩ is composed of a total of 6 closed Hilbert spaces corresponding to the spatial and velocity degrees of freedom, each having N_L (=log_2 L) qubits. Namely, we write it as
|phys⟩ = |x⟩⊗|y⟩⊗|z⟩⊗|v_x⟩⊗|v_y⟩⊗|v_z⟩.
From the Propagation step onward, the quantum algorithms necessitate additional qubits, which, depending on their role, are identified as either subnode qubits or ancilla qubits. As will be explained later, the numbers of subnode and ancilla qubits are fixed to 4 and 1, respectively, regardless of the parameters and physical setup. Thus, the numbers of qubits required by the Boltzmann solver are
N_phys = 6 N_L , N_sub = 4 , N_anc = 1,
and the following quantum state is prepared and output after this Encoding step:
|ϕ_1⟩ = |ϕ_0⟩⊗|0⟩⊗|0⟩,
= ∑_i = 0^V-1f̃_i |i⟩|0⟩|0⟩.
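A minimal sketch of this Encoding step in Qiskit, using the built-in initialize instruction for amplitude embedding, could look as follows; the distribution function below is a random placeholder, and N_L is kept small so that the example stays lightweight.

```python
import numpy as np
from qiskit import QuantumCircuit

N_L = 1                                   # log2 of lattice points per direction (toy value)
n_phys, n_sub, n_anc = 6 * N_L, 4, 1      # register sizes as in the text

f = np.random.rand(2 ** n_phys)           # placeholder distribution function f_i
f_tilde = f / np.linalg.norm(f)           # normalization constant C

qc = QuantumCircuit(n_phys + n_sub + n_anc)
qc.initialize(f_tilde, list(range(n_phys)))   # |phi_0> on the physical register
# the subnode and ancilla qubits remain |0>, giving |phi_1> = |phi_0>|0>|0>
```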
§.§.§ Propagation
In the Propagation step, we partially utilize the techniques of the quantum algorithm method of <cit.> and implement an algorithm that multiplies each probability amplitude of |ϕ_1⟩ by the coefficient of the corresponding term in the discretized equation.
To solve the evolution equation (<ref>), we need to prepare and add up all the terms that arise in the equation such as
f, ∓ f_x±Δ x,⋯, ∓q(E+F)_xΔ t/2mΔ v_xf_v_x±Δ v_x,⋯.
After passing through the encoding step, we are now in possession of a quantum state |ϕ_1⟩, within which the data of the distribution function are encoded in the amplitude. Therefore, by considering an algorithm that multiplies each coefficient such as q(E+F)_xΔ t/2mΔ v_x by the corresponding state, the amplitudes of all states are updated to the state with the appropriate coefficient appearing in equation (<ref>). We will deal with the explicit sign in the equation later. The values of E and F at the certain time step are obtained from Maxwell solver.Subnodes serve to identify the terms that arise at a specific time step, namely f, f_x±Δ x,⋯, f_v_x±Δ v_x⋯. In total, there are 13 (= 1+ 2× 6) terms: one term f, which precedes propagation, and terms propagated by each ± 1 unit for each of the six directions in space and velocity. Hence, 4 (= ⌈ 13 ⌉) qubits are necessitated as a subnode. It should be noted that this number remains uninfluenced by physical quantities like volume. For simplicity, we have associated them as depicted in TABLE <ref>. Here, ϵ_j is the coefficient applied to each term, and σ_j is the sign explicitly attributed to each term in TABLE <ref>.
In fact, both ϵ_j and σ_j are coefficients in the difference equation (<ref>), so it is possible to define ϵ_j to include the sign σ_j. However, we choose to distinguish between them because ϵ_j represents a quantity that depends on specific assumptions (as indicated by the assumptions (<ref>,<ref>)), while σ_j is a universally determined quantity. By making this distinction, we can minimize the part that needs to be modified under different assumptions.
As elucidated below, the coin operator is accountable for the multiplication of these coefficients, and the shift operator assumes responsibility for correlating each term with the basis of the subnode.
We can create the appropriate coefficients by first putting the subnodes into superposition using H gates, and then applying the diagonal matrix with {ϵ} as components:
Λ := diag(ϵ_0, ϵ_1, ⋯, ϵ_15 ).
The operation with this diagonal matrix is not a unitary and thus it must be embedded in a unitary matrix of larger size. Since the coefficients are real, this procedure can be done easily as explained in the Appendix <ref>. Here, we use the ancilla qubit | a_0 ⟩ to create a unitary matrix of larger size. We call this whole operator acting on the subnode (and the ancilla qubit) the “coin operator” according to the terminology of quantum walk. As a result, we obtain the state after operating the coin operator as follows:
U_Coin |ϕ_1⟩ = ∑_i=0^V-1 U_Coinf̃_i |i⟩|0⟩|0⟩,
= ∑_i=0^V-1∑_j=0^15f̃_i ϵ̃_j |i⟩|j⟩|0⟩+ |*⟩|1⟩,
where ϵ̃ represents the normalized quantity. |*⟩ represents the computationally unnecessary states, which are identified by the ancilla qubit being |1⟩.
Next, so-called increment/decrement gates are applied on both subnode and physical qubits to associate the basis of subnode and physical amount at different points.
The increment/decrement gates are operators that shift one computational basis, respectively. Specifically, those operator satisfy
U_Incr. | i ⟩ = | i+1 ⟩,
U_Decr. | i ⟩ = | i-1 ⟩.
Suppose the periodic boundary condition on the N-qubits system:
U_Incr. | 2^N-1 ⟩ = | 0 ⟩,
U_Decr. | 0 ⟩ = | 2^N-1 ⟩,
these operators follow the relation: U_Incr.^† = U_Decr..
The increment circuit can be specifically configured as follows.
[Quantum circuit diagrams for U_Incr. and U_Decr.: ladders of (multi-)controlled X gates; figure omitted.]
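A sketch of such an increment circuit in Qiskit is given below; qubit 0 is taken as the least significant bit, and this construction is one standard realization rather than necessarily the exact circuit used here.

```python
from qiskit import QuantumCircuit

def increment(n):
    """n-qubit increment U|i> = |i+1 mod 2^n>: flip each bit controlled on all
    lower bits being 1, working from the most significant bit downwards."""
    qc = QuantumCircuit(n, name="U_incr")
    for target in reversed(range(1, n)):
        qc.mcx(list(range(target)), target)
    qc.x(0)
    return qc

def decrement(n):
    """U_decr = U_incr^dagger."""
    return increment(n).inverse()
```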
By performing controlled-increment/decrement gates with the subnode as control register and the physical qubits as target registers, we can map each subnode state to the physical quantity at the corresponding lattice point. We call this sequence of operations the "shift operator". The circuit of the shift operator is shown in FIG.<ref>.
As a result, after applying both the coin operator and the shift operator, we obtain the following state as a final output of this propagation step:
|ϕ_2⟩ = ∑_i=0^V -1∑_j=0^15ϵ̃_jf̃_i,j |i⟩|j⟩|0⟩+ |*⟩|1⟩.
The exact relation between f̃_i and f̃_i,j can be stated as follows. The index i can be written as i = sL + t, (0≤ s < 6, 0≤ t < L), where s specifies the direction (x for s=0, y for s=1, and so forth) and t denotes the coordinate along the corresponding direction.
The shift operator moves computational bases in each subspace by ± 1, respecting periodic boundary conditions in each orientation. This ± 1 direction is specified by the index j as shown in TABLE <ref>. Therefore, f̃_i,j can be represented as follows:
f̃_i,j = f̃_ sL + (t+(-1)^j)modL,
when i=sL + t, (0≤ s < 6, 0≤ t < L).
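As an illustration, the controlled shifts could be assembled in Qiskit as follows, using the increment/decrement helpers from the previous sketch. The mapping from the subnode value j to direction and sign is fixed by TABLE <ref>; the particular assignment used below (directions ordered x, y, z, v_x, v_y, v_z, odd j incrementing and even j decrementing) is only an assumption for this sketch.

```python
from qiskit import QuantumCircuit, QuantumRegister

# increment()/decrement(): helpers from the previous sketch; N_L kept small as before
N_L = 1
phys = QuantumRegister(6 * N_L, "phys")
sub = QuantumRegister(4, "sub")
shift = QuantumCircuit(phys, sub, name="shift")

for j in range(1, 13):                     # j = 0 (the unshifted term f) needs no gate
    direction = (j - 1) // 2               # 0..5 -> x, y, z, vx, vy, vz (assumed ordering)
    step = increment(N_L) if j % 2 else decrement(N_L)   # parity-to-sign map is illustrative
    gate = step.to_gate().control(4, ctrl_state=j)       # act only when the subnode reads |j>
    shift.append(gate, list(sub) + list(phys[direction * N_L:(direction + 1) * N_L]))
```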
§.§.§ Integration
Passing through the Encoding and Propagation steps so far, we obtain a state in which all 13 terms arising on the right-hand side of equation (<ref>) for a fixed time step are encoded in the amplitudes of the basis states.
In this step, we superpose the subnode states to compute the sum of all terms and collect them into the amplitude of the single state |0000⟩. However, as a preprocessing step, we need to invert the phases of certain states, as explained below. The amplitude of each basis state has been multiplied by the coefficient from the difference equation (<ref>) excluding the explicit sign, which is denoted by σ_j in TABLE <ref>. Therefore, we need to invert the phase of the corresponding state for each term with a minus sign. This process is very simple and only requires one application of a Z gate, as shown in circuit (<ref>), before applying the H gates. Finally, we superimpose all subnode states by applying H gates as shown in circuit (<ref>).
[Circuit (<ref>): a Z gate followed by an H gate on the first subnode qubit, and H gates on the remaining three subnode qubits.]
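In Qiskit this step is essentially two lines; which subnode qubit carries the Z gate depends on the bit-ordering convention of TABLE <ref>, so the choice of sub[0] below is illustrative only.

```python
from qiskit import QuantumCircuit, QuantumRegister

sub = QuantumRegister(4, "sub")
integ = QuantumCircuit(sub, name="integration")
integ.z(sub[0])       # phase inversion supplying the explicit signs sigma_j (assumed qubit)
integ.h(sub)          # Hadamards gather the weighted sum into the amplitude of |0000>
```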
As a result, the amplitudes of the states from |0000⟩ to |1111⟩ are summed and gathered into the amplitude of the |0000⟩ state with an equal weighting of 1/4.
Therefore, we finally obtain the following state
|ϕ_3⟩ = 1/4∑_i=0^V-1∑_j=0^12σ_jϵ̃_jf̃_i,j |i⟩|0000⟩|0⟩ + |*⟩|1⟩ .
With more clear form, we can write
∑_j=0^12σ_jϵ̃_jf̃_i,j ∼ f - (f_x+Δ x -f_x-Δ x) - (f_y+Δ y -f_y-Δ y ) -(f_z+Δ z - f_z-Δ z)
- q(E+F)_xΔ t/2mΔ v_x(f_v_x+Δ v_x- f_v_x-Δ v_x ) -q(E+F)_yΔ t/2mΔ v_y( f_v_y+Δ v_y- f_v_y-Δ v_y) - q(E+F)_zΔ t/2mΔ v_z(f_v_z+Δ v_z-f_v_z-Δ v_z),
= f(x,y,z,v_x,v_y,v_z;t+Δ t),
where the distribution function is taken at the point (x,y,z,v_x,v_y,v_z) corresponding to the index i. Since the normalizing factors of f and ϵ are involved here, the relation is denoted as “∼”. According to the resultant state |ϕ_3⟩, we can measure the physical and subnode qubits and focus on the outcomes with the subnode and ancilla equal to 0 to obtain a distribution function evolved by one time step according to the Boltzmann-Maxwell equation. For further time steps, we can use this distribution function as the initial value input to the first encoding step, and further time evolution can be implemented by performing similar steps. Here are some remarks on this algorithm; most of what is touched on here will be discussed more comprehensively in Section <ref>.
First, we asserted that the measurement of the state delivers the value of the distribution function; however, what is specifically attained is the square of the absolute value of the distribution function. Nevertheless, given that the value of the distribution function f is consistently real and non-negative, the precise value of f can be accurately recovered from the measurements.
On the other hand, E and B handled by Maxwell solver in Appendix <ref> are real but also have negative values, so not exactly the same algorithm can be used.
However, during computation with real quantum algorithms, there isn't a genuine necessity to measure the values of E and B. The primary function of the Maxwell solver is simply to convey these values to the Boltzmann solver within the quantum circuit, hence this does not present a significant issue.
If one wants to measure the E and B values as well, a further ancilla node that identifies the sign must be prepared, and an additional quantum oracle is also needed. Next, actually measuring f does not lead to quantum advantage.
However, this problem can be avoided because what we are physically interested in is not f itself, but the velocity moment quantity obtained by integrating f with respect to velocity v.
If we could implement this integral, i.e., just a sum in the discrete system, in an efficient quantum algorithm, the computational complexity would be superior to that of a naive classical algorithm.
Furthermore, we believe that it is possible to reduce the Hilbert space to be measured based on physical conditions such as uniformity with respect to a certain spatial direction, limiting the measurement to the physical space of interest, etc.
§ COMPARISON
In this paper, all quantum circuits were exactly simulated by dealing directly with statevectors. Thus it is expected that the results will be in exact agreement with numerical calculations using conventional classical algorithms.
We prepared L=8 lattice sites in each spatial and velocity direction and calculated with the volume V = 8^6. As for the quantum algorithm 6×⌈log_2 L⌉ = 18 qubits were used as |phys⟩.
And we set Δ x=Δ y=Δ z=30 m, Δ t=10^-7s, satisfying the assumption (<ref>). Thereby, v_x=v_y=v_z=3× 10^8m/s is constant at the speed of light. The plasma particles are assumed to be positrons and set e=1.6× 10^-19C,m_e= 9.1× 10^-31kg, so we put Δ v_x=Δ v_y=Δ v_z=10^5m/s.
In this section, for simplicity, we re-scale variables x, y, ⋯ dividing by the unit Δ x, Δ y, ⋯ and denote them as coordinates on a lattice space. That is, x = n denotes the point where x = n Δ x physically.
§.§ Initial condition
As the initial distribution function, we employed a simple setup: we set 0 for (x=1,y=1) or (v_x=1,v_y=1), and set 1 for the other spaces. Namely,
. f(x,y,z,v_x,v_y,v_z;t=0)|_x=1 ∩ y=1 = 0,
. f(x,y,z,v_x,v_y,v_z;t=0)|_v_x=1 ∩ v_y=1 = 0,
f(x,y,z,v_x,v_y,v_z;t=0) = 1 (otherwise).
This is a simple setup to compare the agreement with the classical algorithm, and in practice it is necessary to give a suitable initial condition corresponding to considering physical phenomena such as plasma.
Since we implemented the increment/decrement circuits periodic (<ref>), the simulation results are also periodic so that the 0-th and L-th lattice points are identical for all directions.
§.§ Simulation result
We implemented our quantum algorithm with the input conditions and advanced time evolution from time step = 0 to time step = 3.
Comparing FIG. <ref> and FIG. <ref>, the simulation results of the quantum algorithm perfectly match those of the classical algorithm with similar conditions and methods. This is because we are simulating exactly with the statevector in this case; actual results based on measurements will have statistical errors depending on the number of shots. Although f should take values between 0 and 1, this is not the case in FIG. <ref> and FIG. <ref>.
This is a consequence of numerical diffusion due to discretization using the FTCS scheme, which occurs universally in classical algorithms.
As noted in the discussion, the numerical diffusion is reduced by O(Δ t) in the time direction and O((Δ x)^2) in the space direction, so it is guaranteed to give correct results if the calculation is performed on a sufficiently large system.
The propagation in real space and velocity space is different, showing that it is acted upon by the electromagnetic field solved with the Maxwell solver. We achieved one of our goals in this paper, that is, the coupling of the Boltzmann equation and the Maxwell equation. However, note that this is a unilateral interaction from the Maxwell equation, since the assumption of uniform velocity and vacuum condition is used.
§ DISCUSSION
Our plasma simulator is not yet able to cover generic phenomena according to the governing equations (<ref>,<ref>,<ref>). This paper is in the middle stage of our project. This means that our plasma simulator does not yet account for velocity inhomogeneity in the convective term of the distribution function, the interaction between electromagnetic fields and plasma particles, and the collisional effects. To add these physical effects, new quantum algorithms must be developed.
* Self-consistent collisionless Boltzmann-Maxwell equations interacting with the electromagnetic field by calculating ρ charge density, velocity, and j current density in moment quantities of the distribution function:
∂ f/∂ t + v·∂ f/∂x + q/m(E+v×B)·∂ f/∂v = 0,
∇^2 E-1/c^2∂^2 E/∂ t^2=1/ϵ_0∇ρ+μ_0∂j/∂ t,
∇^2 B - 1/c^2∂^2 B/∂ t^2=-μ_0(∇×j).
The next stage will be to improve the current quantum algorithm to the quantum algorithm for the collisionless Boltzmann-Maxwell equation described above.
To do this, a quantum algorithm that calculates the velocity moments of the distribution function should be developed. Thereby, the electromagnetic field and plasma particles can interact with each other via velocity inhomogeneity, charge density, and current density. This stage can simulate all the complex kinetic effects of collisionless plasma in an electromagnetic field; it simulates macroscopic MHD phenomena that reflect kinetic effects as micro phenomena. In other words, even macroscopic phenomena can be traced back to microscopic phenomena, thus contributing to the complete understanding and prediction of the physical process. The domain covers space plasmas in space planetary science, such as the solar surface and the Earth's magnetosphere, and astrophysics, such as black hole accretion disks and interstellar winds.
* Self-consistent collisional Boltzmann-Maxwell equations interacting with an electromagnetic field, with the addition of a first-principles collision term:
∂ f/∂ t + v·∂ f/∂x + q/m(E+v×B)·∂ f/∂v = Col(f,f^'),
∇^2 E-1/c^2∂^2 E/∂ t^2=1/ϵ_0∇ρ+μ_0∂j/∂ t,
∇^2 B - 1/c^2∂^2 B/∂ t^2=-μ_0(∇×j).
Furthermore, in the final stage, this quantum algorithm will be improved into a quantum algorithm that computes the collision term from the distribution function. By adding a first-principles collision term, the domain of coverage is further extended. It covers the highly complex collisional effects of space plasma versus neutral atmospheres, simulating the ionospheric dynamics of various planetary systems; without the Maxwell solver, it calculates non-equilibrium states of rarefied gases from first principles; applying only the Boltzmann solver, it solves problems of neutrinos and bubble structure in the universe.
We used a finite difference FTCS scheme as our numerical model; the FTCS scheme has numerical errors on the order of O(Δ t), O(Δ x_i^2) and O(Δ v_x_i^2) per time evolution.
Previously, 6D Vlasov simulation research using classical computers has been able to allocate only L∼100 (L: lattices per spatial degree of freedom), even using supercomputers.
Therefore, simple numerical methods such as the FTCS scheme are not very appropriate for classical algorithms because of the large numerical errors.
However, in the case of quantum computation with a large-scale quantum computer in a domain that is impossible with a classical computer, the number of lattices per spatial degree of freedom (≫100 lattices) is a very large quantity, and thus the numerical error is inevitably very small.
For example, we estimate that L>10^6 is needed to simulate the auroral electron acceleration problem in the magnetosphere-ionosphere.
For that very large L, the numerical error from the FTCS scheme is small enough.
Moreover, since L increases exponentially with a linear increase in hardware logical qubits, the speed of expansion and growth of the computational domain and the speed of improvement in accuracy become exponential.
The greatest advantage of quantum algorithms over classical algorithms is massive parallelization. We estimate the Quantum Volume of our quantum algorithm and describe the quantum advantage for the Boltzmann-Maxwell equations. For simplicity, we define Quantum Volume = width (number of qubits) × depth (number of gates) for our quantum algorithm.
The width of this quantum algorithm is 6log_2(L)+6, where L denotes the number of lattice points in each direction. Compared to the O(L) scaling per spatial degree of freedom of the classical algorithm, the fact that this information can be expressed with log_2(L) qubits is a quantum advantage.
On the other hand, the measured depths of the quantum circuits for L=2, L=4, and L=8 were found to be approximately 50× 12log_2(L) per time-evolution step. For time evolution up to time step N_t, the approximate Quantum Volume would therefore be 3600N_tlog_2(L)(log_2(L)+1). This is of the order of O(N_t(log_2(L))^2). Compared to the computational cost of a similar classical algorithm, O(N_t L^6), the order is improved by the compression of the 6D spatial information. Thus, the larger L is, the higher the quantum superiority.
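The following back-of-the-envelope script, using the rough fit quoted above and purely for illustration, shows how quickly the two scalings separate for a single time step:

```python
import math

# Illustrative comparison of the quoted circuit-size scaling with the classical cost.
for L in (8, 64, 1024, 2 ** 20):
    quantum = 3600 * math.log2(L) * (math.log2(L) + 1)   # ~ O((log2 L)^2) per step
    classical = float(L) ** 6                             # ~ O(L^6) lattice updates per step
    print(f"L = {L:>8}: quantum ~ {quantum:.2e}, classical ~ {classical:.2e}")
```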
Our quantum algorithms are intended for a future large-scale quantum computer, but there remain several issues in terms of efficient algorithms.
There is a problem of the efficient preparation of the initial distribution function on quantum circuits.
The Encoding step <ref> has the exponential complexity O(2^N) of preparing an arbitrary quantum state in a 2^N-dimensional Hilbert space with N qubits<cit.>.
This problem is an important topic in quantum computation, and various efficient methods have been proposed.
For example, Georgescu et al. developed an efficient method to prepare quantum states with polynomial complexity in the number of qubits<cit.>, and other efficient methods for specific cases, such as log-concave probability distribution functions, have been reported as well<cit.>.
Although the initial distribution function varies depending on the physical phenomenon to be simulated, the Maxwell velocity distribution function, for example, is a log-concave probability distribution function and may be efficiently prepared<cit.>.
Our quantum algorithm is more efficient than the classical algorithm in spatial information, but not in the time direction. The reason for this is that the finite difference method of a numerical computation does not allow time information to enter the width of quantum circuits.
The finite difference method is a time-marching-based method for classical numerical calculations using the forward term on the left side of the difference equation.
Due to its nature, one of the degrees of freedom must always be in the depth when implemented in a quantum computer.
Variables that are not set to width are not accelerated, so there are restrictions on the number of lattices with respect to the number of degrees of freedom that can be set to depth, even for large-scale quantum computation.
One simple way to improve this is to rewrite the difference equation of the finite difference method so that the smallest number of lattice degrees of freedom is the evolution parameter instead of time.
Although only one degree of freedom is restricted, this method can keep the depth relatively small. A common problem in quantum differential equation solving is the problem of vanishing measurement probabilities in time-marching-based methods. In general terms, quantum linear system algorithms have an exponentially decreasing measurement probability with respect to the number of time steps. The quantum algorithm in this study suffers from the same problem.
The first possible solution to this problem is the application of the compression gadget proposed by Fang et al<cit.>. This is a time-marching-based quantum differential equation solving method whose success probability is independent of the number of time steps, achieved by repeating uniform singular value amplification. They verified their implementation on linear ODEs, but it may be applicable to our PDEs.
Next, we also consider the use of different quantum differential equation solving methods as a solution.
Hamiltonian simulations are a common method for solving quantum differential equations, and the Vlasov-poisson and Vlasov-Maxwell equations have already been used<cit.>. While it is easy to implement the compression gadget <cit.> within a Hamiltonian simulation, we consider that it is difficult to implement the nonlinear Boltzmann-Maxwell equations with first-principles collision terms in a Hamiltonian simulation.
§ SUMMARY
In this paper, a novel quantum algorithm for solving the Boltzmann-Maxwell equation for collisionless plasmas has been formulated; both the Boltzmann and Maxwell equation solvers were structured with a similar quantum circuit.
To confirm the validity of our quantum algorithm, we performed simulations of the distribution function propagation process under the background electromagnetic field propagation using the Qiskit platform.
We compared the results of the quantum calculation with the results of the parallel classical calculation and found perfect agreement between them.
This completes the framework for efficiently solving nonlinear problems in various plasmas, such as space plasmas.
Prospective endeavors may cultivate the development of a more generalized quantum algorithm for the Boltzmann-Maxwell equation for collisional plasmas, wherein the vacuum condition is eliminated and first-principles collision terms are incorporated.
§ ACKNOWLEDGMENT
Discussions during the Yukawa Institute for Theoretical Physics (YITP) summer school YITP-W-22-13 on "A novel numerical approach to quantum field theories” were useful as we started this work.
HH would like to acknowledge the financial support of the Kyushu University Innovator Fellowship Program (Quantum Science Area).
The work of HH and AY is supported by JSPS KAKENHI Grant Numbers JP20H01961 and JP22K21345.
The work of JWP is supported in part by the JSPS Grant-in-Aid for Research Fellow Number 22J14732 and the JST SPRING, Grant Number JPMJSP2108.
§ MAXWELL SOLVER
The basic structure of the Maxwell solver is almost identical to that of the Boltzmann solver. Similar to the Boltzmann solver, the Maxwell solver consists of three steps: encoding, propagation, and integration. The algorithm is briefly described, with special emphasis on the differences to the Boltzmann solver.
§.§ Encoding
In the Maxwell solver, the physical quantities E and B are written together as g and are developed simultaneously according to equations (<ref>,<ref>).
Since there are no velocity degrees of freedom, only N_phys = 3⌈log_2 L ⌉ qubits are prepared for |phys⟩, and one additional qubit representing time is also prepared.
The subnode register requires N_sub = 6 qubits in this case. This is because we need N_species = 1 qubit to distinguish the physical quantity, namely E or B, N_direction =2 (=⌈log_2 3 ⌉) qubits to specify the vector component, and N_term = 3 (=⌈log_2 8 ⌉) qubits to indicate the 8 terms appearing in the equations (<ref>,<ref>).
Collectively, these are called subnodes, but their roles are actually divided as follows:
|sub⟩ → |species⟩⊗|direction⟩⊗|term⟩.
These correspondences are shown in Table <ref>, where ϵ and σ represent the coefficient and explicit sign of each term in the equations (<ref>,<ref>).
Therefore, using exactly the same algorithm as the Boltzmann solver, we obtain the following state as the outcome of this encoding step:
|ϕ_1⟩ = ∑_i = 0^V-1∑_s = 0^1∑_d = 0^2g̃_i,t,d |i⟩|0⟩|s⟩|d⟩|0⟩|0⟩,
where the subscript i indicates a lattice point using the same rules as in the Boltzmann solver, g_i,t,d are given in TABLE <ref>, and g̃ is normalized g. At the first time step we need to specify the initial values for g.
§.§ Propagation
The structure of the Propagation step in the Maxwell solver is fundamentally a Quantum Walk, similar to the Propagation in the Boltzmann solver. Thus we need to construct the coin operator and the shift operator. However, the elements of the coin operator, the time qubit, and the type of subnodes are different. Furthermore, the time increment circuit is used only with respect to the state |111⟩, in order to use the physical quantity of the previous time step. Therefore, in this section, the Propagation step generates the states corresponding to the terms propagated in space-time by using the increment and decrement circuits.
The coin operator acts on the subnodes.
U_coin |s⟩|d⟩|j⟩ = ϵ̃_s,d,j |s⟩|d⟩|j⟩,
where you can also find ϵ_s,d,j in TABLE <ref> and ϵ̃ is normalized ϵ.
One difference from the Boltzmann solver is that the right-hand side of the expression (<ref>,<ref>) contains a term g_i,t-1,s,d that also evolves in the time direction. This effect can be easily implemented by treating time as part of the spatial direction and applying the shift operator in the same way, but note that only the increment circuit is operated since the direction is only negative.
After operating the coin and the shift operator, we obtain the following state as the outcome of this propagation step:
|ϕ_2⟩ = ∑_i = 0^V-1∑_t = 0^1∑_s = 0^1∑_d = 0^2∑_j = 0^7ϵ̃_s,d,jg̃_i,t,s,d |i⟩|t⟩|s⟩|d⟩|j⟩|0⟩+ |*⟩|1⟩,
where g̃_i,t,s,d represents the shift of ± 1 unit in each spatial direction and in the temporal direction. As for the time direction, the state 1111 and the initial amplitude at 0000 are exchanged by the increment circuit (<ref>). The reason for this exchange is that the state of one previous time step is needed to generate the term that propagates in the time direction.
§.§ Integration
In contrast to the Boltzmann equation, the Maxwell equation is a second-order differential equation. As a result, the signs σ_j that appear in the corresponding difference equation (<ref>) differ from those in the Boltzmann equation (as shown in Table <ref>). In such cases, a controlled-inverse gate, shown as follows, should be applied prior to the superposition by the H gates:
[Circuit: a controlled gate with matrix diag(1, -1), i.e. a controlled phase inversion.]
The rest of the integration step can use the same method as the Boltzmann solver, but this time we are dealing with different physical quantities, E and B, in the same circuit, so we need to sum each of them and not confuse them.
As a result, we can specify the spatial lattice point (i) and the species, and obtain the time-evolved quantities E, B developed in the amplitude of the |000⟩ state.
§ CONSTRUCTION OF OUR COIN OPERATOR
In this section we consider an algorithm to multiply a vector to each quantum basis. Let Λ denote the multiplying vector:
Λ = ( λ_0, λ_1, ⋯, λ_M-1),
where we suppose that the {λ} take real values and that Λ is normalized: ∑_iλ_i^2 = 1. To implement this algorithm, we need to apply a diagonal matrix 𝒜 having entries corresponding to Λ, but this cannot be done directly because it is not a unitary operator in general.
Thus we realized this non-unitary operation by using one ancilla qubit and embedding the matrix 𝒜 in a unitary matrix with larger size, which is known as the block encoding method. As {λ} are always real, this procedure can easily be implemented as follows:
U = ([ 𝒜 ℬ; ℬ -𝒜 ]),
with
𝒜 = diag( λ_1, λ_2, ⋯),
ℬ = diag( √(1 - λ_1^2), √(1 - λ_2^2), ⋯).
After performing this unitary operation on an arbitrary state:
|ψ⟩ = ∑_iα_i | i ⟩_phys | 0 ⟩_anc,
we obtain the following state:
|ψ ^'⟩ = U |ψ⟩,
= ∑_iλ_iα_i | i ⟩_phys | 0 ⟩_anc + |*⟩| 1 ⟩_anc,
with which we can distinguish the desired/unnecessary states by the ancilla value 0/1.
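A compact numerical sketch of this embedding in NumPy, with a placeholder coefficient vector, illustrates that U is unitary and that the ancilla-|0⟩ block indeed applies 𝒜:

```python
import numpy as np

lam = np.array([0.5, 0.5, 0.5, 0.5])        # normalized real coefficients (placeholder)
A = np.diag(lam)
B = np.diag(np.sqrt(1.0 - lam ** 2))
U = np.block([[A, B], [B, -A]])
assert np.allclose(U @ U.T, np.eye(2 * len(lam)))   # unitary, since A and B are real diagonal

alpha = np.full(len(lam), 0.5)              # amplitudes of an arbitrary normalized state
psi = np.kron([1.0, 0.0], alpha)            # ancilla |0> tensor the physical state
print((U @ psi)[: len(lam)])                # ancilla-|0> block carries lambda_i * alpha_i
```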
|
http://arxiv.org/abs/2306.03927v2
|
20230606180008
|
Floquet time-crystals as sensors of AC fields
|
[
"Fernando Iemini",
"Rosario Fazio",
"Anna Sanpera"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.other",
"cond-mat.stat-mech"
] |
Instituto de Física, Universidade Federal Fluminense, 24210-346 Niterói, Brazil
Institute for Physics, Johannes Gutenberg University Mainz, D-55099 Mainz, Germany
The Abdus Salam International Center for Theoretical Physics, Strada Costiera 11, 34151 Trieste, Italy
Dipartimento di Fisica “E. Pancini", Università di Napoli “Federico II”, Monte S. Angelo, I-80126 Napoli, Italy
Física Teòrica: Informació i Fenòmens Quàntics, Universitat Autònoma de Barcelona, 08193 Bellaterra, Spain
ICREA, Pg. Lluís Companys 23, 08010 Barcelona, Spain
We discuss the performance of discrete time crystals (DTC) as quantum sensors. The long-range spatial and time ordering displayed by DTC, leads to an exponentially slow heating, turning DTC into advantageous sensors.
Specifically, their performance (determined by the quantum Fisher information) to estimate AC fields, can overcome the shot-noise limit while allowing for long-time sensing protocols. Since the collective interactions
stabilize their dynamics against noise, these sensors become robust to imperfections in the protocol. The performance of such a sensor can also be used in a dual role to probe the presence or absence of a many-body localized phase.
Floquet time-crystals as sensors of AC fields
Anna Sanpera
July 31, 2023
=============================================
Time crystals emerge as peculiar non-equilibrium phases featuring long-range time ordering. First proposed by Wilczek <cit.>, time crystals have since then been explored in different theoretical frameworks, as well as observed in various experimental platforms (for an overview of the field see Ref.<cit.>).
The first experimental observations of time-crystalline behaviour <cit.> have been realized in so-called Discrete (Floquet) Time Crystals (DTC), following earlier theoretical proposals <cit.>. A DTC is a periodically driven system, H(t)=H(t+ T_S), whose dynamics, as expressed by e.g. an order parameter, oscillates at a multiple of the driving period: O(t)=O(t+nT_S) (n∈ℤ; n>1); thus breaking the discrete time-translational symmetry of the system.
Besides their relevance in understanding fundamental aspects of quantum matter, time crystals may offer several advantages in quantum technological applications. Investigations in this direction are however
at an early stage. In this work we make a step in this direction by exploring their use in quantum metrology.
Quantum systems have proven to be powerful sensors for the detection of many different physical quantities, from electric and magnetic fields to frequencies, temperature, and gravitational fields, among others <cit.>. Their high sensitivity to external disturbances allows them to achieve unprecedented accuracy in metrological protocols. On the one hand, the quantum-enhanced sensitivity of such sensors can strongly rely on the correlations among their constituents. This can arise by exploiting the criticality of phase transitions, which enhances the correlation length of the system and its susceptibility to external fields <cit.>, or alternatively by using quantum entangled systems (e.g. GHZ-like states), allowing one to overcome the standard shot-noise limit (bounded by classical statistical correlations) and leading to the ultimate Heisenberg-limit precision <cit.>;
On the other hand, quantum correlations can inevitably heat up the system in non-equilibrium dynamics, leading to noise and instability in the measurements, thus deteriorating the sensor performance.
Therefore, possible sensors exploiting entanglement resources while robust to decoherence effects are highly promising for metrological tasks.
In this work we merge the two aforementioned concepts, discussing sensing with DTCs. Despite their peculiar characteristics, so far time crystals have rarely been considered for possible quantum applications <cit.>. Our work makes progress toward closing this gap, providing a concrete proposal for their use in AC field sensing. The estimation of AC fields has been a subject of intense debate, with approaches mainly based on the accumulation of the AC information either into dynamically decoupled spins by a series of pre-defined pulses <cit.>, on the non-equilibrium dynamics of an integrable many-body system <cit.>, or on the prethermal stabilization of highly entangled states in periodically driven systems <cit.>. We take a different route, exploiting DTC in disordered interacting systems for such a task, and show how they can improve the accuracy for estimating small (even infinitesimal) AC fields.
The schematic of our protocol is shown in Fig.<ref>(a), in which a DTC with internal frequency ω_S=2π/T_S is put in contact with an AC field with frequency ω_AC, whose amplitude h_ ac is to be estimated. We focus our discussion on estimating the amplitude of the field; nevertheless, the reasoning and analysis could be directly extended to the estimation of the AC frequency.
The long-range spatial and temporal ordering of the spin dynamics in the DTC allows optimal sensing over long time periods, along with enhancements due to the quantum interactions among the spins and an intrinsic robustness to noise - see Fig.<ref>(d). Specifically, optimal sensing can be reached once the DTC is tuned into period-doubling resonance with the AC field, ω_S = 2ω_AC. The uncertainty in estimating h_ ac decays with the duration of the sensing protocol faster than the classical limit up to either (i) the thermalization time of the system t_ th, which for a DTC can be exponentially large with the system size, or (ii) the AC characteristic time t_AC∼ h_ ac^-1, where the field is strongly settled in the system dynamics and one recovers the standard shot-noise limit. The DTC sensor therefore offers appealing advantages for the estimation of small AC fields.
The model. We start by defining our system, composed of N spins and described by the Hamiltonian
Ĥ(t) = Ĥ_0(t) + h_ acĤ_AC(t),
where the second term is the AC field signal with amplitude h_ ac, and
Ĥ_0(t) = Ĥ_0(t+T_S) represents the internal Hamiltonian for the sensor, whose interactions and periodicity could be tuned to optimize the estimation of the AC amplitude field. Specifically, we consider
Ĥ_0(t) = ∑_i( J_iσ̂_i^z σ̂_i+1^z + ∑_α=x,z h_i^ασ̂_i^α) +
+ ∑_n=1^∞δ(t-nT_S) ϕŜ^x,
and Ĥ_AC(t) = α(t) Ŝ^z, where Ŝ^β = ∑_i σ̂_i^β/2 is the total spin magnetization along the β direction, and α(t) = sin(ω_AC t + θ_AC) is the modulated AC field. The internal Hamiltonian is subject to a
periodic kick with frequency ω_S = 2π /T_S inducing global ϕ spin rotations, while in between the kicks it evolves as disordered interacting spins with J_i ∈ [-𝒥, +𝒥], h_i^z ∈ [-𝒲_z,+𝒲_z] and h_i^x ∈ [-𝒲_x,+𝒲_x] random variables drawn from a uniform distribution in the amplitude range 𝒥, 𝒲_z and 𝒲_x, respectively. In all our simulations we use initial separable states |ψ (0)⟩ = [cos(ϑ/2) |↑⟩ + e^i φsin(ϑ/2)|↓⟩ ]^⊗ N for the dynamics, with ϑ∈ [0, π/4] and
φ∈ [0, 2π] random phases, and average our results over n_ dis = 10^3 to 10^4 (depending on the system size) disorder realizations.
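For a handful of spins, the stroboscopic dynamics of this model can be checked by exact matrix exponentiation; the sketch below (NumPy/SciPy, with illustrative parameter values rather than those used in our simulations, and omitting the AC probe field) shows the period-doubling flip of the magnetization at ϕ = π.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

N, T_S, phi = 6, 1.0, np.pi
rng = np.random.default_rng(0)
sx = np.array([[0.0, 1.0], [1.0, 0.0]]); sz = np.diag([1.0, -1.0]); id2 = np.eye(2)

def op(single, site):
    """Embed a single-site operator at `site` into the N-spin Hilbert space."""
    return reduce(np.kron, [single if i == site else id2 for i in range(N)])

J = rng.uniform(-1, 1, N); hz = rng.uniform(-1, 1, N); hx = rng.uniform(-0.1, 0.1, N)
H0 = sum(J[i] * op(sz, i) @ op(sz, (i + 1) % N) + hz[i] * op(sz, i) + hx[i] * op(sx, i)
         for i in range(N))
Sx = 0.5 * sum(op(sx, i) for i in range(N))

U_F = expm(-1j * phi * Sx) @ expm(-1j * H0 * T_S)    # one Floquet period: evolve, then kick

psi = np.zeros(2 ** N, dtype=complex); psi[0] = 1.0  # all spins up along z
for step in range(1, 5):
    psi = U_F @ psi
    mz = np.real(sum(psi.conj() @ (op(sz, i) @ psi) for i in range(N))) / N
    print(f"kick {step}: <sigma_z> = {mz:+.3f}")      # sign alternates: period doubling
```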
Sensing and quantum Fisher information.
The typical protocol to estimate an unknown parameter, h_ ac in the present case, involves a few steps: (i) initialization of the sensor in advantageous states (usually relying on quantum correlations among its constituents), (ii)
coupling it to the signal of interest and letting them evolve, until (iii) a final measurement is performed on the sensor state. The unknown parameter is therefore encoded in the sensor state, ρ̂(t,h_ ac), and can be inferred from the final measurements. The most general measurement can be described by a set of positive operator-valued measure (POVM) operators, {Ê_j }_j, for which the outcomes are given with probability p_j(t) = Tr(Ê_j ρ̂(t,h_ ac)). The uncertainty on estimating the unknown parameter from such a POVM measurement is bounded by the Cramer-Rao inequality Δ h_ ac(t) ≥ 1/√(F_C(h_ ac,ρ̂(t,h_ ac))), with Δ h_ ac(t) the variance of the estimation and F_C the classical Fisher information. Optimizing over all POVM measurements one obtains a tighter bound,
max_{POVMS}Δ h_ ac(t) ≥ 1/√(F(h_ ac,ρ̂(t,h_ ac)))
with F(h_ ac,ρ̂(t,h_ ac)) given by the quantum Fisher information <cit.>. The optimal measurement can be obtained from the eigenvectors of the symmetric logarithmic derivative (SLD) operator, defined by the relation ∂_h_ acρ̂(t,h_ ac) = (ρ̂(t,h_ ac) L̂_h_ ac + L̂_h_ acρ̂(t,h_ ac))/2, for which the Fisher information reduces to F(h_ ac,ρ̂(t,h_ ac))=Tr(ρ̂(t,h_ ac) L̂_h_ ac^2). For pure states the Fisher information has an alternative simpler form,
F_h_ ac(t)/4 = ⟨∂_h_ acψ(t,h_ ac) | ∂_h_ acψ(t,h_ ac) ⟩ -
|⟨ψ(t,h_ ac) | ∂_h_ acψ(t,h_ ac) ⟩ |^2
where ∂_h_ ac = ∂/∂ h_ ac is the partial derivative with respect to the estimated parameter and for simplicity of notation we denote from now on F_h_ ac(t) ≡ F(h_ ac,|ψ(t,h_ ac)⟩⟨ψ(t,h_ ac)|).
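For pure states, this expression can also be evaluated numerically by finite differences of the h_ ac-dependent state; the short sketch below is generic, and the evolve callable is a placeholder for whatever protocol generates the state vector |ψ(t, h_ ac)⟩.

```python
import numpy as np

def fisher_information(evolve, h_ac, eps=1e-6):
    """QFI of a pure state, F = 4( <dpsi|dpsi> - |<psi|dpsi>|^2 ), with the
    derivative taken by a central finite difference in h_ac.
    `evolve(h)` must return the state vector for field amplitude h (placeholder)."""
    psi = evolve(h_ac)
    dpsi = (evolve(h_ac + eps) - evolve(h_ac - eps)) / (2.0 * eps)
    return 4.0 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(psi, dpsi)) ** 2)
```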
One can derive a simpler form to compute the Fisher information, reformulating it in terms of time-correlated variances, as follows.
The unitary operator for the time evolution
Û(t) = 𝒯e^∫_0^t -i Ĥ(t')dt' with 𝒯 the time ordering operator, can be expanded for infinitesimal time steps as
Û(t) = lim_K →∞∏_k=1^K e^-i Ĥ(t_k) dt = lim_K →∞∏_k=1^K û_k
with dt = t/K, t_k = k dt and û_k ≡ e^-i H(t_k) dt. The partial derivative of the Floquet unitary follows directly
∂Û(t)/∂ h_ ac = lim_K →∞∑_k'=1^K ( ∏_k>k'û_k)
∂û_k'/∂ h_ ac( ∏_k< k'û_k ).
and can be rewritten in the simpler form
∂Û(t)/∂ h_ ac = Û(t)
Ŝ(t), with
Ŝ(t) = -i ∫_0^t Û(t')^†Ĥ_AC(t')
Û(t') dt'
the scrambled signal operator. Therefore, from this formulation the Fisher information is given by,
F_h_ ac(t)/4 = var(Ŝ(t))
where var(Â) = ⟨Â^2 ⟩ - ⟨Â⟩^2 is the variance of the operator with the expectation value computed for the initial state of the sensor ⟨ ... ⟩≡⟨ψ(0)| ...| ψ(0) ⟩.
Case with ϕ = π, h_i^x = 0, h_ ac→ 0:
In the simplest case of the model one can obtain analytical results for its sensing performance for the AC field. Specifically, we consider perfect kick flips (ϕ = π), no transverse magnetic fields h_i^x = 0, ∀ i, and the limit of infinitesimal AC amplitude h_ ac→ 0. In this case the dynamics induces no decoherence among the spins (apart from the global kick) and is largely constrained, e.g. mapping classical states onto classical ones. Despite its simplicity, we can grasp several important properties for sensing with such systems.
The Heisenberg operators are straightly computed
Û(t,0)^†Ĥ_AC(t) Û(t,0) = -α(t) (-1)^[t/T_S]Ŝ_z with [t/T_S] the largest integer smaller than or equal
to t/T_S. Therefore the Fisher information reduces to
F_h_ ac→0(t)/4 = (φ(t))^2 var(Ŝ_z(0))
which splits into two independent terms. On one hand it scales with the spin variance of the initial state along the z-direction.
Such dependence highlights the interacting nature of the spins, whose correlations shall provide an enhanced sensing for the system (possible superlinear scaling with N).
On the other hand it has a purely time-dependent term according to the accumulated phase φ(t) ≡∫_0^t α(t') (-1)^[t'/T_S] dt' during the dynamics.
It is worth remarking that this term is the same as for usual non-interacting sensing protocols, such as Carr-Purcell (CP) pulse trains <cit.> or periodic dynamical decoupling (PDD) sequences <cit.>.
The maximum accumulated phase occurs when the α(t) modulation is always in phase with the sign factor (-1)^[t'/T_S], leading to a quadratic growth with time (∼ t^2). In other words, when the spin dynamics is fully in phase with the AC field.
This happens e.g., when the spins are in period-doubling resonance with the field, ω_S =2 ω_AC, which for stroboscopic times t=pT_S reduces to
φ|_ω_S = 2ω_AC = 2pT_S cos(θ_AC)/π,
with p an integer number. Slight deviations from the resonant case, ω_S→ 2ω_AC + ϵ/T_S, lead to off-resonance effects on time scales of the order of p_off = θ_AC/ϵ, for which the accumulated phase decreases its overall growth.
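This resonant expression is easy to verify numerically; the following snippet (with illustrative parameter values) compares the integral defining φ(t) with the closed form 2pT_S cos(θ_AC)/π:

```python
import numpy as np

T_S, theta_AC, p = 1.0, 0.3, 10
omega_AC = np.pi / T_S                       # period-doubling resonance: omega_S = 2*omega_AC
t = np.linspace(0.0, p * T_S, 200_001)
sign = np.where(np.floor(t / T_S) % 2 == 0, 1.0, -1.0)   # (-1)^[t/T_S]
integrand = np.sin(omega_AC * t + theta_AC) * sign
phi_num = np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (t[1] - t[0])   # trapezoidal rule
print(phi_num, 2 * p * T_S * np.cos(theta_AC) / np.pi)                     # the two agree
```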
In summary from this simplest case we see that (i) the sensor can be enhanced due to the correlations among its spins, (ii) the optimal performance occurs for period-doubling resonance, and (iii) that off-resonance conditions impose a time scale for the optimal growth of Fisher information.
General case. We now turn to the more realistic case, considering both imperfections in the kicks (ϕ≠π) as well as decoherence in the dynamics (h_i^x ≠ 0). We first consider the estimation of infinitesimal fields h_ ac→ 0, i.e. linear response regime. The interactions among the spins now strongly impact the dynamics of the system and, in general one expects thermalization for short times spoiling the precision of the sensor. A DTC is nevertheless robust to perturbations, and in this way the sensor performance can be qualitatively preserved.
In fact, we observe that as long as the time-ordered period-doubling dynamics is maintained in the sensor - Fig.(<ref>b) - the FI grows optimally in time - Fig.(<ref>a). We focus here on the resonant case, with ω_S = 2 ω_AC. We see that the Fisher information grows optimally until a transient time, and later slows its growth. The transient time is directly related to the thermalization of the DTC period-doubling dynamics, which scales exponentially with system size, t_th,DTC∼ e^γ N, with γ > 0.
The optimal performance depends on the tuning of the sensor frequency with the AC probe field - see Fig.(<ref>c). Their mismatch sets an off-resonance lifetime
t_ off∼ |ω - 2ω_AC|^-γ', with γ'>0, for the optimal performance of the sensor (similar to p_ off discussed previously).
Interestingly, the response of the sensor to the AC field keeps a well structured form, as highlighted in Fig.(<ref>d). The collective interactions among the spins stabilize the sensor against noise. Moreover, the maximum growth of the Fisher information can beat the standard classical limit, with a higher-than-quadratic growth in time, F_h_ ac(t) ∼ t^β with β≳ 2. For the studied system parameters and finite time windows we find β≈ 2.1,
highlighting moderate enhancements due to the entanglement among the spins. We recall that one may also profit from the initial preparation of the sensor, precisely, from the spin variance of the initial state and its possible superextensivity with N. Since the DTC dynamics is robust to initial conditions, the sensor performance shall inherit this characteristic.
It is important to contrast such behaviors once the sensor loses its DTC ordering, e.g. due to large imperfections in the kicking phase or stronger decoherence in the dynamics. In this case the thermalization time for the system is not extensive with the system size (usually of the order of t_ th∼𝒪(T_S)). Therefore any ordered dynamics among the spins quickly fades, deteriorating the sensor performance. We show in Fig.(<ref>) the case of a large kicking imperfection with ϕ = 2.6. We see - Fig.(<ref>a) - that the thermalization time for the magnetization is of the order of a few kicks, roughly independent of the system size. The Fisher information - Fig.(<ref>b) - grows more slowly than quadratically in time and has no structured response to the probe field; rather, it features a noisy dependence on the AC field. Interestingly, despite its poor performance in estimating the field,
the sensor, in this case, still shows an important dual role. Specifically, from its response to the field it could be exploited as a probe to determine the underlying (ordered or not) phase of a system.
Beyond the linear-response regime, i.e. considering finite h_ ac≪ 1 effects, the AC field shall become dominant for sufficiently long times. In this limit the sensor state is strongly influenced by the field, and therefore we expect a high sensitivity for its estimation. Such effects appear in our sensor at time scales of the order of t_AC∼ h_ ac^-1. In the resonant case (ω_S = 2ω_AC) we observe that the AC field tends to stabilize the period-doubling magnetization of the system, as shown in Fig.(<ref>a).
The Fisher information in such regimes (t ≳ t_AC) changes its behavior to a purely quadratic growth in time - Fig.(<ref>b) - therefore recovering the standard classical limit. It is important to remark that such
quadratic growth behavior is apparently generic, i.e., independent of the sensor specifics (DTC or ergodic), of the number of spins in the system, and of the thermalization time (further details in the SM). Therefore one may interpret such time scales as a strong-AC regime in which the detailed many-body properties of the sensor no longer play a dominant role in the measurement estimation.
On top of the t_AC time scale, off-resonance effects also influence the sensor performance. Even for times larger than t_AC we observe an off-resonance characteristic time t_ off∼ |ω - ω_AC|^-γ”, with γ”>0, which tends to destabilize the period-doubling magnetization. The Fisher information, on the other hand, slightly slows down its growth before recovering the quadratic growth at later times.
Similar to the infinitesimal-field limit, the Fisher information keeps its structured dependence on the AC probe field, with a peak at the period-doubling resonance (details in the SM).
Conclusions. In this work we discuss DTCs as sensors for AC fields. Their optimal performance - reached once the sensor is set on period-doubling resonance with the field - is shown to offer several advantages, overcoming the shot-noise limit, allowing long-time sensing measurements, and being inherently robust to noise or imperfections in the protocol. The sensor moreover offers a promising dual role, acting as a probe for the ordering of general systems. Our analysis focused on spin-half DTCs displaying period-doubling dynamics. It would be interesting to explore other forms of time crystal sensors, such as those featuring higher period-n-tuplings <cit.> or supported in open system dynamics with either discrete <cit.> or continuous <cit.> time translation symmetry breaking.
Acknowledgements. F.I. acknowledges financial support from Alexander von Humboldt foundation and the Brazilian funding agencies CAPES, CNPQ, and FAPERJ (Grants No. 308205/2019-7, No. E-26/211.318/2019, No. 151064/2022-9, and No. E-26/201.365/2022).
A.S. acknowledges financial support from
the Spanish Agencia Estatal de Investigación, Grant
No. PID2019-107609GB-I00, the European Commission
QuantERA grant ExTRaQT (Spanish MICINN project
PCI2022-132965), and Catalan Government for the
project QuantumCAT 001-P-001644, co-financed by the
European Regional Development Fund (FEDER).
R.F. ackowledges financial support from PNRR MUR project PE0000023-NQSTI and by the European Union (ERC, RAVE, 101053159). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
99
Wilczek2012 F. Wilczek, Phys. Rev. Lett. 109, 160401 (2012).
Sacha_review Krzysztof Sacha and Jakub Zakrzewski, Time crystals: A review, Rep. Prog. Phys. 81, 016401 (2018).
Else_review Dominic V. Else, Christopher Monroe, Chetan Nayak, and Norman Y. Yao, Discrete time crystals, Annu. Rev.
Condens. Matter Phys. 11, 467 (2020).
Zaletel2023 Michael P. Zaletel, Mikhail Lukin, Christopher Monroe, Chetan Nayak, Frank Wilczek, Norman Y. Yao, Colloquium: Quantum and Classical Discrete Time Crystals, arXiv:2305.08904 (2023).
Sacha-book Krzysztof Sacha, Time Crystals, (Springer Nature), (2020)
Zhang2017
J. Zhang, P.W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A.C. Potter, A. Vishwanath, N.Y. Yao,
and C. Monroe, Nature 543, 217 (2017).
Choi2017a
S. Choi, J. Choi, R. Landig, G. Kucsko, H. Zhou, J. Isoya, F. Jelezko, S. Onoda, H. Sumiya, V. Khemani, C. von Keyserlingk,
N.Y. Yao, E. Demler, and M.D. Lukin1, Nature 543, 221 (2017).
Else2016
D. V. Else, B. Bauer, and C. Nayak,
Phys. Rev. Lett. 117, 090402 (2016).
Khemani2016
V. Khemani, A. Lazarides, R. Moessner, and S. L. Sondhi,
Phys. Rev. Lett. 116, 250401 (2016).
Degen_review C. L. Degen, F. Reinhard, P. Cappellaro, Quantum sensing, Rev. Mod. Phys. 89, 035002 (2017).
Giovannetti_review Vittorio Giovannetti, Seth Lloyd and Lorenzo Maccone, Advances in quantum metrology, Nature Photonics 5, 222–229 (2011).
Zanardi2008 Paolo Zanardi, Matteo G. A. Paris and Lorenzo Campos Venuti, Phys. Rev. A 78, 042105 (2008).
Invernizzi2008 Carmen Invernizzi, Michael Korbman, Lorenzo Campos Venuti and Matteo G. A. Paris,
Phys. Rev. A 78, 042106 (2008).
Estarellas2020 M. P. Estarellas et al., Simulating complex quantum networks with time crystals. Sci. Adv. 6, eaay8892 (2020).
Bomantara2018 Raditya Weda Bomantara and Jiangbin Gong†, Simulation of Non-Abelian Braiding in Majorana Time Crystals, Physical Review Letters 120, 230405 (2018).
Carollo2020 Federico Carollo, Kay Brandner, and Igor Lesanovsky, Nonequilibrium Many-Body Quantum Engine Driven by Time-Translation Symmetry Breaking, Phys. Rev. Lett. 125, 240602 (2020).
Montenegro2023 V. Montenegro, M. G. Genoni, A. Bayat, M. G. A. Paris, arXiv:2301.02103 (2023).
Suter2016 Dieter Suter and Gonzalo A. Álvarez
Rev. Mod. Phys. 88, 041001 (2016).
Khodjasteh2005 K. Khodjasteh and D. A. Lidar, Fault-Tolerant Quantum Dynamical Decoupling, Phys. Rev. Letters 95, 180501 (2005).
Zhou2020 Hengyun Zhou et al., Quantum Metrology with Strongly Interacting Spin Systems, Physical Review X 10, 031003 (2020).
Bayat2022 Utkarsh Mishra and Abolfazl Bayat, Scientific Reports 12, 14760 (2022).
Choi2017 Soonwon Choi, Norman Y. Yao and Mikhail D. Lukin,
Quantum metrology based on strongly correlated matter, arXiv:1801.00042 (2017).
Liu2020 Jing Liu, Haidong Yuan , Xiao-Ming Lu and Xiaoguang Wang, Quantum Fisher information matrix and multiparameter estimation, J. Phys. A: Math. Theor. 53 (2020) 023001.
Carr1954 H. Y. Carr and E. M. Purcell, Effects of Diffusion on Free Precession in Nuclear Magnetic Resonance Experiments, Phys. Rev. 94, 630 (1954).
Federica_2019 Floquet time crystals in clock models, Federica Maria Surace, Angelo Russomanno, Marcello Dalmonte, Alessandro Silva, Rosario Fazio and Fernando Iemini, Physical Review B 99, 104303 (2019).
Gong_2018 Zongping Gong, Ryusuke Hamazaki and Masahito Ueda, Discrete Time-Crystalline Order in Cavity and Circuit QED Systems, Physical Review Letters 120, 040404 (2018).
Iemini2018 F. Iemini, A. Russomanno, J. Keeling, M. Schirò, M. Dalmonte and R. Fazio, Boundary Time Crystals,
Physical Review Letters 121, 035301 (2018).
Supplemental Material
Fernando Iemini, Rosario Fazio and Anna Sanpera
In this Supplemental Material we give further details on the sensor performance along both its DTC and ergodic phases.
§ DTC SENSOR
We first show the dependence of the DTC sensor on the AC frequency, beyond the linear-response limit. In Fig.(<ref>) we show such a dependence for a fixed finite amplitude h_ ac=10^-1. We see that mismatching the AC frequency with the internal DTC one tends to destroy the stability of the period-doubling magnetization dynamics - Fig.(<ref>a). The time at which these effects occur is inversely proportional to the frequency mismatch.
The effect on the Fisher information dynamics - Fig.(<ref>b) - is a slowdown of its growth, until at later times it recovers the quadratic one. Nevertheless, the Fisher information preserves a structured dependence on the AC frequency at different fixed finite times - Fig.(<ref>c).
The dependence of the sensor on the field amplitude is shown in Fig.(<ref>). We see that dominant AC effects appear in the dynamics at the characteristic time scale t_ AC∼ h_ ac^-1. For the sensor tuned on period-doubling resonance with the AC field, their main effect is to stabilize the period-doubling magnetization and recover the quadratic growth in time of the Fisher information.
§ ERGODIC SENSOR
Once tuned to its ergodic phase, the sensor loses its long-range spatio-temporal ordering and therefore its improved performance. We show in Fig.(<ref>) the finite-h_ ac effects in the case of a large kick imperfection (ϕ = 2.6) leading to an ergodic phase. In fact, we see no finite-size effects on the dynamics of the system, and for times larger than the AC characteristic time the Fisher information recovers its standard quadratic growth in time.
|
http://arxiv.org/abs/2306.09014v1
|
20230615101600
|
A Review and Comparative Study of Close-Range Geometric Camera Calibration Tools
|
[
"Jianzhu Huai",
"Yuan Zhuang",
"Yuxin Shao",
"Grzegorz Jozkow",
"Binliang Wang",
"Junhui Liu",
"Yijia He",
"Alper Yilmaz"
] |
eess.IV
|
[
"eess.IV"
] |
A Review and Comparative Study of Close-Range Geometric Camera Calibration Tools
Jianzhu Huai, Yuan Zhuang^†,
Yuxin Shao, Grzegorz Jozkow,
Binliang Wang, Junhui Liu, Yijia He, and Alper Yilmaz
Jianzhu Huai, Yuxin Shao, and Yuan Zhuang are with
the Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS),
Wuhan University, 129 Luoyu Road, Wuhan, Hubei, China.
Y. Zhuang is also with Hubei Luojia Laboratory, Wuhan, Hubei, China,
and Wuhan University Shenzhen Research Institute, Shenzhen, Guangdong, China.
Grzegorz Jozkow is with the Institute of Geodesy and Geoinformatics, Wroclaw University of Environmental and Life Sciences, Wroclaw, Poland
Alper Yilmaz is with the Department of Civil, Environmental, and Geodetic Engineering, The Ohio State University, Columbus, OH, US
^†Corresponding author: [email protected]
July 31, 2023
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In many camera-based applications, it is necessary to find the geometric relationship between incoming rays and image pixels, i.e., the projection model, through geometric camera calibration (GCC).
Aiming to provide practical calibration guidelines, this work surveys and evaluates the existing GCC tools.
The survey covers camera models, calibration targets, and algorithms used in these tools, highlighting their properties and the trends in GCC development.
The evaluation compares six target-based GCC tools, namely, BabelCalib, Basalt, Camodocal, Kalibr, the MATLAB calibrator, and the OpenCV-based ROS calibrator,
with simulated and real data for cameras of wide-angle and fisheye lenses described by three traditional projection models.
These tests reveal the strengths and weaknesses of these camera models, as well as the repeatability of these GCC tools.
In view of the survey and evaluation, future research directions of GCC are also discussed.
geometric camera calibration, calibration tool, camera model, calibration target, calibration algorithm.
§ INTRODUCTION
Cameras are indispensable to a host of applications ranging from remote sensing <cit.>, surveying <cit.>,
robotics <cit.>, to endoscopy <cit.>.
These applications usually need the knowledge of the geometric relationship between the real-world points and their images in a camera (Fig. <ref>).
To solve for the geometric mapping, the geometric camera calibration (GCC) is introduced.
As one of the converging points of computer vision, internet of things, and robotics,
GCC has been extensively studied since the 1970s and is still being actively researched today,
possibly driven by the evolving needs of various applications.
A wide range of cameras have been developed and can be categorized in several ways.
With varying operating principles, there are traditional cameras, depth cameras, event cameras, thermal cameras, and so on.
This paper focuses on the traditional cameras that measure intensities at pixels of an image due to visible light.
Other types of cameras are usually modeled with the same geometric models as traditional cameras.
Based on the angle of view (AOV), cameras can be roughly grouped into conventional cameras (AOV typically below about 64^∘),
wide-angle cameras (up to roughly 100^∘), fisheye cameras,
and omnidirectional cameras (≥ 180^∘), with blurry boundaries between adjacent groups.
The conventional and wide-angle cameras are usually well represented by a pinhole model, i.e., the perspective model.
The omnidirectional cameras include fisheye cameras with an AOV ≥ 180^∘, and catadioptric cameras comprising of lenses and mirrors (“cata” for mirror reflection and “dioptric” for lens refraction).
There are also camera rigs consisting of multiple cameras which achieve a great AOV by stitching images.
Based on whether all incoming rays pass through a single point, cameras can be divided into central cameras of a single effective viewpoint, i.e., the optical center, and non-central cameras.
Central cameras include the conventional cameras, fisheye cameras with an AOV ≤ 195^∘, and many catadioptric cameras built by combining a pinhole camera and hyperbolic, parabolic, or elliptical mirrors.
Instances of non-central cameras include catadiptric cameras built with spherical mirrors.
As a special class in the non-central cameras, axial cameras have all projection rays intersect a line, e.g., the push-broom cameras on some remote sensing satellites.
Numerous geometric camera models <cit.> have been proposed, ranging from specific global models of a dozen parameters to generic local models of thousands of parameters.
Traditional geometric camera models are tailored for specific lens types, and expressed by a closed-form function of usually fewer than 100 parameters.
They are global since a parameter's change affects the projection of every incoming ray.
These models are well supported by the existing calibration tools, and structure from motion (SfM) packages.
By contrast, generic models can model a wide range of cameras by using lots of parameters each of which determines the projection of incoming rays in a local area,
for instance, B-spline models <cit.>.
To the extreme, a local model associates separate ray parameters for each pixel, giving a per-pixel model <cit.>.
These models achieve a continuous mapping between ray directions and image points by interpolation.
While they are typically more accurate than global models, they also require more data for calibration.
Numerous tools have been developed for carrying out GCC, each with a unique set of features.
They are often available as proprietary programs,
such as the camera calibrator in MATLAB <cit.> and Agisoft Metashape <cit.>,
or open-source programs, such as Kalibr <cit.>.
As for similarities, existing tools usually support global camera models and calibration with some planar target.
Notably, many tools are based on the same underlying packages, e.g., OpenCV <cit.>, thus,
they tend to have similar limitations.
Moreover, many programs developed independently are very close in functionality, implying a possible duplicate effort.
As for practical differences, these tools usually support different sets of camera models and calibration targets.
The diverse landscape of camera models and calibration tools on one hand offers ready-to-use solutions in a variety of situations,
but on the other hand, it gets overwhelming for practitioners to choose the proper calibration tool.
To address this difficulty, quite a few comparative studies have been conducted.
For instance, three calibration algorithms were compared in <cit.> for cameras with large focal lengths.
Digital displays and printed targets were compared in <cit.> for close-range cameras.
These reviews usually focus on components of GCC, such as camera models or calibration targets.
Overall, there is a lack of a qualitative overview and quantitative comparison of existing GCC tools that would elucidate the choice of the proper camera model and calibration tool.
To fill this gap, we extensively review existing GCC tools from several practical aspects and benchmark several popular tools with simulated and real data.
To confine the scope while catering to a large audience, this paper focuses on traditional close-range grayscale or color monocular cameras,
as we believe they are actively studied and their calibration methods often carry over to other camera types with some adaption.
The contributions of this work are summarized as follows:
First, this review categorizes camera models, calibration targets, and calibration algorithms as used in GCC tools, providing a concise reference for these aspects.
We then qualitatively reveal the strengths and similarities of these calibration tools,
hopefully preventing repetitive development efforts in the future.
Second, an evaluation of six calibration tools is conducted for in-house cameras with varying AOV by simulation and real-data tests to show their accuracy and repeatability.
The evaluation clearly shows strengths and weaknesses of three popular global geometric camera models and indicates which calibration tool to use for close-range applications.
Third, based on the review and evaluation, we highlight future research directions for GCC.
The following text is organized as shown in Fig. <ref>.
Next, Section <ref> briefly reviews related work on comparative studies of GCC.
For the available camera calibration tools, Section <ref> sorts out the camera models,
the calibration targets, and the calibration algorithms.
The GCC tools are reviewed in Section <ref>.
Section <ref> presents experiments of six calibration tools with a range of cameras and three popular global camera models.
Finally, conclusions and future research trends are given in Section <ref>.
§ RELATED WORK
This section briefly reviews comparative studies and surveys about GCC from several aspects including camera models, calibration targets, and calibration methods.
§.§ Camera Models
Comparative studies about camera models are usually conducted in papers proposing new or enhanced models.
For fisheye cameras, in <cit.>, the double sphere (DS) model was proposed and compared with several global models
including the Kannala-Brandt (KB) model <cit.>, the extended unified camera model (EUCM) <cit.>,
the field of view (FOV) model <cit.>, validating that its accuracy approached that of the KB model with 8 parameters.
In <cit.>, a per-pixel generic model was shown to be more accurate than a pinhole camera model with radial distortion.
The generic B-spline model <cit.> was enhanced in <cit.> with a denser grid of control points for the cubic B-spline surface,
and it was shown that generic models led to more accurate results than traditional global models in photogrammetric applications.
Authors of <cit.> extensively reviewed existing camera models and established a taxonomy based on several criteria.
In this paper, we survey the camera models commonly found in GCC tools and provide their exact formulations for reference (Section <ref>).
§.§ Calibration Targets
To achieve high accuracy, GCC is often performed with a set of points of known positions, such as a calibration field used in remote sensing, and
calibration targets in close-range applications.
The diversity of calibration targets made necessary comparative analyses of these targets.
Regarding control point detection in camera calibration, circle grids and checkerboards were studied in <cit.> and it was found that
circles suffered from perspective and distortion biases whereas corner points of checkerboards were invariant to the distortion bias.
Schmalz et al. <cit.> systematically compared the active targets with digital displays to the printed checkerboard for GCC
with several combinations of displays, cameras, and lenses.
They found that calibration with the active target had much lower reprojection errors,
but required compensation for the refraction of the glass plate and multiple images per pose and hence a tripod or the like.
In an underwater environment, fiducial markers including the ARToolKit <cit.>, the AprilTag <cit.>, and the Aruco <cit.> were compared in <cit.>
where the AprilTag showed better detection performance but required higher computation.
In environments with occlusions and rotations, three markers, the ARTag <cit.>, the AprilTag <cit.>,
and the CALTag <cit.> were compared in <cit.> and the CALTag emprically achieved the best recognition rate.
For pose tracking in surgery, Kunz et al. <cit.> compared the Aruco and AprilTag markers and found that
both could achieve sub-millimeter accuracy at distances up to 1 m.
For localization of unmanned aerial systems, four fiducial markers, the ARTag, the AprilTag, the Aruco, and the STag <cit.>, were compared in <cit.> in terms of detection rate and localization accuracy.
The AprilTag, the STag, and the Aruco were shown to have close performance whereas the Aruco was the most efficient in computation.
In simulation, Zakiev et al. <cit.> reported that an Aruco marker had a much better detection rate than an AprilTag marker
when the marker board was rotated about an in-plane axis.
For drone landing, several variants of the AprilTag and the circular WhyCode <cit.> were compared in <cit.>
on an embedded system and the suitable variants were determined.
Unlike the above comparative studies about targets, our paper briefly surveys the calibration targets (Section <ref>) supported by the available GCC tools.
§.§ Calibration Algorithms
The algorithms for GCC are vast, ranging from target-based to self-calibration, from offline calibration to online interactive calibration.
Quite a few papers have reviewed the GCC methods in view of different applications.
For close-range photogrammetry, an overview of developments of camera calibration methods up to 1995 was provided in <cit.>.
Several calibration techniques up to 1992 for conventional cameras with a pinhole model were reviewed and evaluated in <cit.>.
For close-range applications, several target-based and self-calibration methods were compared in <cit.> with a 3D target and a checkerboard,
showing that the self-calibration methods based on bundle adjustment often achieved good calibration for consumer-grade cameras.
For time-of-flight range cameras, three intrinsic calibration methods were compared in <cit.> for
calibrating camera lens parameters and range error parameters by using a multi-resolution planar target.
For cameras of large focal lengths (≥35 mm), Hieronymus <cit.> compared three calibration methods,
one with a test field of a known geometric pattern, and two methods with devices for generating laser beams.
He found that these methods achieved comparably high accuracy for the pinhole model with radial and tangential distortion.
For cameras with lenses of focal lengths ≥50 mm in particle tracking velocimetry, Joshi et al.<cit.>
studied the accuracy of three camera calibration methods,
the direct linear transform (DLT) that ignores the distortion <cit.>,
a linear least squares method with the rational polynomial coefficient (RPC) model <cit.> but only using the numerator terms,
and Tsai's method which determines the intrinsic and extrinsic parameters in two steps <cit.>.
They found that the errors of Tsai's method fluctuated due to the unstable nonlinear optimization.
For infrared cameras, Usamentiaga et al.<cit.> compared three calibration methods,
a DLT method, an iterative method, and a complete method that considered lens distortion, and unsurprisingly, the last method resulted in the best distance measurements.
For roadside cameras, GCC methods based on vanishing points were compared in <cit.>, assuming no lens distortion.
For X-ray cameras ignoring radial distortion,
the DLT method <cit.>, Tsai's method <cit.>, and Zhang's method <cit.> were compared in <cit.>, and the DLT showed superiority in accuracy and operation simplicity.
For a camera-projector pair, Tiscareno et al.<cit.>
calibrated the camera with the DLT method, Tsai's method, and Zhang's method, and calibrated the projector with the DLT, through simulation.
They found that Zhang's method gave smaller reprojection errors than the others for camera calibration.
For zoom-lens cameras with varying focal lengths, calibration methods were reviewed in <cit.>.
Different from the preceding surveys and comparisons focusing on calibration methods, this paper reviews and compares GCC tools for close-range cameras of fixed intrinsic parameters.
§ GEOMETRIC CAMERA CALIBRATION COMPONENTS
This section reviews geometric camera models, targets, algorithms as available in existing calibration tools.
Before elaborating GCC, some definitions are clarified here.
The focal length is defined to be the distance between the camera's optical center and the sensor as in <cit.>.
Since the optical center is defined only for central cameras, the focal length is not defined for non-central cameras.
Accordingly, the focal length can take a range of values including the one when the camera is focused at infinity.
We define the principal/optical axis as the line passing through the optical center and orthogonal to the sensor chip.
For ease with pinhole cameras, the sensor is often inverted and placed in front of the optical center, forming the image plane <cit.>.
For a catadioptric camera, the mirror axis refers to the symmetry axis of the mirror.
We define the AOV of a lens to be the maximum angle formed by rays coming into the lens.
Likewise, the AOV of a camera is defined as the maximum angle formed by rays corresponding to the sensor's exposed pixels, along the sensor's horizontal axis, vertical axis, or diagonal,
leading to HAOV, VAOV, or DAOV, respectively.
Thus, the AOV of a camera depends on both the lens and the sensor.
§.§ Camera Models
The following describes the variety of camera models used in close-range applications, which have been adopted in GCC tools surveyed in this paper.
For camera models used in remote sensing, such as the affine camera model <cit.>, the RPC model <cit.>,
and the detector directional model <cit.>, we refer readers to <cit.>.
We begin with global models for central cameras which dominate the GCC tools, and end with generic models.
These global models are typically defined in a (forward) projection manner where image points are formulated given world points or rays,
although the same formulae may be used the other way round to obtain a ray given an image point, i.e., backward projection / back-projection / unprojection,
for instance, (<ref>) and (<ref>).
For local models, however, the backward projection is usually used to express the camera model as the forward projection can be very complex <cit.>.
For the below camera models listed in Fig. <ref>, we describe either the forward or the backward model unless both are closed-form,
with the understanding that going the other way often requires iterative optimization.
A set of symbols is defined in order here.
We denote a point in the camera frame by 𝐱_c = [X_c, Y_c, Z_c] with Euclidean coordinates X_c, Y_c, and Z_c.
The measured image point is denoted by 𝐮_m = [u_m, v_m] with pixel coordinates u_m and v_m.
The world-to-image forward projection is denoted by π(𝐱_c, 𝐢): ℝ^3 →ℝ^2 where 𝐢 is the set of intrinsic parameters.
Its inverse, the image-to-world inverse projection model is
π^-1(𝐮_m, 𝐢): ℝ^2 →𝕊^2 where 𝕊^2 is the set of 3D unit vectors.
We denote by θ the incidence angle between an incoming ray and the optical axis.
We use the subscripts `m', `d', `n', and `c' to indicate measurement, distortion, normalization, and the camera coordinate frame.
§.§.§ Global Models for Wide-Angle Cameras
Conventional and wide-angle cameras with an AOV up to about 100^∘ usually have little distortion and are well described by the pinhole model.
The set of parameters in the pinhole projection without distortion is 𝐢 = [f_x, f_y, c_x, c_y],
including the focal length and the principal point along the image plane's two axes in units of pixels.
The distortion-free pinhole model is given by
𝐮_m = π(𝐱_c, 𝐢) =
[ f_x X_c/Z_c + c_x; f_y Y_c/Z_c + c_y ],
with the closed-form inverse model,
π^-1(𝐮_𝐦, 𝐢) =
1/√(x_n^2 + y_n^2 + 1)[ x_n; y_n; 1 ],
where
x_n = (u_m - c_x)/f_x and y_n = (v_m - c_y)/f_y.
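For concreteness, a minimal Python sketch of the distortion-free pinhole projection above and its closed-form back-projection follows; the intrinsic values are made up purely for illustration.

import numpy as np

def pinhole_project(X_c, f_x, f_y, c_x, c_y):
    # Distortion-free pinhole projection of a camera-frame point (X_c, Y_c, Z_c).
    X, Y, Z = X_c
    return np.array([f_x * X / Z + c_x, f_y * Y / Z + c_y])

def pinhole_unproject(u_m, f_x, f_y, c_x, c_y):
    # Closed-form back-projection to a unit bearing vector on S^2.
    x_n = (u_m[0] - c_x) / f_x
    y_n = (u_m[1] - c_y) / f_y
    ray = np.array([x_n, y_n, 1.0])
    return ray / np.linalg.norm(ray)

# Round-trip check with illustrative intrinsics.
i = dict(f_x=600.0, f_y=600.0, c_x=320.0, c_y=240.0)
X_c = np.array([0.2, -0.1, 1.5])
u = pinhole_project(X_c, **i)
print(u, pinhole_unproject(u, **i), X_c / np.linalg.norm(X_c))  # last two agree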
To account for lens distortion, a variety of distortion models for pinhole cameras have been proposed.
The most popular one is probably the radial-tangential polynomial model, i.e., the plumb bob model or the Brown-Conrady model <cit.>.
Its intrinsic parameters, 𝐢 = [f_x, f_y, c_x, c_y, k_1, k_2, p_1, p_2], include the pinhole projection parameters, the radial distortion parameters k_j, j=1, 2, ⋯, p
(the maximum index p is usually truncated to two in practice), and the tangential distortion parameters p_1, p_2.
The pinhole radial tangential model is given by
[ x_n; y_n ] = [ X_c/Z_c; Y_c/Z_c ],
r_n^2 = x_n^2 + y_n^2,
[ x_d; y_d ] = [ x_n(1 + ∑_j=1^p k_j r_n^2j) + δ_ud; y_n(1 + ∑_j=1^p k_j r_n^2j) + δ_vd ],
[ δ_ud; δ_vd ] =
[ 2p_1 x_n y_n + p_2(r_n^2 + 2x_n^2); p_1 (r_n^2 + 2 y_n^2) + 2p_2 x_n y_n ],
π(𝐱_c, 𝐢) = [ f_x x_d + c_x; f_y y_d + c_y ].
This model usually suits lenses with an AOV up to about 120^∘ well <cit.>.
The inverse of (<ref>) has no closed-form solution, and usually requires an iterative procedure.
Notably, Drap and Lefèvre <cit.> propose an exact formula involving a power series to invert (<ref>).
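The Python sketch below implements the forward radial-tangential model above with p = 2 and inverts the distortion by a simple fixed-point iteration. It is an illustrative stand-in for the iterative procedures used in practice (e.g., in OpenCV), not an exact reproduction of any particular tool, and the iteration converges only for moderate distortion.

import numpy as np

def radtan_distort(x_n, y_n, k1, k2, p1, p2):
    # Radial-tangential (plumb bob) distortion, truncated at p = 2.
    r2 = x_n**2 + y_n**2
    radial = 1.0 + k1 * r2 + k2 * r2**2
    x_d = x_n * radial + 2*p1*x_n*y_n + p2*(r2 + 2*x_n**2)
    y_d = y_n * radial + p1*(r2 + 2*y_n**2) + 2*p2*x_n*y_n
    return x_d, y_d

def radtan_project(X_c, f_x, f_y, c_x, c_y, k1, k2, p1, p2):
    x_n, y_n = X_c[0] / X_c[2], X_c[1] / X_c[2]
    x_d, y_d = radtan_distort(x_n, y_n, k1, k2, p1, p2)
    return np.array([f_x * x_d + c_x, f_y * y_d + c_y])

def radtan_undistort(x_d, y_d, k1, k2, p1, p2, iters=20):
    # Fixed-point iteration for the (non-closed-form) inverse of the distortion.
    x_n, y_n = x_d, y_d                    # start from the distorted coordinates
    for _ in range(iters):
        xh, yh = radtan_distort(x_n, y_n, k1, k2, p1, p2)
        x_n, y_n = x_n - (xh - x_d), y_n - (yh - y_d)
    return x_n, y_n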
Alternatively, the pinhole radial tangential model can also be defined in a backward manner, i.e.,
[ x_d; y_d ] = [ (u_m - c_x)/f_x; (v_m-c_y)/f_y ],
r_d^2 = x_d^2 + y_d^2,
[ x_n; y_n ] =
[ x_d(1 + ∑_j=1^p k_j r_d^2j) + δ_ud; y_d(1 + ∑_j=1^p k_j r_d^2j) + δ_vd ],
[ δ_ud; δ_vd ] =
[ 2p_1 x_d y_d + p_2(r_d^2 + 2x_d^2); p_1 (r_d^2 + 2 y_d^2) + 2p_2 x_d y_d ],
π^-1(𝐮_m,𝐢) =
1/√(x_n^2 + y_n^2 + 1)[ x_n; y_n; 1 ].
Obviously, for the same camera, the parameters of the backward model differ from those of the forward model.
This backward model is less common but has been used in e.g., the PhotoModeler <cit.>.
The forward pinhole radial tangential model in (<ref>) can be simplified to the division model proposed by <cit.>
which is a radial symmetric model with the set of intrinsic parameters 𝐢 = [f_x, f_y, c_x, c_y, k_1],
[ x_d; y_d ] = [ (u_m - c_x) / f_x; (v_m - c_y) / f_y ],
r_d = √(x_d^2 + y_d^2),
[ x_n; y_n ] =
[ x_d / (1 + k_1 r_d^2); y_d / (1 + k_1 r_d^2) ],
π^-1(𝐮_m, 𝐢) =
1/√(x_n^2 + y_n^2 + 1)[ x_n; y_n; 1 ].
A backward rational model is proposed in <cit.>,
[ x_n; y_n ] =
[ x_d; y_d ] (1 + ∑_j=1^p k_j^1 r_d^2j)/(1 + ∑_j=1^q k_j^2 r_d^2j),
π^-1(𝐮_𝐦, 𝐢) =
1/√(x_n^2 + y_n^2 + 1)[ x_n; y_n; 1 ],
with the intrinsic parameters 𝐢 = [f_x, f_y, c_x, c_y] ∪𝐤^1 ∪𝐤^2
where 𝐤^1 = [k_j^1, j = 1, 2, ⋯, p] and
𝐤^2 = [k_j^2, j=1, 2, ⋯, q].
The rational model in OpenCV <cit.> supports p ≤ 3 and q ≤ 3.
Furthermore, the thin prism effect is considered in <cit.> along with radial and tangential distortion, where the model is defined as
[ x_n; y_n ] = [ X_c/Z_c; Y_c/Z_c ],
r_n^2 = x_n^2 + y_n^2,
[ x_d; y_d ] = [ x_n(1 + k_1 r_n^2) + δ_ud + δ_up; y_n(1 + k_1 r_n^2) + δ_vd + δ_vp ],
[ δ_up; δ_vp ] =
[ s_1 r_n^2; s_2 r_n^2 ],
π(𝐱_c, 𝐢) = [ f_x x_d + c_x; f_y y_d + c_y ],
where the tangential distortion [δ_ud, δ_vd] is given in (<ref>).
Overall, the intrinsic parameter set is
𝐢 = [f_x, f_y, c_x, c_y, k_1, p_1, p_2, s_1, s_2].
The OpenCV considers more terms for the thin prism effect by
δ_up = s_1 r_n^2 + s_2 r_n^4 and δ_vp = s_3 r_n^2 + s_4 r_n^4.
§.§.§ Global Fisheye Camera Models
Fisheye cameras typically have an AOV ≥ 100 ^∘, and can reach 280^∘ [https://www.back-bone.ca/product/entaniya-280/].
They are quite common but show great distortion, thus, quite a few global
models have been proposed.
The most popular ones are probably the KB model <cit.> and the FOV model <cit.>.
The full KB model proposed in <cit.> has 23 parameters where four describe the affine transform (<ref>),
five describe an equidistant radial symmetric distortion, and the other 14 describe the asymmetric distortion.
The commonly used KB-8 model is radially symmetric and has 8 intrinsic parameters, 𝐢 = [f_x, f_y, c_x, c_y, k_1, k_2, k_3, k_4].
It is defined by
π(𝐱_c, 𝐢) =
[ f_x d(θ) X_c/r_c + c_x; f_y d(θ) Y_c/r_c + c_y ],
r_c = √(X_c^2 + Y_c^2) = Z_c tan(θ),
d(θ) = θ + k_1 θ^3 + k_2 θ^5 + k_3 θ^7 + k_4 θ^9,
Unlike the KB-9 in <cit.>, the KB-8 model sets the coefficient of the term θ in d(θ) to be 1.
The KB-8 model can handle an AOV ≥ 180^∘,
but when it is formulated as an equidistant distortion on top of a pinhole projection as in Kalibr <cit.> and OpenCV,
the projection will fail for points of Z_c≤0.
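A minimal Python sketch of the KB-8 forward projection above follows; computing the incidence angle with atan2 keeps points with Z_c ≤ 0 projectable, in contrast to the pinhole-plus-equidistant formulation just mentioned. The intrinsic values are made up for illustration. Back-projection is not closed-form and typically solves d(θ) = r by Newton iterations.

import numpy as np

def kb8_project(X_c, f_x, f_y, c_x, c_y, k1, k2, k3, k4):
    X, Y, Z = X_c
    r_c = np.hypot(X, Y)
    if r_c < 1e-12:                      # point on the optical axis
        return np.array([c_x, c_y])
    theta = np.arctan2(r_c, Z)           # incidence angle, valid also for Z <= 0
    d = theta * (1 + k1*theta**2 + k2*theta**4 + k3*theta**6 + k4*theta**8)
    return np.array([f_x * d * X / r_c + c_x,
                     f_y * d * Y / r_c + c_y])

# A point behind the image plane (Z_c < 0) still projects for a fisheye lens.
i = dict(f_x=350.0, f_y=350.0, c_x=800.0, c_y=600.0, k1=-0.01, k2=0.001, k3=0.0, k4=0.0)
print(kb8_project(np.array([0.5, 0.2, -0.05]), **i))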
The Scaramuzza model <cit.> for central catadioptric cameras and fisheye cameras up to a 195^∘ AOV resembles the inverse of the KB-8 model.
It is defined in a backward manner for a measured image point [u_m, v_m] as
[ u_m; v_m ] =
[ c d; e 1 ][ u_h; v_h ] + [ c_x; c_y ],
π^-1(𝐮_m, 𝐢) =
1/√(u_h^2 + v_h^2 + w_h^2(ρ_h))[ u_h; v_h; w_h(ρ_h) ],
ρ_h = √(u_h^2 + v_h^2),
w_h(ρ_h) = a_0 + a_2 ρ_h^2 + a_3 ρ_h^3 + a_4 ρ_h^4,
where u_h, v_h are the ideal coordinates of the image point on a hypothetical plane orthogonal to the mirror axis.
The parameter vector for the model is 𝐢 = [a_0, a_2, a_3, a_4, c_x, c_y, c, d, e].
Since c in the 2×2 stretch matrix is about one, a_0 is similar in role to f_x or f_y in (<ref>).
This model is available in the MATLAB camera calibrator <cit.>.
For projecting a world point to the image, a polynomial approximation of the involved forward projection is adopted in <cit.> to reduce the computation.
The FOV model <cit.> has one distortion parameter and a closed-form inversion.
It has been popular for fisheye lenses in consumer products, e.g., Tango phones.
With intrinsic parameters 𝐢 = [f_x, f_y, c_x, c_y, ω], its definition is given by
π(𝐱_c, 𝐢) =
[ f_x X_c r_d/r_u + c_x; f_y Y_c r_d/r_u + c_y ],
r_u = √(X_c^2 + Y_c^2),
r_d = (1/ω) arctan2(2 r_u tan(ω/2), Z_c).
For backward projection of an image point, the FOV model has a closed-form solution given by
π^-1(𝐮_m, 𝐢) =
[ x_d sin(r_d ω)/(2 r_d tan(ω/2)); y_d sin(r_d ω)/(2 r_d tan(ω/2)); cos(r_d ω) ],
[ x_d; y_d ] = [ (u_m - c_x) / f_x; (v_m - c_y) / f_y ],
r_d = √(x_d^2 + y_d^2).
Despite only one distortion parameter, the FOV model often requires as much computation as the KB-8 model for forward and backward projections
due to the trigonometric functions.
The DS model<cit.> fits well large AOV lenses,
has a closed-form inversion, and does not involve trigonometric functions, thus making it very efficient.
This model contains 6 parameters, 𝐢 = [f_x, f_y, c_x, c_y, ξ, α].
In forward projection, a world point is projected consecutively onto two unit spheres of a center offset ξ,
and lastly projected onto the image plane using a pinhole model.
The projection model is defined by
π(𝐱_c, 𝐢) = [ f_x X_c/(α d_2 + (1-α) (ξ d_1 + Z_c)) + c_x; f_y Y_c/(α d_2 + (1-α) (ξ d_1 + Z_c)) + c_y ],
d_1 = √(X_c^2 + Y_c^2 + Z_c^2),
d_2 = √(X_c^2 + Y_c^2 + (ξ d_1 + Z_c)^2).
Its closed-form unprojection is given by
π^-1(𝐮_m, 𝐢) =
(z_d ξ + √(z_d^2 + (1-ξ^2)r_d^2))/(z_d^2 + r_d^2) [ x_d; y_d; z_d ] - [ 0; 0; ξ ],
z_d = (1 - α^2 r_d^2)/(α√(1 - (2α - 1) r_d^2) + 1 - α),
r_d^2 = x_d^2 + y_d^2.
This model has been implemented in Basalt <cit.> and Kalibr.
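A Python sketch of the DS projection and its closed-form unprojection is given below; the intrinsic values are merely illustrative of a fisheye lens and do not come from the cameras evaluated later.

import numpy as np

def ds_project(X_c, f_x, f_y, c_x, c_y, xi, alpha):
    X, Y, Z = X_c
    d1 = np.sqrt(X*X + Y*Y + Z*Z)
    d2 = np.sqrt(X*X + Y*Y + (xi*d1 + Z)**2)
    denom = alpha*d2 + (1 - alpha)*(xi*d1 + Z)
    return np.array([f_x*X/denom + c_x, f_y*Y/denom + c_y])

def ds_unproject(u_m, f_x, f_y, c_x, c_y, xi, alpha):
    x_d = (u_m[0] - c_x) / f_x
    y_d = (u_m[1] - c_y) / f_y
    r2 = x_d*x_d + y_d*y_d
    z_d = (1 - alpha*alpha*r2) / (alpha*np.sqrt(1 - (2*alpha - 1)*r2) + 1 - alpha)
    s = (z_d*xi + np.sqrt(z_d*z_d + (1 - xi*xi)*r2)) / (z_d*z_d + r2)
    return s*np.array([x_d, y_d, z_d]) - np.array([0.0, 0.0, xi])

# Round-trip check: the unprojected ray matches the normalized input point.
i = dict(f_x=350.0, f_y=350.0, c_x=800.0, c_y=600.0, xi=-0.27, alpha=0.57)
X_c = np.array([0.4, -0.3, 0.8])
u = ds_project(X_c, **i)
print(u, ds_unproject(u, **i), X_c / np.linalg.norm(X_c))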
§.§.§ Global Omnidirectional Camera Models
An omnidirectional camera has an HAOV ≥ 180^∘ and a DAOV up to 360^∘.
Several models have been developed for such cameras.
The unified camera model (UCM) in <cit.> can deal with both fisheye cameras and central catadioptric cameras, defined by
π(𝐱_c, 𝐢) =
[ γ_x X_c/(ξρ + Z_c) + c_x; γ_y Y_c/(ξρ + Z_c) + c_y ],
ρ = √(X_c^2 + Y_c^2 + Z_c^2),
with intrinsic parameters
𝐢 = [γ_x, γ_y, c_x, c_y, ξ].
When ξ=0, the above model degenerates to a pinhole model.
The unified model is formulated equivalently in <cit.> for better numeric stability.
The formulation is given by
π(𝐱_c, 𝐢) =
[ f_x X_c/(αρ + (1-α) Z_c) + c_x; f_y Y_c/(αρ + (1-α) Z_c) + c_y ],
with intrinsic parameters 𝐢 = [f_x, f_y, c_x, c_y, α] where
α = ξ /(1 + ξ), f_x = γ_x /(1 + ξ),
f_y = γ_y / (1 + ξ) .
The unprojection function for the UCM is given by
π^-1(𝐮, 𝐢) =
(ξ + √(1 + (1-ξ^2) r_d^2))/(1 + r_d^2) [ x_d; y_d; 1 ] - [ 0; 0; ξ ],
x_d = (u_m - c_x)/(f_x (1 + ξ)),
y_d = (v_m - c_y)/(f_y (1 + ξ)),
r_d^2 = x_d^2 + y_d^2, ξ = α/(1 - α).
For better accuracy with the UCM, Mei and Rives <cit.> also consider the lens distortion, the misalignment and the sensor skew.
The Mei model is defined by
[ x_n; y_n ] =
[ X_c / (Z_c + ξρ); Y_c / (Z_c + ξρ) ], r_n = √(x_n^2 + y_n^2),
[ x_d; y_d ] = [ x_n d(r_n) + 2p_1 x_n y_n + p_2(r_n^2 + 2x_n^2); y_n d(r_n) + p_1 (r_n^2 + 2 y_n^2) + 2p_2 x_n y_n ],
d(r_n) = 1+k_1 r_n^2 + k_2 r_n^4 + k_3 r_n^6,
π(𝐱_c, 𝐢) = [ γ_x (x_d + s y_d) + c_x; γ_y y_d + c_y ],
with the intrinsic parameters 𝐢 = [γ_x, γ_y, c_x, c_y, ξ, k_1, k_2, k_3, p_1, p_2, s] where k_1, k_2, and k_3 are for radial distortion,
p_1 and p_2 for misalignment, and s for skew.
This model is adopted in <cit.> and Camodocal <cit.>.
As pointed out in <cit.>, k_1 of the Mei model is redundant with ξ.
The extended unified camera model (EUCM) <cit.> enhances the UCM by a parameter β to deal with the radial distortion.
Its projection model is given by
[ u_m; v_m ] =
[ f_x X_c/(αρ + (1-α) Z_c) + c_x; f_y Y_c/(αρ + (1-α) Z_c) + c_y ],
ρ = √(β(X_c^2 + Y_c^2) + Z_c^2),
with parameters 𝐢 = [f_x, f_y, c_x, c_y, α, β],
where α∈ [0, 1], β > 0, and
αρ + (1-α) Z_c > 0.
The unprojection function for the EUCM is given by
π^-1(𝐮, 𝐢) =
1/√(x_d^2 + y_d^2 + z_d^2)[ x_d; y_d; z_d ],
[ x_d; y_d ] =
[ (u_m - c_x) / f_x; (v_m - c_y) / f_y ],
r_d^2 = x_d^2 + y_d^2,
z_d = (1 - βα^2 r_d^2)/(α√(1-(2α - 1) β r_d^2) + 1 - α).
§.§.§ Local Generic Camera Models
The preceding global camera models are available in a variety of GCC tools possibly for their simplicity, but their accuracy is also limited.
To push the accuracy limit, generic models with thousands of parameters have been proposed, such as
<cit.>.
But loosely speaking, they are still behind the global models in availability among GCC tools and in support by downstream applications.
We briefly describe two generic models implemented in <cit.>, a per-pixel model and a B-spline model.
The per-pixel model of <cit.> associates a ray direction to every pixel for a central camera and a ray direction and a 3D point on the ray to every pixel for a non-central camera.
Furthermore, interpolation between pixels is used to achieve continuous projection.
A B-spline model adopted in <cit.> associates ray parameters to a sparse set of grid points instead of all pixels.
These grid points control the cubic B-spline surface which represents the back projection function.
Notably, this B-spline model is initialized using the relative camera poses computed with the method <cit.> developed for the per-pixel model.
§.§ Calibration Targets
GCC usually depends on passive or active man-made objects, e.g., ground control points in remote sensing or planar targets in close-range calibrations.
Recent self/auto-calibration methods, e.g., <cit.>, use opportunistic environmental features,
whereas infrastructure-based methods <cit.> use a prior landmark map of the environment.
Since artificial targets are still commonly used for better accuracy control, this section surveys the targets supported by GCC tools, as listed in Fig. <ref>.
There are a few 3D targets, such as cubes <cit.> and icosahedrons <cit.>,
each of which is usually a composite of multiple planar targets.
The accuracy requirements of length and orthogonality complicate their manufacturing and hamper their accessibility.
The majority of calibration targets are planar, including surveyed markers on flat walls,
and a variety of coded patterns either displayed on digital screens <cit.> or printed out.
The targets based on digital displays usually have accurate size and good flatness and can deal with defocusing <cit.>,
but such a target usually requires capturing multiple pattern images at each pose and compensating the refraction of the display's glass plate.
So far, the printed boards are the most common targets and are widely supported by GCC tools.
They include the checkerboard, the AprilGrid <cit.>,
the circle grid,
the Charuco <cit.> board, and the recent deltille board <cit.>, etc., as shown in Fig. <ref>.
Their properties are briefly described below.
There are also numerous customized calibration targets tailored for specific algorithms, e.g.,
the random pattern aggregated from noise at multiple scales in <cit.>,
the pattern in <cit.> with dense corners for generic models,
the Ecocheck board <cit.>, the PhotoModeler circle board <cit.>.
A custom board can often be created by combining markers to disambiguate orientations, e.g., the AprilTag,
and corners invariant to perspective and lens distortion, e.g., formed from repeating squares.
Lists of fiducial markers resilient to rotation can be found in <cit.>.
§.§.§ Checkerboard
The checkerboard is probably the most common calibration target.
It is also known as chessboard. We prefer the name checkerboard which is more general than chessboard.
Many checkerboard detection improvements have been proposed, such as <cit.>.
The checkerboard requires that the corners inside the board are fully visible in an image so that their coordinates can be uniquely determined.
Though this weakness is reported to be remedied by a few recent methods <cit.>,
most current tools have not kept up.
To ensure that the pattern does not look the same after a 180^∘ rotation, a checkerboard with odd rows and even columns or even rows and odd columns is usually used.
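As an illustration of checkerboard detection as used by OpenCV-based tools, the Python sketch below finds and refines the inner corners of a single image; the file name and grid size are placeholders, and, as noted above, classic detection fails unless all inner corners are visible.

import cv2

pattern_size = (9, 6)                      # inner corners per row and per column (placeholder)
img = cv2.imread("frame_0001.png")         # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

found, corners = cv2.findChessboardCorners(
    gray, pattern_size,
    flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE)

if found:
    # Refine the detected corner locations to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    cv2.drawChessboardCorners(img, pattern_size, corners, found)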
§.§.§ Circle Grid
A circle grid<cit.> usually consists of an array of circles, symmetrically or asymmetrically distributed (see Fig. <ref>).
The circle centers are target points for calibration, and can be detected from images based on area, circularity, convexity, inertia [https://learnopencv.com/blob-detection-using-opencv-python-c/], etc.
The circle grid has several downsides: first, all circles should be visible in each image;
second, the detected circle centers suffer from the eccentricity error due to the perspective effect and lens distortion <cit.>.
The eccentricity error is worth attention especially for lenses of large distortion.
Moreover, the symmetric circle grid also has the 180^∘ ambiguity, and thus an asymmetric circle grid is generally preferred.
§.§.§ Charuco
The Charuco board<cit.> combines the checkerboard
and the Aruco tags <cit.> to deal with inaccurate corner positions and occlusions.
As shown in Fig. <ref>(d), the white squares of checkerboards are occupied by uniquely identifiable Aruco tags.
§.§.§ AprilGrid
The Aprilgrid is an array of AprilTag markers <cit.> connected by smaller black squares as shown in Fig. <ref>(e), developed in the Kalibr package <cit.>.
It is resilient to occlusion due to the AprilTag markers, and has accurate positions of corners which are surrounded by two black squares.
§.§.§ Deltille Grid
The Deltille grid is a pattern of adjacent regular triangles filled with alternating colors as shown in Fig. <ref>(f).
It is the only other possible tiling with alternating colors besides the checkerboard tiling.
Its benefits compared to checkerboards are higher corner density and more accurate corner positions.
The wide use of Deltille grids is mainly hindered by the effort to adapt the interfaces of existing calibration tools.
§.§ Calibration Algorithms
This section gives a high-level overview of the calibration algorithms as implemented in GCC tools.
According to the used solver, GCC algorithms can be grouped into traditional geometric and learning-based ones.
Generally speaking, geometric approaches are explainable and accurate, whereas the learning-based approaches are intended to be more robust and flexible, e.g.,
<cit.>.
According to the type of calibration targets, GCC algorithms can be grouped into those based on artificial targets, those based on mapped natural scenes, and
self-calibration algorithms without targets.
Calibration with an artificial target is pretty standard and widely supported in GCC packages.
It is typically offline, and usually involves two phases, linear initialization and iterative nonlinear refinement.
Instances of linear initialization are DLT, <cit.>, <cit.>.
Iterative refinement is exemplified by <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.
We refer to <cit.> for an overview of artificial-target-based methods.
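To make this two-phase structure concrete, the Python sketch below runs a Zhang-style calibration with OpenCV's calibrateCamera, which internally combines a closed-form initialization with nonlinear refinement of the pinhole radial-tangential parameters and per-image poses; the checkerboard geometry and the variables detected_corners_per_image and image_size are assumptions standing in for the output of a corner detector such as the one sketched earlier.

import cv2
import numpy as np

# Planar target points (Z = 0) for a 9x6 checkerboard with 30 mm squares (illustrative).
pattern_size, square = (9, 6), 0.03
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square

object_points, image_points = [], []
for corners in detected_corners_per_image:      # assumed: one corner array per image
    object_points.append(objp)
    image_points.append(corners.astype(np.float32))

# Closed-form initialization followed by iterative nonlinear refinement.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None)   # image_size = (width, height), assumed
print("RMS reprojection error [px]:", rms)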
Calibration with natural objects of known geometry includes infrastructure-based calibration methods, such as <cit.>.
Such methods require an accurate 3D reconstruction of the site for calibration and rough values for intrinsic parameters and
are suitable for camera systems with motion constraints.
Broadly speaking, self camera calibration by using observations of opportunistic landmarks includes recursive refinement methods, methods that recover only camera intrinsic parameters,
and methods that recover structure, motion and camera intrinsic parameters.
Methods in the first group recursively refine calibration parameters and have to start from coarse parameter values, e.g., <cit.>.
The second group dates back to <cit.> and is reviewed in <cit.>.
Methods in the last group usually rely on bundle adjustment,
thus, they typically have the best accuracy among self-calibration methods and are commonly supported in SfM packages, e.g., colmap <cit.>.
§ GCC TOOLS
This section reviews tools developed for GCC.
These tools mainly realize algorithms using artificial targets or target-free bundle adjustment.
Several learning-based GCC tools are also cited as examples from this active research field.
Since our focus is on intrinsic calibration, tools solely for extrinsic calibration are left out, e.g.,
<cit.>.
An extensive list of GCC tools to our knowledge is given in Table <ref>.
For brevity, the table only lists a few photogrammetric software tools, which unanimously allow self-calibration.
This table can serve as a reference in choosing a proper GCC tool and hopefully can help prevent duplication of development effort.
We assess a GCC tool based on characteristics which are grouped into accessibility and quality evaluation.
For accessibility, these characteristics include supported camera models and targets, stereo / multiple camera support, the user interface, source availability, and the coding language.
Usually, a graphical user interface (GUI) is more accessible than a command line interface to an average user.
When a tool is open-source or modular, it is easy to extend it to other camera models and calibration targets.
The coding language usually implies the execution efficiency and the community support.
For quality evaluation, we look at the outlier strategy and the availability of covariance output.
The outlier strategy dictates how to handle outliers in detected corners which may deviate from their true positions by a few pixels.
For quality check, all calibration tools output some metric based on reprojection errors, such as the mean reprojection error and the root mean square (RMS) reprojection error.
However, these metrics are highly dependent on the used image corners, and thus are inadequate to compare results from different methods <cit.>.
The covariance output is a quality indicator besides these metrics, and directly links to correlation analysis <cit.>.
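For tools that do not export a covariance, an approximate parameter covariance can be recovered from the final least-squares step, as in the Python sketch below; here residuals and jacobian denote the stacked reprojection errors and their Jacobian with respect to the estimated parameters, which are assumed to be accessible from the tool.

import numpy as np

def calibration_quality(residuals, jacobian):
    # residuals: stacked reprojection errors, shape (2*m,); jacobian: shape (2*m, n).
    m2, n = jacobian.shape
    rms = np.sqrt(np.mean(residuals**2))                  # RMS reprojection error (px)
    sigma2 = residuals @ residuals / (m2 - n)             # unit-weight variance estimate
    cov = sigma2 * np.linalg.inv(jacobian.T @ jacobian)   # parameter covariance
    std = np.sqrt(np.diag(cov))                           # 1-sigma parameter uncertainties
    corr = cov / np.outer(std, std)                       # correlation matrix for analysis
    return rms, cov, corr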
Next, we describe several popular calibration tools in terms of these characteristics.
§.§ BabelCalib
The monocular camera calibrator, BabelCalib, employs a back-projection model as a proxy for a variety of radial-symmetric forward camera models, including the pinhole radial distortion model (<ref>), DS (<ref>), EUCM (<ref>), FOV (<ref>), KB-8 (<ref>), and UCM (<ref>).
In practice, the back-projection model, a two-parameter division model of even degrees (<ref>), can be obtained by linear solvers,
and then the desired camera models can be regressed from the division model.
BabelCalib is agnostic to the calibration targets,
supports calibration with multiple targets,
and handles outliers with the Huber loss.
§.§ Basalt
The Basalt package <cit.> can carry out monocular camera calibration, supporting camera models including DS, EUCM, FOV, KB-8, and UCM.
Its default calibration target is the AprilGrid.
A Levenberg-Marquardt algorithm is implemented in Basalt for robust calibration with the Huber loss.
With neat use of C++ templates, it is a lean and fast tool.
§.§ calio.io
The commercial calibration tool by calio.io comes with an intuitive GUI, supports a variety of camera models, including the pinhole rational radial tangential model with the thin prism effect (<ref>),
the division model (<ref>), DS, KB-8, FOV, EUCM, and a B-spline camera model, and supports many calibration targets including the checkerboard and the Charuco board.
Moreover, it allows calibrating multiple cameras with multiple targets,
can optimize the target points to deal with board deformation, and handles outliers with the Huber loss.
§.§ Camodocal
The Camodocal package supports monocular and stereo GCC with models including the pinhole radial tangential model (<ref>), KB-8, and Mei (<ref>).
By default, it supports the checkerboard, but it is relatively easy to extend to other targets.
It uses the Cauchy loss to deal with outliers.
§.§ Kalibr
Kalibr is a popular GCC tool that can select informative images for calibration <cit.>.
It supports projection models including pinhole projection (<ref>), UCM, EUCM, and DS, and distortion models including radial tangential distortion, equidistant distortion, and FOV.
As mentioned for (<ref>), the KB-8 model in Kalibr discards points of non-positive depth Z_c.
The supported targets include checkerboards and AprilGrids.
Outliers are handled by removing corners of reprojection errors exceeding a certain threshold.
This tool has been extended to deal with the rolling shutter effect <cit.>,
and to better detect corners in images of high distortion lenses <cit.>.
§.§ MATLAB Camera Calibrator
The MATLAB camera calibrator <cit.> supports both monocular and stereo camera calibration with both the pinhole radial tangential model (<ref>) and the Scaramuzza model (<ref>).
It can be seen as a superset of <cit.> and <cit.>.
The supported targets by default are checkerboards, circle grids, and AprilTag grids.
With its modular design, it is easy to use other calibration targets, e.g., the AprilGrid.
The MATLAB calibrator has an easy-to-follow GUI and many visualization functions.
§.§ ROS Camera Calibrator
The OpenCV library provides functions for calibrating monocular and stereo cameras with
the pinhole rational radial tangential model with the thin prism effect (<ref>), the KB-8 model for fisheye cameras, and the Mei model for omnidirectional cameras.
The omnidirectional module in OpenCV also supports a multi-camera setup and can be seen as a reimplementation of the MATLAB tool in <cit.>.
The current KB-8's realization in OpenCV does not support points of non-positive depth.
The calibration functions in OpenCV do not have outlier handling schemes, but its omnidirectional module removes images of large total reprojection errors in calibration.
Several programs have been developed on top of OpenCV, such as the ROS camera calibrator <cit.> and
the MRPT camera calibrator <cit.>.
The ROS camera calibrator is a thin wrap of OpenCV calibration functions, can run in both interactive and batch mode, and supports checkerboards, circle grids, and Charuco boards.
Besides wrapping the OpenCV functions, the MRPT camera calibrator extends the checkerboard detection to support multiple checkerboards.
§.§ Self-Calibration Tools with SfM
Self-calibration is usually based on a SfM pipeline which is realized in commercial software or open source programs.
For space, we limit the discussion to several representatives of the two groups.
Professional photogrammetric packages usually support self-calibration, for instance, the Metashape by Agisoft <cit.>,
the calibrator in PhotoModeler <cit.>, and the Pix4D mapper <cit.>.
The Metashape realizes both checkerboard-based calibration and self-calibration using natural landmarks within its SfM pipeline.
Both methods support the pinhole radial tangential model and a customized fisheye model that is made of
the equidistant projection and the radial tangential distortion.
The calibration tool in PhotoModeler adopts the inverse pinhole radial tangential model (<ref>), and
supports target-based calibration with either multiple boards each of five RAD (Ringed Automatically Detected) tags or
a single board of a circle grid with four non-ringed coded tags.
When the scene to be reconstructed is much larger than the printed targets, a self-calibration of the camera in the field may be conducted with PhotoModeler.
The Pix4D mapper can also estimate the camera intrinsic parameters with a collection of images of natural scenes.
It supports the pinhole radial tangential model and an adapted Scaramuzza model.
The open-source SfM packages also widely support camera self-calibration, such as the popular colmap,
and the recent Self-Calibration package based on the Neural Radiance Field, SCNeRF <cit.>.
Based on geometric bundle adjustment, colmap supports camera models including the pinhole radial tangential model with the thin prism distortion, KB-8, and FOV.
The learning-based SCNeRF considers both geometric and photometric consistency in constructing the implicit scene geometry and estimating the camera parameters.
§ EVALUATION OF TARGET-BASED GCC TOOLS
This section evaluates six popular target-based GCC tools on simulated and real data acquired by cameras of varying AOVs,
to show their extensibility and repeatability.
§.§ Data Acquisition
The real data were captured by a UI-3251LE-M-GL camera with a 1/1.8” sensor
from IDS Imaging, fitted with the six fixed-focus lenses listed in Table <ref>, leading to camera DAOVs varying from 90^∘ to 194^∘.
Notably, in focal length, the 90^∘ lens resembles lenses on smartphones whose actual focal lengths are about 4 mm.
Also, empirically, the calibrated focal lengths are close to the physical focal lengths from Table <ref> in pixels.
The camera can capture grayscale images at 25 frames/second and resolution 1600×1200 in global shutter mode.
Prior to data capture, the exposure time was set to 5 ms to reduce motion blur.
For each lens, the camera was gently moved in front of an AprilGrid, passing through a variety of poses.
We chose the AprilGrid since it is accurate <cit.>, widely used, and resilient to occlusions,
among the reviewed calibration targets.
Three sequences, each one minute long, were recorded for each lens.
From each sequence, three subsequences each of 200 frames were uniformly drawn without replacement.
This resulted in 54 = 6×3×3 calibration sequences for six lenses.
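One plausible way to realize this subsampling, assuming each sequence is addressed by frame indices (a frame count of roughly 1500 follows from one minute at 25 frames/second):

import numpy as np

def draw_subsequences(n_frames=1500, n_subseq=3, subseq_len=200, seed=0):
    # Draw n_subseq * subseq_len distinct frame indices and split them into subsequences.
    rng = np.random.default_rng(seed)
    picked = np.sort(rng.choice(n_frames, size=n_subseq * subseq_len, replace=False))
    return np.array_split(picked, n_subseq)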
We evaluated six GCC tools on Ubuntu 20.04, including BabelCalib <cit.>, Basalt <cit.>, Camodocal <cit.>,
TartanCalib <cit.> (Kalibr with enhanced corner detection),
the Matlab calibrator <cit.>, and the ROS calibrator <cit.> based on OpenCV,
which were chosen for their wide use and easy extension to an alternative type of target and data input.
Within these tools, we evaluated several camera models: the pinhole model with the radial tangential distortion for wide-angle cameras, KB-8 for fisheye cameras, and Mei / EUCM for omnidirectional cameras,
which were chosen mainly for their wide support by GCC tools and downstream applications.
The test plan is shown in Table <ref> which lists GCC tools and camera models for processing particular data.
In general, the pinhole model with distortion was used for cameras with a DAOV < 120^∘,
KB-8 for cameras with a DAOV ≥ 100^∘, and Mei / EUCM for cameras with a DAOV ≥ 120^∘.
The simulation data were generated from the real data with the workflow shown in Fig. <ref> (bottom).
We first processed the real data with TartanCalib using the proper models according to the test plan.
Thus, we obtained the frames of detected corners and their poses,
and the estimated calibration parameters, from TartanCalib.
As an exception, for simulating observations of the KB-8 model on MTV185 sequences,
we first processed them by TartanCalib with the Mei model to get the frame poses, and then estimated the KB-8 parameters with Camodocal on the corners used by TartanCalib.
In any case, these frame poses and camera parameters were then used to simulate the corners in images
by projecting the target landmarks and adding a Gaussian noise of 0.7 px at both x and y-axis.
These camera parameters served as the reference in evaluation.
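A minimal sketch of this corner-simulation step for the pinhole-type models, assuming per-frame target poses (rvec, tvec) and reference intrinsics (K, dist) are available from the prior TartanCalib run; the fisheye and omnidirectional models would use the corresponding projection functions of their modules instead.

import numpy as np
import cv2

def simulate_corners(board_pts, rvec, tvec, K, dist, noise_px=0.7, seed=0):
    # Project 3D target landmarks into the image and perturb them with Gaussian noise.
    rng = np.random.default_rng(seed)
    proj, _ = cv2.projectPoints(board_pts, rvec, tvec, K, dist)  # shape (N, 1, 2)
    noisy = proj + rng.normal(scale=noise_px, size=proj.shape)
    return noisy.astype(np.float32)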
§.§ Data Processing
For either real or simulated data, the evaluation pipeline is shown in Fig. <ref> (top).
For better comparison, all tools except for the ROS calibrator used the same corners.
Specifically, we first ran TartanCalib on a (real or simulated) sequence,
saved the frames with detected corners, and marked the frames used by TartanCalib.
The TartanCalib was chosen to extract corners from AprilGrid images
since it could identify sufficient corners under large distortion <cit.>.
All frames of corners were provided to the ROS calibrator.
But only frames of corners used by TartanCalib were given to the four methods, BabelCalib, Basalt, Camodocal, and the MATLAB calibrator.
For these tools, we wrote necessary data loading functions and adapted the calibration initialization with the AprilGrid if needed.
Note that since TartanCalib / Kalibr always failed for the MTV185 sequences, we gave these four tools the corners obtained by TartanCalib with the Mei model for these sequences.
Feeding the four tools with corners from TartanCalib had several other reasons.
First, empirically, Kalibr usually chose ≤ 40 informative frames for calibration.
This coincided with the assertion in <cit.> that global camera models are usually well constrained with 40 frames.
Second, BabelCalib often failed to find a solution with too many frames (e.g., ≥ 100), especially for the pinhole model with radial distortion.
Third, the MATLAB calibrator took up to an hour to solve for the Scaramuzza model with 100 frames.
For the ROS calibrator, we ran it five times, each with a sample of 40 randomly chosen frames without replacement,
and kept the run of the minimum RMS reprojection error as the final result.
The special treatment of the ROS calibrator was needed because
the OpenCV calibration functions hardly dealt with outliers and often gave poor results on the corners used by TartanCalib.
Apart from the above, we ran these six calibration tools with their default parameter settings.
A test run was considered failed if no solution was found or
the recovered focal lengths deviated from the nominal values (for real data) or the reference (in simulation) by ≥ 100 px.
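This failure criterion can be expressed compactly; the helper below is a sketch assuming the estimated camera matrix and the nominal (or reference) focal lengths in pixels.

def run_failed(est_K, ref_fx, ref_fy, tol_px=100.0):
    # A run fails if no solution was found or a focal length is off by tol_px or more.
    if est_K is None:
        return True
    fx, fy = est_K[0, 0], est_K[1, 1]
    return abs(fx - ref_fx) >= tol_px or abs(fy - ref_fy) >= tol_px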
Failures in real and simulated tests are marked in Table <ref>, for BabelCalib, Basalt, Kalibr, and the ROS calibrator.
BabelCalib failed once when it converged to a wrong focal length.
Basalt failed either by converging to a wrong focal length or
by failing to converge within 100 iterations.
Kalibr's failures were due to the unsuitable pinhole equidistant model for large FOV cameras.
The ROS calibrator was hampered by outliers and largely unsuccessful on MTV185 sequences because of the unsuitable pinhole equidistant model.
Oddly, it always aborted with ill-conditioned matrices on sequences of 103^∘ and 127^∘ DAOV cameras, perhaps because it was unable to initialize in such cases.
Next, we evaluated the GCC tools by looking at the consistency of estimated camera parameters and the RMS reprojection errors for both simulated and real data.
The RMS reprojection errors are computed by these tools on all inlier observations.
The RMS values should be viewed lightly when comparing across tools since the inlier sets may vary slightly even for the same data.
§.§ Simulation Results
The simulated data were processed as described above.
The data from cameras with the S04525, E1M3518, and BM4218 lenses were processed with the pinhole radial tangential model
by five tools, i.e., all except Basalt, which did not support the model.
The camera parameter errors and the RMS reprojection errors are shown in Fig. <ref>, where failed tests were excluded in drawing the box plots.
The units are specified in parentheses for all box plot figures.
These tools generally gave very similar results close to the reference.
The focal lengths and principal points were usually within (-2, 2) px of the true values.
Since BabelCalib did not consider the tangential distortion, its estimates had larger errors than other methods, especially for BM4218 sequences of 103^∘ DAOV.
The RMS reprojection errors slightly above 0.9 px resulted from the Gaussian noise of σ = √(2)× 0.7 ≈ 0.99 px.
The ROS calibrator based on OpenCV had slightly larger error dispersions, likely due to corners of large reprojection residuals.
For a BM4218 sequence, the MATLAB calibrator converged to a focal length off by 37 px for no apparent reason.
For cameras with a DAOV ≥100^∘, the KB-8 model was solved for by five tools, i.e., all except the MATLAB calibrator, which does not support KB-8.
The parameter errors and RMS errors are shown in Fig. <ref>.
The ROS calibrator results for the BM4218, BM4018, and MTV185 lenses, and the MATLAB results for the MTV185 lens,
were excluded for consistent failures explained in Section <ref>.
Among these tools, we see that Basalt and the OpenCV-based ROS calibrator sometimes converged to focal lengths with large errors (>5 px).
Other tools consistently estimated the focal lengths and principal points within (-2, 2) px as well as the distortion parameters.
For sequences with lenses, BM4018, BT2120, and MTV185,
three tools including Kalibr, Camodocal, and the ROS / OpenCV calibrator were used to solve for the Mei parameters.
For comparison, we also solved for the EUCM model by BabelCalib and Basalt.
The parameter errors and reprojection errors are shown in Fig. <ref>,
where we used (f_x, f_y) instead of (γ_x, γ_y), as the latter has a large variance caused by that of ξ.
For both the BM4018 and BT2120 sequences, the three methods with the Mei model gave similar results.
Overall, Kalibr gave the best estimates, notably on the MTV185 sequences.
The ROS calibrator tended to have larger variances in focal lengths and principal points but their errors were within (-2, 2) px.
For the MTV185 sequences, the Camodocal and OpenCV results showed about 3 px errors in focal lengths and about 0.7 errors in ξ, but with reasonable RMS errors.
We attribute this to two reasons. First, the UCM model is numerically unstable. Second, k_1 in the Mei model is redundant.
As for the EUCM model, BabelCalib achieved smaller dispersions in f_x and f_y than Basalt, although the data were simulated with the Mei model.
§.§ Real Data Results
We processed the real data according to Table <ref> and looked at the estimated parameters and RMS reprojection errors.
For clarity, the nominal focal lengths from Table <ref> and the principal point (800, 600) are subtracted from their estimates in plots.
The sequences of S04525, E1M3518, and BM4218 lenses were processed by five tools except for Basalt.
The calibration parameters and the RMS reprojection errors are shown in Fig. <ref>, where the failed cases are not included in the box plots.
These five tools had fairly similar results.
The difference in principal points for BabelCalib was caused by its model that ignored tangential distortion.
The large dispersion in projection parameters of BM4218 sequences was likely because
the pinhole radial tangential model was somewhat improper for the camera with this lens as implied in Fig. <ref>.
For the KB-8 model, the sequences of BM4218, BM4018, BT2120, and MTV185 lenses were processed by five tools except the MATLAB calibrator.
As shown in Fig. <ref>, these tools achieved very similar calibration results in general.
Basalt failed frequently for the BM4218 and BT2120 sequences, leading to seemingly small parameter dispersions.
Comparing the dispersion of focal lengths for BM4218 in Fig. <ref> and <ref>, we think that the KB-8 model is more suitable than the pinhole radial tangential model for the BM4218 sequences.
The OpenCV-based ROS calibrator aborted on BM4218 and BM4018 sequences perhaps for failed initialization.
Neither Kalibr nor the ROS calibrator handled the MTV185 sequences of 194^∘ DAOV.
For the BT2120 data, we see that the results by OpenCV were affected by outliers leading to large RMS reprojection errors and parameters slightly deviated from other methods.
For the Mei model, we processed the BM4018, BT2120, and MTV185 sequences using tools including Kalibr, Camodocal, and the ROS calibrator.
For comparison to the EUCM model, these sequences were also processed by BabelCalib and Basalt.
The calibration parameters and the RMS reprojection errors are shown in Fig. <ref>.
With real data, these tools obtained similar values and dispersions for focal lengths and principal points,
more consistent than in the simulation shown in <ref>, where the data were simulated with the Mei model.
The distortion parameters of the Mei model had large variance despite reasonable RMS reprojection errors, due to its parameter redundancy.
Otherwise, the EUCM model resulted in consistent values for [α, β].
§ CONCLUSIONS AND RESEARCH TRENDS
In view of the ever-evolving GCC, we survey the recent GCC tools from the perspectives of camera models, calibration targets, and algorithms,
providing an overview of the benefits and limitations of these tools.
We also evaluated six well-known calibration tools, including
BabelCalib, Basalt, Camodocal, Kalibr, the MATLAB calibrator, and the OpenCV-based ROS calibrator,
to study their consistency and repeatability on simulated and real data.
From the review and experiments, we summarize several findings.
(1) Outlier handling is crucial for optimization-based camera calibration tools.
These outliers are usually detected corners a few pixels away from their actual image locations, and often occur in somewhat blurry images.
Luckily, most GCC tools can deal with outliers.
(2) The GCC tools, Camodocal, Kalibr, and the MATLAB calibrator, support well the pinhole radial tangential model.
BabelCalib and Camodocal support well the KB-8 model, and TartanCalib supports well the KB-8 model for a camera with a DAOV < 180^∘.
Camodocal, TartanCalib, and OpenCV support well the Mei model, but the model suffers from parameter instability and redundancy.
Moreover, the pinhole radial tangential model may become inadequate for cameras of a DAOV >100^∘.
The KB-8 model is typically preferred for cameras of a large DAOV due to its wide support and good accuracy when a global camera model is to be obtained.
(3) The various failure cases revealed in our tests imply the intricacy in camera model initialization and optimization of a classic GCC tool.
Aside from these failures, these GCC tools in (2) agree well with each other on calibrating conventional, fisheye, and omnidirectional cameras with proper global camera models.
Based on this study, we point out several future research directions.
Interactive Calibration
It is well known that quality data and informative data are essential for GCC.
On the opposite side are two common problems: image blur, which may be caused by rapid motion or defocus, and insufficient data. One way to ensure data quality and informativeness is interactive calibration,
which provides quality checks, selects informative data, and gives next-move suggestions in real time,
whether for target-based or target-free calibration.
AprilCal <cit.> is such a tool for target-based calibration.
Static Calibration
Target-based calibration often involves onerous, unrepeatable movements,
which can be obviated in at least two ways: calibration with a programmed robot arm, and static calibration.
Robot arm-based calibration has been studied in <cit.>.
Static calibration usually relies on active targets.
Such methods have been developed in <cit.> with application-specific setups.
We think there is still much room in static calibration to explore.
Reconstruction with Calibration
The setup of the lab calibration is usually different from the in-situ setup, e.g., in focusing distance (depth of field), exposure,
capture mode (snapshot or video), aperture,
and size of the objects of interest.
Some work has been done to mitigate the differences, e.g., out of focus, in <cit.>.
An ultimate solution would be self-calibration or calibration based on prior maps.
These methods depend on a reconstruction engine that supports calibration.
Such an engine based on traditional bundle adjustment is colmap <cit.>.
New engines capable of calibration based on deep learning are emerging, for instance,
<cit.>.
|
http://arxiv.org/abs/2306.17667v1
|
20230630135555
|
Bias-Free Estimation of Signals on Top of Unknown Backgrounds
|
[
"Johannes Diehl",
"Jakob Knollmüller",
"Oliver Schulz"
] |
astro-ph.IM
|
[
"astro-ph.IM",
"astro-ph.CO",
"hep-ex",
"stat.AP",
"stat.ME"
] |
[email protected]
Max-Planck-Institut für Physik,
Föhringer Ring 6,
80805 München,
Germany
[email protected]
ORIGINS Data Science Lab,
Excellence Cluster ORIGINS,
Boltzmannstr. 2
85748 Garching,
Germany
Technical University of Munich,
TUM School of Natural Sciences,
Boltzmannstr. 2
85748 Garching,
Germany
[email protected]
Max-Planck-Institut für Physik,
Föhringer Ring 6,
80805 München,
Germany
ORIGINS Data Science Lab,
Excellence Cluster ORIGINS,
Boltzmannstr. 2
85748 Garching,
Germany
We present a method for obtaining unbiased signal estimates in the presence of a significant background, eliminating the need for a parametric model for the background itself. Our approach is based on a minimal set of conditions for observation and background estimators, which are typically satisfied in practical scenarios. To showcase the effectiveness of our method, we apply it to simulated data from the planned dielectric axion haloscope MADMAX.
Bias-Free Estimation of Signals on Top of Unknown Backgrounds
Oliver Schulz
=============================================================
§ INTRODUCTION
Fitting a small-amplitude signal in the presence of a large-amplitude background is both a common and often challenging problem. If one has a valid parametric model for both signal and background, and the response of the experimental apparatus can be accurately modelled as well, then a forward-modelling approach can be employed: With signal parameters θ and background parameters ϕ we can usually construct a tractable and parameterised probability distribution p^_ θ, ϕ(X) that models the probability of observing a specific realisation of X. The combination of such a distribution with some actual observed data results in a likelihood function, and so all the common tools of Frequentist and (assuming signal and noise priors) Bayesian statistics can be brought to bear. If however, a parametric model is only available for the signal, but not for the background, the situation is less straightforward. One can use a parameter-free background filter, equivalent to subtracting a parameter-free background estimate from the observation. Unfortunately, such an estimator is typically affected by the presence of a signal, resp. such a filter does alter the signal to some degree. As a result, a subsequent signal estimate will be biased - unless additional measures are taken to correct for this.
This issue does, for example, arise in the context of axion haloscope experiments. These aim to detect a small, localised axion signal on top of a dominating radio-frequency background. This background is determined by the system response and characteristics of the radio-frequency receiver chain of the haloscope experiment. As the wavelengths of interest are comparable to the size of the system, it is exceedingly difficult to model this background ab-initio.
Often Savitzky-Golay or similar filters are used to non-parametrically subtract the background while simultaneously retaining potential signals <cit.>. But the signal that remains after the background filter is also altered to some degree. This leads to a bias if signal parameters are inferred directly from the filter output.
In this work we demonstrate an approach that can be used to obtain unbiased signal estimates in cases like this. We explain the general principle of our approach in Sec. <ref>. In Sec. <ref> we apply the approach to a physics example in context of the dielectric axion haloscope MADMAX <cit.> using Savitzky-Golay filters as background estimators. We conclude in Sec. <ref>.
§ GENERAL APPROACH
While the approach described here is fairly generic, we do require a few conditions to be fulfilled in order to obtain unbiased signal estimates:
* The expected value E(X) of the experimental observation X can be written as linear combination of a signal S and a background B:
E(X) = S + B.
* The response of the experiment, i.e., the measure of probability p^(X) of observing an outcome X, can be modelled with sufficient accuracy by a tractable probability distribution that is parameterised by its expectation E(X):
p^_E(X)(X) = p^_S+B(X).
If so, then we can also split the observation X into signal plus background S +B and noise N and write p^ (without loss of generality) as
p^_S+B(X) = p^_S+B(N),
with
X = E(X) + N = S + B + N
and E(N) = 0.
* We have a parameter-free and unbiased background estimator f_:
f_(B + N) ≈ B
and
E(f_(N)) = 0
* The background estimator must not completely eliminate a signal:
S ≠ 0 ⇒ f_(S) ≠ 0.
But crucially, we do not require that the signal is invariant under f_.
* The background estimator is linear:
f_(S + B + N) = f_(S) + f_(B + N)
* The possible shapes of the signal S are known and so S can be parameterised by signal parameters θ and expressed as a tractable S_θ.
The central requirements here are the last four: the existence of an unbiased and linear background estimator and a tractable parameterisation of the signal shape. The first two requirements are usually satisfied in signal-plus-background inference scenarios anyway.
Note that the domain of S and B may be different than the domain of X. If, e.g., p^_S+B is Poissonian, then the domain of S and B would be ℝ^n but the domain of the observation X would be ℕ_0^n. However, addition/subtraction of values B, S and X must be mathematically well-defined, and the background estimator must be applicable to the domain of B and S as well as to the domain of X. In practice this is typically the case, though.
Under these conditions, we can now construct a forward model of the experiment without having a parameterised background model. To do this, we make the background estimator f_ a (virtual) part of the experiment. We replace our original observation X by a virtual observation (X', B):
X' ≡ X - f_(X)
B ≡ f_(X).
We also define
S'_θ≡ S_θ - f_(S_θ).
Due to the linearity (Eq. <ref>) and bias-free nature (Eq. <ref>) of the background estimator we can approximate X as
X = X' + f_(X)
= X' + f_(B + N) + f_(S)
≈ X' + B + f_(S)
and so (due to Eq. <ref>) also approximate N as
N = X - S - B
≈ X' + B + f_(S) - S - B
= X' - S'_θ.
Now we can write an approximate but unbiased statistical model p^_θ(X') for X' that is independent of the unknown background B and that is parameterised only by the signal parameters θ:
p^_θ(X') = p^_S_θ + B(X)
≈ p^_S'_θ + B( X' - S'_θ).
So if our background estimator is accurate enough, then given an actual observation X we also have a good approximation for the likelihood function of the signal parameters:
ℒ_X'(θ) ≈ p^_S'_θ + B( X' - S'_θ).
Now we can apply common statistical tools to infer the signal parameters θ based on observations X.
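As an illustration, for Gaussian noise of known level σ the approximate log-likelihood can be sketched as follows; f_bg and signal_model are placeholders for the background estimator and the parametric signal template, and constant terms are dropped.

import numpy as np

def corrected_loglike(theta, x, f_bg, signal_model, sigma):
    # Compare X' = X - f_bg(X) with S'_theta = S_theta - f_bg(S_theta).
    x_prime = x - f_bg(x)        # background-reduced observation
    s = signal_model(theta)      # parametric signal template
    s_prime = s - f_bg(s)        # template passed through the same filter
    resid = x_prime - s_prime    # approximately pure noise N
    return -0.5 * np.sum(resid**2 / sigma**2)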
In the following we demonstrate this approach on a specific use case, with Bayesian inference, but the approach is valid in general under the conditions listed above.
§ APPLICATION TO AN AXION HALOSCOPE
In the following, we demonstrate the capability of our approach by applying it to (simulated) example data of the planned axion haloscope MADMAX, using Savitzky-Golay filters as background estimators.
§.§ The MADMAX Experiment
Axions play a crucial role in the standard Peccei-Quinn solution to the strong CP problem <cit.>. At the same time they can also be produced non-thermally in the early universe in abundances that make them a viable dark matter candidate <cit.>. It is possible to detect them with earth-based experiments probing their couplings to different standard model particles, e.g. <cit.>, most commonly the axion-to-photon coupling g_aγ.
If they make up a sizeable fraction of a homogeneous dark matter halo <cit.>, axion haloscopes like ADMX <cit.>, ORGAN <cit.>, HAYSTAC <cit.> or MADMAX <cit.> have the capability of detecting or excluding axions in certain regions of the axion mass and coupling (m_a, g_aγ) parameter space. MADMAX will achieve this by placing a metallic mirror and several movable dielectric disks in a dipole magnetic field. The Primakoff effect leads to the emission of radio-frequency photons at the surfaces of these disks, the energy of which depends on the axion mass. These emissions are coherent due to typically very small masses (MADMAX will be sensitive around ∼ 100 μeV) of the cold axions and correspondingly huge deBroglie wavelengths of a scale bigger than the size of the experiment. The photons can interfere constructively and be resonantly enhanced by strategically placing the disks. Through adjusting disk positions, the signal enhancement can be shifted to a different frequency, i.e. axion mass, and a big parameter space can be covered.
For the MADMAX experiment the expected signal power at a specific frequency adheres to the following formula:
P(ω) dω = (ρ_a/m_a^2) g_aγ^2 B_e^2 A β^2(ω) (q_e/ħ) ×√(2/π) ( v(ω)/(σ_v v_lab) ) exp( -(v(ω)^2 + v_lab^2)/(2 σ_v^2) ) sinh( v(ω) v_lab/σ_v^2 ) dv(ω)
The first part (before the ×) determines height and position of the signal peak. The position depends exclusively on the axion mass m_a, which we consider free in the frequency range after background subtraction. The height depends on multiple theoretical and experimental parameters: ρ_a is the local axion density, which we fix to the canonical value of ρ_a = 0.3 GeV cm^-3, effectively assuming homogeneous dark matter made exclusively out of axions. We also fix the experimental parameters external magnetic field B_e = 10 T and disk surface area A = 1 m^2. The power boost factor β^2(ω) = 5× 10^4 generally depends on frequency; for simplicity we set it constant. We expect it to vary only negligibly on the scale of the axion signal width. This leaves us with the axion-photon coupling g_aγ as the only free parameter determining the integrated axion power. It depends on the anomaly ratio E/N as the only free parameter:
g_aγ = α/(2π f_a) ( E/N - 1.92 ) .
α is the electromagnetic fine structure constant and f_a the axion decay constant, which is linearly related to the axion mass <cit.>. In general one should combine the prior knowledge for all parameters mentioned above, however we will only consider the most general available anomaly ratio expectation for QCD axion models for now <cit.>.
The second part of Eq. <ref> determines the shape of the signal peak. The frequency ω at which the axion can be detected depends on its total energy, so axions with different relative velocities v(ω) with respect to the laboratory can be detected at different frequencies. The dark matter velocities are assumed to follow a Maxwell-Boltzmann distribution with velocity dispersion σ_v = 218±6 km s^-1 <cit.>. Earth is moving through the dark matter halo with a relative velocity of v_lab = 242 km s^-1 with significant seasonal variation <cit.>. Because we do not want to consider seasonal variations here, we move this variation into the error of the dark matter velocity distribution and model it as a Gaussian. Basically, the second part of Eq. <ref> is the probability density function of a Maxwellian velocity distribution boosted by v_lab. To obtain the observable integrated power in a frequency bin we have to integrate the above formula over one bin.
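A sketch of this velocity-distribution factor, with the symbols and numerical values as given above (in SI units); the overall power prefactor and the per-bin integration are omitted.

import numpy as np

def boosted_maxwellian_pdf(v, sigma_v=218e3, v_lab=242e3):
    # Lab-frame speed distribution of a Maxwell-Boltzmann halo boosted by v_lab.
    return (np.sqrt(2.0 / np.pi) * v / (sigma_v * v_lab)
            * np.exp(-(v**2 + v_lab**2) / (2.0 * sigma_v**2))
            * np.sinh(v * v_lab / sigma_v**2))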
This signal sits on top of a dominant background determined by the exact characteristics of the MADMAX receiver chain. The drop off both at low and high frequencies is caused by the bandpass filter employed. The variations seen in grey in Fig. <ref> are exceedingly difficult to model ab-initio: Multiple components in the whole system act as correlated noise sources, the emissions of which interferes with all other sources due to the reflectivity of the system and a big coherence length at the microwave frequencies used. While all of these emissions can in theory be estimated, propagation of uncertainties in every single one of them would introduce errors in the background model of a much bigger scale than the axion signal power or the statistical noise on top of the background. A challenging but possible way to calibrate the background would be to make each measurement with the magnet turned off (no axion peak visible), the magnet turned on (axion signal visible) and subtracting the two afterwards. However this is infeasible due to added costs and increasing the runtime by a factor > 2. Without a background model it is impossible to fit background and signal simultaneously. The only crucial requirement we set on the background is to not include fluctuations of the same width in frequency space as the axion signal.
§.§ Savitzky-Golay Filters
As stated above, obtaining a parametric background model for MADMAX data is likely not feasible. We thus have to rely on a non-parametric background estimator, suitable candidates being Savitzky-Golay filters.
Savitzky-Golay (SG) filters are a well-established and powerful smoothing technique often used for reducing noise in evenly spaced data while preserving the underlying smooth features <cit.>. The filter works by fitting low-degree polynomials to overlapping windows of data points using the method of least squares. The output of the filter is the value of the fitted polynomial at the central point of each window. The parameters of the filter are the windows length and degree of the polynomials. If these are chosen well for the given data, then SG filters can often suppress higher-frequency noise substantially while preserving the lower-frequency shape of the input. Improved versions and alternatives to SG filters have been proposed <cit.>, but in our example use case we find that classical SG filters perform very well.
An SG filter can be expressed compactly using matrices. Given a data vector X of length N, we can construct a smoothed version f_(X) using a convolution with the filter coefficients C:
f_(X) = X ∗ C.
The filter coefficients C can be found by solving a linear least-squares problem. Let A be a matrix of size M × (n+1), where M is the (odd) window size and n is the polynomial degree. Each element of A is defined as:
A_ij = (i - (M-1)/2)^j,
where i = 0, 1, ..., M-1 and j = 0, 1, ..., n. We can find the filter coefficients C by solving the following linear least-squares problem:
C = (A^T A)^-1 A^T
The first row of the resulting matrix C contains the filter coefficients. Note that these coefficients are computed only once for a given window size and polynomial degree and can be used to filter the entire data vector X, effectively circumventing the fitting problem and resulting in high numerical performance.
SG filters are finite impulse response (FIR) filters and applied by convolution of the coefficients C and the input data vector X. They are therefore inherently linear and satisfy the condition in Eq. <ref>.
By applying the SG filter to the axion haloscope data, we separate the axion signal and noise from the background, which has lower frequency characteristics. We do this without constructing a parametric model for the background. But as the signal, in contrast to the noise, is positive and not zero-symmetric, a fraction of the signal becomes part of the background estimate. This results in a bias that we need to correct for.
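A small self-contained illustration with stand-in shapes (not the actual MADMAX background model) of how such a filter is applied and of its linearity, using scipy's savgol_filter:

import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
n = 25001
background = 1e-20 * np.exp(-np.linspace(-2, 2, n) ** 2)                       # smooth stand-in background
signal = 3e-23 * np.exp(-0.5 * ((np.arange(n) - 12000) / 5.0) ** 2)            # narrow stand-in peak
noise = rng.normal(scale=5e-24, size=n)
x = background + signal + noise

f_bg = lambda y: savgol_filter(y, window_length=221, polyorder=4)

# Linearity: filtering the sum equals the sum of the filtered parts (up to float rounding).
residual = f_bg(x) - (f_bg(signal) + f_bg(background + noise))
print("max linearity violation:", np.abs(residual).max())

x_prime = x - f_bg(x)   # background-reduced data, still containing a distorted signal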
§.§ Demonstrating the Procedure
To test our bias correction approach on the problem stated above, we simulated 1000 MADMAX-like mock-datasets. A few of these are shown as grey lines in Fig. <ref>. We then analysed these datasets with and without bias correction. For our analysis we chose a Bayesian approach, however the bias correction procedure can be applied to (and is indeed also necessary for) other inference methods, e.g., a maximum likelihood estimate of the signal parameters.
The datasets consist of 25001 datapoints with 2 kHz spacing. Each dataset has three components (see Fig. <ref>):
* Signal. An axion signal following the shape of Eq. <ref>. We keep the remaining parameters fixed and draw m_a, σ_v, and E/N from their priors. For E/N we additionally exclude models with 1.06 < E/N < 2.78 to ensure that the signals are detectable at the noise level given below. So we'll consider the scenario where an axion signal has been discovered but its quantitative properties have not been inferred yet.
* Noise. Uncorrelated Gaussian background noise with standard deviation σ = 5 × 10^-24 W, corresponding to a realistic integration time below two weeks assuming noise temperatures below 10 K.
* Background. Correlated, non-thermal background. Its shape is modelled by the following formula:
b(f) = 10^-20 [ erf( (f - f_0)/(5 MHz) ) (f_0/f)^3
+ exp( -( (f - 25 MHz (1+r/15)) / (20 MHz (1+r/10)) )^2 ) ]
+ 4 × 10^-22 (1+r) sin( (f + r f_0)/(2.5 MHz) )
+ 5 × 10^-24 [ (1+r) sin( (f + r f_0)/(0.25 MHz) )
+ (1+r) sin( (f + r f_0)/(0.1 MHz) ) ],
where f_0 = 4.218 MHz and f are relative frequencies and all r are independent random variables drawn from a Gaussian 𝒩(μ=0,σ=1). The first three lines of Eq. <ref> determine the large-scale shape of the background, but are easy to distinguish from an axion signal with a FWHM of roughly 10 kHz. We therefore introduce two sine-like components with random phase, amplitudes of order of the uncorrelated noise and fixed periods of 100 kHz and 250 kHz.
The relevant parameters for our analysis are summarised in Tab. <ref>. We filter the data consisting of these three components with a fourth-order SG filter of a width of 221 datapoints, cutting away the first and last 110 due to boundary effects of the filter. Subtracting the filtered from the raw data removes the third, correlated background component almost completely, but also slightly distorts the signal shape, as shown below. The parameters of the SG fit were chosen to yield an optimal reduction of the background while leaving signal and noise almost unchanged.
Since we treat the amplitude of the uncorrelated noise as unknown, we have to infer it from the data. If we simply took the standard deviation of the background-reduced data, the presence of a signal would bias the noise estimate towards higher values. To prevent the signal from biasing our noise-level estimation, we first use the SG filter to remove a background estimate from the data. Then we partition each spectrum into three frequency regions of equal size: the localised signal can only be present in up to two of the pieces simultaneously. We select the region with the smallest standard deviation for our noise-level estimation. This removes the bias caused by the presence of the signal. In our test case the inferred noise level has a negligible difference compared to the ground truth.
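The resulting noise-level estimator can be sketched as follows (f_bg again denotes the SG background estimate):

import numpy as np

def estimate_noise_level(x, f_bg):
    # Subtract the background estimate, split into three equal regions,
    # and take the smallest standard deviation (the signal occupies at most two regions).
    residual = x - f_bg(x)
    regions = np.array_split(residual, 3)
    return min(r.std() for r in regions)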
For Monte Carlo analysis we use the reactive nested sampling algorithm <cit.> via the Bayesian Analysis Toolkit in Julia (BAT.jl)<cit.>.
§.§ Results
Fig. <ref> shows the result for one of the datasets. We demonstrate the effect of the bias correction qualitatively using 68 percentile central posterior intervals. When the uncorrected signal model is used to fit the signal peak (Fig. <ref>, top left), the effect of the background reduction via SG filter cannot be modelled. Due to the presence of the signal, the background around the signal is overestimated, leading to a systematic decrease in signal height and adjacent datapoints below the baseline. A peak-fit is perfectly capable of fitting this modified signal, but cannot fit the surrounding datapoints - and therefore does not retain the true signal parameters. We will see this in the following.
The bottom-left plot in Fig. <ref> shows the same, but with a corrected fit on the signal peak that takes the effect of SG filtering into account. We obtain a good fit over the whole frequency range, which displays the characteristic effect of an SG filter on the signal. Unbiased signal parameters can be obtained based on this fit, as Fig. <ref>, top right shows. The actual, non-filtered and background free signal peak is fitted well by the posterior predictive central interval of the corrected peak-fit on which no SG filter has been applied. The uncorrected peak-fit however underestimates signal height and width. This underestimation is made more visible in Fig. <ref>, bottom right, where the deviation relative to the true signal is shown in comparison with the noise level, both for the corrected and the uncorrected peak-fit. While the 68 percentile of the corrected fit includes the true signal over almost the full frequency range, the uncorrected fit displays significant deviation.
We perform a coverage test to show that the analysis infers the correct parameters. From a Frequentist perspective, repeating an unbiased analysis multiple times should lead to the true signals being uniformly distributed over all marginal posterior quantiles. As we do have access to the ground truth of the signal parameters (using simulated data), and as we have 1000 equivalent mock-datasets at our disposal, we can verify our Bayesian results in this manner. The outcome is shown in Fig. <ref>. The uncorrected peak-fit that does not take the effect of the SG filter into account systematically underestimates E/N and σ_v. It also shifts the axion mass to slightly larger values. The corrected fit, in comparison, shows no significant deviation from the expected uniform distribution and can therefore be considered unbiased.
§ SUMMARY
We presented a method to fit small-amplitude signals on top of an unparameterised background in an unbiased fashion. The method is based on fairly weak assumptions about the problem, making it applicable in a wide variety of scenarios: we mainly require the existence of an unbiased and linear background estimator and an a-priori parameterisation of the signal. This enables us to virtually incorporate the background estimator into the measurement process and reduces the problem to modelling a background-free signal in the presence of symmetric noise, i.e., noise that has an expectation value of zero.
To evaluate the method in a practical real-world application, we applied it to signal estimation on simulated data for the planned MADMAX axion haloscope. The MADMAX experiment aims to detect a small, peaked axion dark matter signal in the presence of a challenging radio-frequency background that is difficult to model from first principles, so it is a very suitable candidate for the presented approach. For 1000 MADMAX-like mock datasets we used Savitzky-Golay filters as background estimators and inferred the signal parameter in a Bayesian fashion using nested sampling. The results verify the bias-correction approach presented here empirically and show that the signal parameter estimates are indeed unbiased. The true signal parameters are within the 68% central posterior predictive limits almost everywhere and the deviations of inferred parameters from the ground truth fell within the range of expected statistical fluctuations.
For comparison, we performed the signal parameter estimation without bias correction. Here the results do indeed show significant systematic deviations of the inferred signal parameters from the ground truth. The bias correction method is thus both effective and necessary.
The authors would like to thank Allen Caldwell and Frank Steffen for helpful comments on the manuscript. Jakob Knollmüller acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany´s Excellence Strategy – EXC 2094 – 390783311.
|
http://arxiv.org/abs/2306.05905v1
|
20230609140126
|
TreeDQN: Learning to minimize Branch-and-Bound tree
|
[
"Dmitry Sorokin",
"Alexander Kostin"
] |
cs.LG
|
[
"cs.LG",
"math.OC"
] |
Combinatorial optimization problems require an exhaustive search to find the optimal solution. A convenient approach to solving combinatorial optimization tasks in the form of Mixed Integer Linear Programs is Branch-and-Bound. A Branch-and-Bound solver splits a task into two parts by dividing the domain of an integer variable, then solves them recursively, producing a tree of nested sub-tasks. The efficiency of the solver depends on the branching heuristic used to select a variable for splitting. In the present work, we propose a reinforcement learning method that can efficiently learn the branching heuristic. We view the variable selection task as a tree Markov Decision Process, prove that the Bellman operator adapted for the tree Markov Decision Process is contracting in mean, and propose a modified learning objective for the reinforcement learning agent. Our agent requires less training data and produces smaller trees compared to previous reinforcement learning methods.
§ INTRODUCTION
Practical applications in multiple areas such as logistics <cit.>, portfolio management <cit.>, manufacturing <cit.>, and others share a combinatorial structure. Finding the optimal solution for a combinatorial task requires an exhaustive search over all valid combinations of variables. The optimal solution for a combinatorial problem formulated as a Mixed Integer Linear Program <cit.> can be efficiently obtained with Branch-and-Bound algorithm (B&B) <cit.>. B&B algorithm employs divide-and-conquer approach. At each step, it splits the domain of one of the integer variables and eliminates paths that can not lead to a feasible solution. The performance of the B&B algorithm depends on two sequential decision-making processes: variable selection and node selection. Node selection picks the next node in the B&B tree to evaluate, and variable selection chooses the next variable to split on. The variable selection process is the most computationally expensive and crucial for the performance of the whole algorithm. The optimal variable selection method, frequently dubbed as branching rule, will lead to smaller trees and a more efficient solver. Although the optimal branching rule is not known <cit.>, all modern solvers implement human-crafted heuristics, which were designed to perform well on a wide range of tasks <cit.>. At the same time, practitioners frequently solve the same task with different parameters, so the branching rule adapted to a specific distribution of tasks may lead to a significant performance boost and business impact. The branching rule is applied sequentially to minimize the resulting tree size, which resembles the reinforcement learning paradigm in which an agent interacts with the environment to maximize the expected return. Recently reinforcement learning achieved state-of-the-art results in a diverse set of tasks, from beating world champions in the games of Go <cit.> and Dota2 <cit.> to aligning optical interferometer <cit.>, controlling nuclear fusion reactor <cit.>, tuning hyperparameters of simulated quantum annealers <cit.> and optimizing output of large language models <cit.>.
Previous works <cit.> introduced tree Markov Decision Process (tree MDP). In the tree MDP, instead of a single next state agent receives multiple next states — descendant nodes of the current tree node. The value function of the current node is the sum of reward and value functions in the child nodes. Work <cit.> showed that a reinforcement learning agent trained to minimize each sub-tree minimizes the whole B&B tree. We follow this approach and develop a sample efficient off-policy reinforcement learning method adapted for a tree MDP. Tree sizes produced by the B&B algorithm usually have a long-tailed distribution, even for hand-crafted branching heuristics. To overcome this challenge, we adapt the loss function to the distribution of tree sizes. As a result, our agent learns more stable, produces smaller trees, and is more sample efficient than the previous RL methods. Our contribution is the following:
* We prove that the Bellman operator in tree MDP is contracting in mean.
* We modify the learning objective to optimize the geometric mean.
* We propose a novel reinforcement learning algorithm — TreeDQN.
§ BACKGROUND
§.§ Mixed Integer Linear Programming
A Mixed Integer Linear Program (MILP) is a non-convex optimization problem of the form:
min{ c^⊤ x | A x ≤ b, x ∈ [l, u], x ∈ℤ^m ×ℝ^n-m },
where c ∈ℝ^n is the objective coefficient vector, b ∈ℝ^m the constraint right-hand side, A ∈ℝ^m × n the constraint matrix, l,u ∈ℝ^n the lower and upper bound vectors, and m ≥ 1 a parameter. The method of choice to find the optimal solution of a MILP is Branch-and-Bound <cit.>. The B&B algorithm builds a tree of nested MILP sub-problems with non-overlapping feasibility sets. The root of the tree is the original problem. At each node, B&B splits the domain of one of the integer variables into two halves, which produces two sub-problems. The sub-problems differ in their feasibility sets but share the same objective. For each sub-problem, the algorithm computes a lower bound as the optimal solution of the relaxed problem with the integrality constraints removed, and a global upper bound, which denotes the best integer feasible solution known so far. B&B uses these bounds to enforce efficiency by pruning the tree. When created, all tree nodes are marked as open. The algorithm visits every open node and either marks it as fathomed, if the corresponding MILP is infeasible or the lower bound is higher than the global upper bound, or splits it by an integer variable, until it finds the optimal solution. Fathoming every open leaf guarantees that B&B eventually finds the best integer-feasible solution. An example of a B&B tree is shown in Fig. <ref>. The resulting efficiency of the algorithm depends on the node selection strategy, which arranges the open leaves for visiting, and the branching rule, which selects an integer variable for splitting.
Practical implementations of the Branch-and-Bound algorithm in the SCIP <cit.> and CPLEX <cit.> solvers rely on heuristics for node selection and variable selection. A straightforward strategy for node selection is Depth-First-Search (DFS), which aims to find some integer feasible solution quickly in order to prune branches that do not contain a better solution. In the SCIP solver, the default node selection heuristic tries to estimate the node with the lowest feasible solution. One of the best-known general heuristics for variable selection is Strong Branching. It is a tree-size efficient but computationally expensive branching rule <cit.>. For each fractional variable with an integrality constraint, Strong Branching computes the lower bounds for the left and right child nodes and uses them to choose the variable.
§.§ Tree MDP
The variable selection process employed by the Branch-and-Bound algorithm can be considered a tree Markov Decision Process with limitations discussed in <cit.>. To enforce Markov property, one may either choose Depth First Search as node selection strategy <cit.>, or set the global upper bound in the root node equal to optimal solution <cit.>. The state of tree MDP is the current node of the Branch-and-Bound tree, action is the fractional variable chosen for splitting, and next states are descendent nodes. The main difference between tree MDP and temporal MDP is that in tree MDP agent receives multiple next states — children nodes of the current node. Value function V(s) for a tree MDP is defined as follows:
V^π(s_t) = r(s_t, a, s_t+1^±) + V^π(s_t+1^+) + V^π(s_t+1^-),
where s_t is the current node of the tree, s_t+1^+ and s_t+1^- denotes its left and right child respectively. The goal of the agent is to find a policy π which would maximize the expected return. For instance, if the reward equals -1, the value function predicts the expected tree size with a negative sign. Hence, the agent maximizing the expected return would minimize the tree size.
During testing, the optimal solution for the task at hand is unknown and can not be used to set the global upper bound, which leads to a gap between training and testing environments. More efficient heuristics for node selection also induce a gap for an agent trained with DFS node selection strategy. This gap is often considered moderate and does not affect performance significantly.
§ RELATED WORK
For the first time statistical approach to learning a branching rule was applied in <cit.>. Authors used SVM <cit.> to predict the variable ranking of an expert for a single task instance.
Later works <cit.> and <cit.> proposed methods based on Graph Convolutional Networks (GCNN) <cit.> to find an approximate solution of combinatorial tasks. In <cit.>, authors used the same neural network architecture to imitate Strong Branching heuristic in sophisticated SCIP solver <cit.>. The imitation learning agent can not produce trees shorter than the expert, however, it solves the variable selection task much faster, especially if running on GPU, thereby speeding up the whole B&B algorithm significantly.
In <cit.>, authors investigate the choice of the model architecture and propose a hybrid model which combines the expressive power of GCNN with the computational efficiency of multi-layer perceptrons. Despite the time performance increase, imitation learning agents can not lead to better heuristics.
A more promising direction is to learn a variable selection rule for the Branch-and-Bound algorithm with reinforcement learning. In this approach, we will keep the guarantees of the B&B method to find an optimal solution and possibly speed up the algorithm significantly by optimal choices of branching variables. A natural minimization target for an agent in the B&B algorithm is the size of the resulting tree. One of the main challenges here is to map the variable selection process to the MDP and preserve Markov property. In the B&B search trees, the local decisions impact previously opened leaves via fathoming due to global upper-bound pruning. Thus the credit assignment in the B&B is biased upward, which renders the learned policies potentially sub-optimal. In work <cit.>, authors propose Fitting for Minimizing the SubTree Size algorithm to learn the branching rule. In their method agent plays an episode until termination and fits the Q-function to the bootstrapped return. They used the DFS node selection strategy to enforce MDP property during training. Following this idea in work <cit.>, authors introduce the tree MDP framework and prove that setting the global upper bound to the optimal solution for a MILP is an alternative method to enforcing MDP property. They derive policy gradient theorem for a tree MDP and evaluate REINFORCE-based agent on a set of challenging tasks similar to <cit.>. In both works <cit.> and <cit.>, to update an agent, one needs to run an episode until the end to obtain a cumulative return. Our work improves their approaches in terms of sample efficiency and agent performance.
§ OUR METHOD
In the present work, we use an open-source implementation of the Branch-and-Bound algorithm in SCIP solver version 8.0.1. Our environment utilizes Ecole <cit.> 0.8.1 package, which provides an interface for learning a variable selection policy. The variable selection environment has three characteristic properties which distinguish it from an ordinary reinforcement learning environment. First of all, finding the optimal solution of a MILP is computationally demanding, which requires sample efficient learning methods. Second, the decision-making process is a tree MDP instead of a temporal MDP. Third, the distribution of tree sizes, even for the Strong Branching heuristic, has a long tail, as shown in Fig. <ref>.
To reliably benchmark the average performance of different branching rules, previous works <cit.> used the geometric mean of the final tree size as a comparison metric. This metric is more stable in the case of long-tailed distributions than the arithmetic mean. Hence, the successful reinforcement learning method should have the following properties:
* Off-policy.
* Work with tree MDP instead of temporal MDP.
* Optimize geometric mean of expected return.
§.§ Contraction in mean
From the theoretical point of view, reinforcement learning methods converge to an optimal policy due to the contraction property of the Bellman operator <cit.>. To apply RL methods for tree MDP, we need to justify the contraction property of the tree Bellman operator.
Theorem 4.1 Tree Bellman operator is contracting in mean.
Bellman operator for a tree MDP is defined similarly to a temporal MDP:
T^π(V(s)) = r(s, π(s)) + γ[ p_+V(s^+) + p_-V(s^-) ]
Contraction in mean was discussed, for example, in <cit.>. Here, we consider an operator T to be contracting in mean if:
‖TV - TU‖_∞ ≤ p ·‖V - U‖_∞,
p < 1,
where the infinity norm is defined by:
‖V - U‖_∞ = max_s ∈𝕊|V(s) - U(s)|
We will assume that the probability of having a left (p_+) and a right (p_-) child does not depend on the state. This assumption is close to the B&B tree pruning process, where the pruning decision depends on the global upper bound instead of the parent node. Using the definition of tree Bellman operator (<ref>) and the definition of the infinity norm (<ref>) we derive the following inequality:
‖T^πV - T^πU‖_∞ = γ‖ p_+V(s^+) + p_-V(s^-) - p_+U(s^+) - p_-U(s^-) ‖_∞≤
γmax_s ∈𝕊[ p_+|V(s^+) - U(s^+)| + p_-|V(s^-) - U(s^-)| ] ≤
γ (p_+ + p_-)max_x ∈𝕊|V(x) - U(x)|
The proof follows from the above inequality and the observation that the tree is finite, i.e., (p_+ + p_-) < 1.
§.§ Loss function
Reinforcement learning methods generally regress the expected return with the mean squared error (MSE) loss function, thereby optimizing the prediction of the arithmetic mean. In the case of widely spread return distributions, we propose to use the mean squared logarithmic error (MSLE) instead. For a variable y and targets t_i, the loss L(y, t) is defined as follows:
L(y, t) = MSE(log(|y|), log(|t|)) = 1/N∑_i(log(|y|) - log(|t_i|))^2
Since log(|y|) = 1/N∑_i log(|t_i|) minimizes the MSE function, the optimal value for y equals the geometric mean |y| = exp(1/N ∑_i = 1^N log(|t_i|)). Thus, the agent trained with loss (<ref>) will be optimized to predict the geometric mean of the expected return.
In our experiments, we use a Graph Convolutional Neural Network with the activation f = -exp(·) applied to the output layer, which allows our agent to approximate a wide range of Q-values. For this activation function, we can implement the loss function (<ref>) in a numerically stable way using the logits before activation.
Hence, the proposed loss function serves two purposes simultaneously: it optimizes the target value, the geometric mean of the expected return, and it stabilizes the learning process.
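A sketch of this loss in PyTorch, assuming the network outputs the logit of the Q-value (i.e. Q = -exp(logit)), so that log|Q| is available without exponentiation:

import torch
import torch.nn.functional as F

def msle_loss(q_logits, target_return):
    # MSLE for Q = -exp(logit): log|Q| equals the logit, so no exponentiation is needed.
    log_abs_target = torch.log(target_return.abs().clamp_min(1e-8))
    return F.mse_loss(q_logits, log_abs_target)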
§.§ TreeDQN
Our method, which we dub TreeDQN (<ref>), is based on the Double Dueling DQN <cit.> algorithm adapted for a tree MDP (<ref>). According to Theorem 4.1, the Bellman operator for a tree MDP is contracting in mean. Hence, we can adapt DQN to minimize the tree difference error instead of the temporal difference error. By sampling previous observations from experience replay, we significantly improve the sample efficiency of our method in contrast to on-policy methods <cit.>. We also use the loss function (<ref>), which greatly improves learning stability.
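A sketch of the corresponding tree difference target, assuming per-action Q-values of the target network are available for both children; for brevity the plain max is shown rather than the double-DQN evaluation, and fathomed (missing) children are masked out:

import torch

@torch.no_grad()
def tree_td_target(reward, q_left, q_right, done_left, done_right, gamma=1.0):
    # Tree Bellman target: r + gamma * (V(s_left) + V(s_right)),
    # where a fathomed child contributes zero via its done mask.
    v_left = (1.0 - done_left) * q_left.max(dim=-1).values
    v_right = (1.0 - done_right) * q_right.max(dim=-1).values
    return reward + gamma * (v_left + v_right)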
Observation. We use state representation in the form of a bipartite graph provided by Ecole <cit.>. In this graph, edges correspond to connections between constraints and variables with weight equal to the coefficient of the variable in the constraint. Each variable and constraint node is represented by a vector of 19 and 5 features, respectively.
Actions. The agent selects one of the fractional variables for splitting. Since the number of fractional variables decreases during an episode, we apply a mask to choose only among available variables.
Rewards. At each step, the agent receives a negative reward r = -1. The total cumulative return equals the resulting tree size with a negative sign.
Episode. In each episode, the agent solves a single MILP instance. We limit the solving time for one task instance during training to 10 minutes and terminate the episode if the time is over.
§.§ Training
In our experiments, we use a set of NP-hard tasks, namely Combinatorial Auction <cit.>, Set Cover <cit.>, Maximum Independent Set <cit.>, Facility Location <cit.> and Multiple Knapsack <cit.>. To test the generalization ability of our agent, we evaluate the trained agent twice: (1) on the test instances from the training distribution and (2) on the large instances from the transfer distribution. Tab. <ref> shows the parameters of the test and transfer distributions.
We use the same set of hyperparameters (see Tab. <ref>) to train our agent for each task distribution. Total training time did not exceed 28 hours on an Intel Xeon 6326, NVIDIA A100 machine. To select the best checkpoint for testing, we perform validation using 30 fixed task instances every 5000 updates. The validation plot in Fig. <ref> shows the geometric mean of tree sizes as a function of the number of updates. We see that during training the agent learns to solve variable selection tasks better, generating smaller B&B trees.
§ EVALUATION
In both the test and transfer settings, we generate 40 task instances and evaluate our agent with five different seeds. We compare the performance of our TreeDQN agent with the Strong Branching rule, an Imitation Learning agent (IL), and a REINFORCE agent (tMDP+DFS, <cit.>). Our agent is based on a sample-efficient off-policy algorithm and requires much less training data than the REINFORCE agent. The number of episodes needed to reach the best checkpoint for the TreeDQN and REINFORCE agents is shown in Tab. <ref>.
The evaluation results are presented in Tab. <ref> and Tab. <ref>. We report the geometric mean of the final tree size together with the standard deviation computed across seeds for the same instance and averaged over all task instances. Tab. <ref> shows that the TreeDQN agent significantly outperforms the REINFORCE agent on all test tasks. The TreeDQN agent is close to the Imitation Learning agent on the first four tasks and substantially outperforms Strong Branching on the Multiple Knapsack task.
The geometric mean is a reasonable metric for comparing the performance of different agents, but it loses information about the shape of the distributions. To further analyze the performance of our agent, we present the distributions of tree sizes as probability-probability plots (P-P plots) in Fig. <ref>. A P-P plot allows us to compare different cumulative distribution functions (CDF). For a reference CDF F and a target CDF G, the P-P plot is constructed similarly to the ROC curve: we choose a threshold x, move it along the domain of F, and draw points (F(x), G(x)). To show multiple distributions on the same plot, we use Strong Branching as the reference CDF for all of them. If one curve lies above another, the corresponding CDF is larger, so the associated agent performs better. Looking at the P-P plots in Fig. <ref>, we see that in the Combinatorial Auction task the TreeDQN agent performs better than the REINFORCE agent on all task instances and is close to the Imitation Learning agent. In the Multiple Knapsack task, the TreeDQN and REINFORCE agents outperform Strong Branching and Imitation Learning. The Maximum Independent Set task is a representative example: TreeDQN handles simple instances well, performing close to Imitation Learning, but as the instances become more complex it falls behind Imitation Learning and eventually behind REINFORCE. This behavior is a direct consequence of our learning objective: we optimize the geometric mean of the expected tree size, so complex task instances may have less influence on the learning process. P-P plots and arithmetic means for all test tasks are shown in Appendix <ref>, Fig. <ref> and Tab. <ref>.
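The P-P curves described above are straightforward to reproduce from the raw tree sizes; a minimal NumPy sketch (names are ours) is:

```python
import numpy as np

def pp_curve(reference: np.ndarray, target: np.ndarray, n_points: int = 200):
    """Return the points (F(x), G(x)) of a P-P plot.

    reference: final tree sizes under the reference rule (Strong Branching).
    target:    final tree sizes under the agent being compared.
    """
    xs = np.linspace(min(reference.min(), target.min()),
                     max(reference.max(), target.max()), n_points)
    F = np.searchsorted(np.sort(reference), xs, side="right") / len(reference)
    G = np.searchsorted(np.sort(target), xs, side="right") / len(target)
    return F, G

# A curve lying above the diagonal means the agent solves more instances
# within any given tree-size budget than the reference rule.
```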
Besides testing the performance of our agent, we also study its ability to generalize. Table <ref> presents evaluation results for the more complex transfer tasks solved with five different seeds. Since solving complicated MILP problems is time-consuming, we limit the maximum number of nodes in a B&B tree to 200'000. The number of transfer tasks terminated by this node limit is shown in Appendix <ref>, Tab. <ref>. Tab. <ref> shows that in the Set Cover, Facility Location, and Multiple Knapsack tasks our TreeDQN agent transfers well and performs significantly better than the REINFORCE agent. In the Combinatorial Auction task, both RL agents perform similarly given the standard deviation. In the Maximum Independent Set task, the TreeDQN agent falls behind the REINFORCE agent since it is better adapted to simple task instances, as seen from the P-P plot <ref>.
§ ABLATION STUDY
Our modified learning objective prevents gradient explosions and significantly stabilizes the training process. In this section, we perform an ablation study and compare our agent with a TreeDQN agent trained with the standard MSE loss. The results are shown in Tab. <ref>. For the majority of the tasks, the agent trained with the modified loss function achieves a lower geometric mean of the final tree size.
§ LIMITATIONS AND SOCIAL IMPACT
A data-driven approach to learning a variable selection heuristic for combinatorial optimization tasks is a promising direction. High-quality heuristics can speed up combinatorial solvers significantly while keeping the guarantee of obtaining the exact solution. Nevertheless, this approach has certain limitations. First, the learned heuristic depends on a concrete implementation of the Branch-and-Bound algorithm, so if the next version of the solver updates the algorithm, the heuristic will need to be retrained to prevent performance degradation. Second, our learning objective is tailored to the geometric mean of the distribution of tree sizes, so our approach could be less efficient for another metric. The final limitation is common to all data-driven methods: if the test task distribution differs from the training one, the performance of the whole B&B algorithm may degrade dramatically.
Combinatorial optimization problems arise in many areas of life, so a method that computes optimal solutions faster can have a significant positive impact. However, there can be side effects. Foremost, learning-based methods are validated statistically, but they may fail on a particular task instance; hence, they should be used carefully in performance-critical applications. Also, malicious users could apply combinatorial solvers to attack hashes and steal sensitive information. While current hashing algorithms appear hard enough, more advanced combinatorial solvers may require the development of stronger hashing algorithms.
§ CONCLUSION
This paper presents a novel data-efficient deep reinforcement learning method to learn a branching rule for the Branch-and-Bound algorithm. The synergy of an exact solving algorithm and a data-driven heuristic takes advantage of both worlds: guarantees to compute the optimal solution and the ability to adapt to specific tasks. Our method utilizes the tree MDP formulation and the contraction property of the tree Bellman operator. It maps MILP solving to an episode for our RL agent and trains the agent to optimize the final metric, the resulting size of the B&B tree. We have proposed a modified learning objective that stabilizes the learning process in the presence of high-variance returns. Our approach surpasses previous RL methods on all test tasks. The code is available at https://github.com/dmitrySorokin/treedqn. In the future, we are interested in studying multitask branching agents and the limits of generalization to more complex task instances.
We highly appreciate the help of Ivan Nazarov, who participated during the early stages of the present research. We discussed multiple ideas, some of which further evolved into the present paper. We also thank him for providing the concept of P-P plots. We thank Artyom Sorokin for the fruitful discussion of Bellman operators.
§ APPENDIX
|
http://arxiv.org/abs/2306.04011v1
|
20230606205044
|
Direct Observation of Landau Levels in Silicon Photonic Crystals
|
[
"Maria Barsukova",
"Fabien Grisé",
"Zeyu Zhang",
"Sachin Vaidya",
"Jonathan Guglielmon",
"Michael I. Weinstein",
"Li He",
"Bo Zhen",
"Randall McEntaffer",
"Mikael C. Rechtsman"
] |
physics.optics
|
[
"physics.optics",
"cond-mat.mes-hall"
] |
Department of Physics, The Pennsylvania State University, University Park, PA, USA
These authors contributed equally
Department Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA, USA
These authors contributed equally
Department of Physics, The Pennsylvania State University, University Park, PA, USA
These authors contributed equally
Department of Physics, The Pennsylvania State University, University Park, PA, USA
Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, USA
Department of Physics, The Pennsylvania State University, University Park, PA, USA
Department of Applied Physics and Applied Mathematics and Department of Mathematics, Columbia University, New York, NY, USA
Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, USA
Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, USA
Department Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA, USA
Department of Physics, The Pennsylvania State University, University Park, PA, USA
[email protected]
We experimentally observe photonic Landau levels that arise due to a strain-induced pseudomagnetic field in a silicon photonic crystal slab. The Landau levels are dispersive (i.e., they are not flat bands) due to the distortion of the unit cell by the strain. We employ an additional strain which induces a pseudoelectric potential to flatten them.
Direct Observation of Landau Levels in Silicon Photonic Crystals
Mikael C. Rechtsman
July 31, 2023
================================================================
When electrons are confined to a two-dimensional plane and are subjected to an out-of-plane magnetic field, they move in circular cyclotron orbits as a result of the Lorentz force. In the quantum domain, this cyclotron motion is quantized, and as a consequence, the electrons' energy spectrum splits into discrete, highly degenerate states called Landau levels. The integer and fractional quantum Hall effects <cit.> arise as a direct result; in the fractional case, it is the high degeneracy of Landau levels (i.e., that they are flat bands) that gives rise to effectively strong electron-electron interactions and leads to the fractionalization of charge.
In free space, photons do not respond to external magnetic fields because they do not carry charge; yet, when propagating in magneto-optical materials, they may respond indirectly as a result of the material's magnetic response. However, this response is weak at optical frequencies. In 2012, an approach was put forward for emulating magnetic behavior in photonic systems by inhomogeneously straining a photonic lattice. <cit.>. This implementation was based on an idea proposed for electrons in graphene, where a strain pattern imposed on the lattice would introduce an effective gauge field at the Dirac point, causing electrons to behave as though there were a strong field present, even in the absence of a real magnetic field <cit.>. The effect was later demonstrated by directly observing Landau levels in graphene bubbles, where a strain corresponding to an enormous `pseudomagnetic' field of 300T was imposed <cit.>. Since the original photonic experiment, Landau levels were also proposed and observed in exciton-polariton condensates <cit.> and in mechanical systems <cit.>. Moreover, there have been a number of theoretical proposals for how Landau levels may be used in the context of photonics that are intrinsically distinct from the electronic case <cit.>.
Here, we directly observe Landau levels in two-dimensional silicon photonic crystal slabs in the nanophotonic domain. Moreover, we go beyond purely pseudomagnetic effects and demonstrate that strains corresponding to pseudoelectric fields act to flatten the Landau levels that inherit dispersion from the form of the pseudomagnetic strain. There are several key differences and advantages of pseudomagnetism in photonic crystals compared to previous realizations of photonic pseudomagnetism. First, photonic crystals have been demonstrated to enhance light-matter interaction via cavity modes and flat bands <cit.>. This enhancement is generated as a result of the lattice. In contrast, for systems composed of individual, isolated guiding or resonant elements (as in Refs. <cit.>), lattice effects are not leveraged because strong enhancement would occur even in a single site. Second, besides having unit cells that are an order of magnitude smaller, photonic crystals can in practice have much larger system sizes compared to previous realizations (millions compared to hundreds of unit cells), and can be realized with smaller loss in the silicon platform. Since Landau level degeneracy scales with system size and the linewidth increases with loss, photonic crystals allow for increased degeneracy and significantly improved spectral resolution of the levels.
Further, since photonic crystals do not have an associated tight-binding theory, the original theoretical framework relating strain to pseudomagnetism is not directly applicable, rendering a new understanding necessary; the appropriate effective Hamiltonians for strain-dependent emergent parameters for two-dimensional photonic crystals were derived in our previous theoretical work <cit.>, and are extended to the slab geometry here (2D slab embedded in 3D space).
Our establishment of a new analytical method for understanding and describing aperiodicity in photonic crystals (i.e., using pseudomagnetic fields) will be useful for optimizing them for many different functions; this has traditionally been approached by direct numerical optimization <cit.>.
Our starting point is a photonic crystal structure consisting of rounded triangular air holes in a silicon slab <cit.> that rests on a silica substrate. The holes form an underlying honeycomb pattern with C_6v-symmetry. As a result, this lattice hosts Dirac points at the 𝐊 and 𝐊^' points in the Brillouin zone <cit.>. As these Dirac points lie below the light line of vacuum, they are not detectable via free-space excitation. To allow radiative coupling from outside the slab, we introduce a small period-doubling perturbation by changing the size of some of the holes (more details can be found in Supplementary Information Section 2). This makes the unit cell of the lattice rectangular, and the band structure is folded such that the Dirac cone resides along the k_x axis and lies above the light line of vacuum. A scanning electron microscope image of the structure is shown in Fig. <ref>(a); the period-doubled unit cell is shaded in purple.
We numerically compute the band structure (in the transverse electric polarization) using the guided mode expansion method as implemented in the open-source software package Legume <cit.>. Fig. <ref>(b) shows the linearly-dispersing transverse electric (TE)-like bands that exhibit a Dirac point at 𝐊, with a frequency ω_ D=0.318 [2π ca^-1]. Here a is the lattice constant of the underlying hexagonal lattice structure and c is the speed of light. The period-doubling procedure very slightly changes the Dirac frequency (see Supplementary Information Section 2).
Next, we introduce a strain pattern in our structure by deforming the lattice as shown in Fig. <ref>(c). Here, the term strain refers not to a strain induced by a physically applied stress, but to the deformation of the dielectric pattern that is directly etched into the silicon. The specific strain pattern is achieved by mapping every point (x, y,z) to (x, y + a (κ x)^2,z), where κ is the strength of the strain. This deformation breaks periodicity in the x direction, but retains periodicity along the y direction. The spatial scale separation ensured by the assumption of small and slowly varying strain, κ a ≪ 1, allows us to develop a multiple scale <cit.> variant of degenerate perturbation theory to expand the eigenstates and eigenvalues of the strained system. The eigenstates are, to leading order in κ, a slow spatial modulation of the degenerate Bloch modes associated with the Dirac point of the unstrained (κ=0) structure.
The resulting effective Hamiltonian, which incorporates the strain, is given by
ℋ_ eff=E_ Dσ_0+v_ D[(-i∂/∂ x)σ_1+(-i∂/∂ y+4ab_*κ^2/v_ D x)σ_2],
where E_ D=(ω_ D/c)^2, σ_0 is the 2×2 identity matrix, σ_1 and σ_2 are Pauli matrices, and v_ D=0.915a^-1 and b_*=0.606a^-2 are two parameters calculated from the modes of the unstrained structure at energy E_ D. A detailed derivation can be found in Supplementary Information Section 3, where explicit expressions for b_* and v_ D in terms of the eigenstates of the periodic structure are displayed. We note that the effective Hamiltonian displayed in Eq. (1) is derived directly from the continuum theory of photonic crystals; this is fundamentally different from the previous work <cit.> based on the tight-binding approximation. Our approach extends the methods of Ref. <cit.> to the three-dimensional setting of the slab geometry, where vectorial effects play a role.
Equation (<ref>) corresponds to a two-dimensional Dirac Hamiltonian describing massless spin-1/2 relativistic particles under a constant (pseudo)magnetic field pointing in the out-of-plane direction, where the magnetic field has a strength of B_ eff = 4ab_*κ^2/v_ D and is described by a vector potential in the Landau gauge. The discrete energies that are eigenvalues of the Hamiltonian in Eq. (<ref>) for an electron are known as Landau levels. The energy eigenvalue of the n^th level is proportional to √(|n|), where n is an integer. Analogously, for our photonic crystal slabs, the frequency eigenvalues of the electromagnetic eigenmodes are, to first order in κ, proportional to √(|n|) and can be expressed as ω_n = ω_ D±(c^2v_ D/√(2)ω_ D)√(B_ eff| n|), where n is an integer.
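For completeness, the √(| n|) scaling quoted above can be traced back to the effective Hamiltonian in two lines (a consistency sketch in our own words, not a substitute for the full derivation in the Supplementary Information): the Dirac-type Hamiltonian has the Landau spectrum E_n=E_ D± v_ D√(2B_ eff| n|), and since E=(ω/c)^2, expanding to first order in the level spacing gives
ω_n=c√(E_n)≈ω_ D(1±v_ D√(2B_ eff| n|)/(2E_ D))=ω_ D±(c^2v_ D/√(2)ω_ D)√(B_ eff| n|),
which reproduces the expression for ω_n above.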
To corroborate our analytical results given in Eq. (<ref>), we also perform numerical simulations of the strained structure using the guided-mode expansion method. The strain is implemented in a dielectric profile which spans 199 period-doubled unit cells in the x-direction. Due to the preservation of lattice periodicity along the y-direction, k_y is conserved and the frequencies of the bands can be plotted as functions of k_y, as shown in Fig. <ref>(d). Here, we observe the splitting of the spectrum near the Dirac point into discrete Landau levels due to the strain-induced pseudomagnetic field, where the spacing of these levels is proportional to √(|n|) for a fixed value of κ.
To demonstrate the formation of Landau levels in such a system, we use electron-beam lithography to fabricate both the periodic and the strained patterns in a silicon slab (ε = 12.11) on top of a silica substrate (ε = 2.25). A detailed description of the fabrication methods can be found in Supplementary Material Section 1. Figures <ref>(a) and (c) show scanning electron microscope (SEM) images of the fabricated structures. The structure in Fig. <ref>(a) has a periodicity along the x direction of 2a=980 nm.
To experimentally characterize the photonic bands of these structures, we perform angle- and frequency-resolved reflection measurements. The samples are illuminated by a tunable continuous wave laser (Keysight 81606A) with a wavelength range of λ = 1.45 - 1.65 μ m (±1.5 pm absolute wavelength resolution accuracy), and a laser linewidth coherence control of 10 kHz.
We measure the iso-frequency contours of the fabricated photonic crystal slabs using back focal plane (BFP) imaging. We then extract the Landau-level band structures by observing the photonic crystal resonances at a fixed k_x corresponding to the location of the Dirac point of the unstrained structure. Details of the experimental setup can be found in Supplementary Material Section 1.
Fig. <ref>(a) shows the bands of the unstrained structure, obtained by BFP imaging, where we clearly observe linearly dispersing bands near the Dirac point. We note that a small gap is observed at the Dirac point - this is due to inevitable fabrication disorder that breaks inversion symmetry.
We show in Supplementary Information Section 4 that the breaking of inversion symmetry affects the zeroth Landau level significantly more than the others.
Next, we measure the bands of the strained photonic crystal slabs described above and find the emergence of discrete Landau levels, as shown in Fig. <ref>(b). While the effective theory predicts that the Landau levels should be flat, we see that they are dispersive in both simulation (Fig. <ref>(d)) and experiment (Fig. <ref>(b)), i.e., the bands are concave-up. This arises due to the fact that, by adding strain, the unit cell is distorted locally as a function of x. This distortion effectively adds a parabolic potential to the Hamiltonian (i.e., H∼ x^2σ_0), which in turn causes the dispersion of the Landau level bands. A detailed explanation can be found in Supplementary Information Section 5.
According to the effective theory (Eq. <ref>), the n=0 level should be at the center of all Landau levels. However, due to the aforementioned inversion-symmetry breaking, this level is slightly shifted away from the center (see Supplementary Information Section 4). As a result, we use a new reference frequency of ω_0^' = 1/2(ω_-1 + ω_1) as the Dirac frequency to calculate the Landau level spacings, defined as ω_n - ω_0^'. In Fig. <ref>(c), we compare the theoretically and experimentally obtained level spacings at k_y=0 [2π a^-1] under different strain strengths (characterized by κ) and observe good agreement between the two. From the experimental data, we also calculate the normalized quantity |ω_n - ω_0^'|/(κ√(|n|)), which should be a constant for all Landau levels. We again observe good agreement between experiment and the theoretically-predicted value of 0.0823 [2π c], as shown in Fig. <ref>(d). In both Figs. <ref>(c) and (d), the theoretical plots (solid lines) are obtained directly from analytical predictions, and have no free parameters.
It is clear from Fig. <ref>(b) that, as n decreases, the range of k_y values over which the n^th Landau level is observed becomes smaller. This can be intuitively understood as arising from the interaction of the Landau level states with other states that reside toward the far left and right sides of the sample. These states rise in energy as one moves away from the sample center along the x direction. We know from Eq. (1) that the Landau level states are harmonic oscillator eigenstates centered at x=k_y/B_ eff, with spatial widths of Δ x_n=√((2|n|+δ_0,n)/2B_ eff). As k_y is increased, the Landau level center translates and the tail of the Landau level eventually interacts with the states mentioned above, leading to an increased linewidth. More details are given in Supplementary Information Section 3.
The fact that the x-position of the Landau level state varies linearly with k_y leads to another clear observable: when the input beam is moved from left to right in real space along the x-direction, the Landau level states at increasing k_y are selectively excited, and therefore appear more clearly in the band structure. We observe this effect directly, as shown in Fig. <ref>(a) through (c): when the input beam is on the left side of the sample (i.e., x<0), we see that the modes on the left side of the band structure (k_y<0) are more strongly excited, but as the input beam is moved rightward, we observe that the modes on the right side of the band structure are increasingly excited.
To further study the relationship between x and k_y, we extract the boundary in k_y-space between the modes that are excited and those that are not excited. For input beams positioned left of center, we extract the right boundary, and for input beams positioned right of center, we extract the left boundary. The boundary values differ from the excitation centers by an overall offset, which we remove by fitting the data to a line and subtracting the intercept (one for the left-boundary data and one for the right-boundary data). Using this procedure, we obtain the relationship between the Landau level horizontal position and the average vertical momentum, k_y, of the excited modes. The linear relationship between these, as shown in Fig. <ref>(d), evidences the direct proportionality between the Landau level positions and k_y.
We next turn our attention to the mitigation of the Landau level dispersion. As explained earlier, Eq. <ref> predicts flat Landau levels. However, in the simulations and experiments, the Landau levels exhibit quadratic dispersion as k_y is varied. As shown in Ref. <cit.>, it is possible to mitigate this dispersion by introducing an additional strain profile, which induces a pseudo-electric potential. Specifically, we add a cubic term to the deformation such that the point (x, y, z) is mapped to (x + aβ(κ x)^3, y + a(κ x)^2,z). The parameter β controls the strength of this additional strain in the x-direction. A schematic of the strained structure, which induces both pseudomagnetic and pseudoelectric fields, is shown in Fig. <ref>(a) (further details are given in Supplementary Information Section 5). The reason why the pseudoelectric field counters the Landau level dispersion, to leading order, can be explained as follows. To leading order, the form of the pseudoelectric field gives rise to a potential V_ eff=3aβ m κ^2 x^2σ_0 (to be added to (<ref>)) which is similar to that which creates the dispersion in the first place (here m=-3.28a^-2 is a parameter calculated entirely from the states of the periodic structure). Since the spatial positions of the Landau level eigenstates grow linearly with k_y, a quadratic potential in x is equivalent to a parabolic dispersion in k_y. An appropriate choice of the field strength (and sign) will then counteract the original dispersion induced by the strain associated with the pseudomagnetic field.
By choosing β appropriately, the quadratic dispersion of the Landau levels can be mitigated, leading to nearly flat bands. We note that each Landau level requires a different value of β to counteract its dispersion. More details are given in Supplementary Information Section 5. Fig. <ref>(b) shows numerical simulations of the flattened Landau levels for a structure with pseudomagnetic and pseudoelectric fields induced by a strain with κ=0.0632a^-1 and β=0.0364. Here, the n=0 level is targeted, but other levels are also evidently flatter. Fig. <ref> (c) shows the experimental data for a strained structure with the same values of κ and β given above, where a good agreement is observed between theory and experiment.
In conclusion, we have directly observed Landau levels in the spectra of two-dimensional silicon photonic crystal slabs. As in graphene, the Landau level energies are proportional to √(|n|), where n is an integer. The Landau level bands are found to be dispersive, which can be explained by a distortion of the unit cell as a result of the strain. We further showed that this dispersion can be mitigated by adding an additional strain that induces a position-dependent pseudoelectric field (i.e., a potential). Landau levels constitute a new methodology for enhancing light-matter interaction which is distinct from standard slow light or cavity enhancement, because a flat band acts essentially as a `cavity everywhere in space'. The realization of optical pseudomagnetism prompts several new questions and directions of inquiry, including: whether Landau-level flat bands can be used to enhance light-matter coupling more efficiently than conventional photonic crystal flat bands or other points of high degeneracy (such as van Hove singularities); the question of the nature of wave mixing processes such as four-wave mixing among Landau levels; and whether the square-root structure of the eigenvalue spacing can lead to different properties associated with entangled pair or frequency comb generation. More broadly, the framework of pseudomagnetism gives an analytical handle on aperiodic photonic structures, allowing for a new approach to designing devices and better understanding their behavior.
§ ACKNOWLEDGEMENTS
We gratefully acknowledge funding support from the Office of Naval Research MURI program under agreement number N00014-20-1-2325, the Air Force Office of Scientific Research MURI program under agreement number FA9550-22-1-0339, as well as the Kaufman and Packard foundations under grant numbers KA2020-114794 and 2017-66821, respectively. This research was also supported in part by National Science Foundation grants DMS-1620422 (MCR), DMS-1620418 (MIW), DMS-1908657 (MIW) and DMS-1937254 (MIW), as well as Simons Foundation Math + X Investigator Award #376319 (MIW). The authors acknowledge the Nanofabrication Lab within the Materials Research Institute at Penn State and the help of Michael Labella, as well as seed funding from the Center for Nanofabricated Optics at Penn State University. F.G. thanks GenISys and, in particular, Roger McCay for his help in optimizing the fracturing of the electron-beam patterns. M.B. thanks Sebabrata Mukherjee and Alexander Cerjan for fruitful discussions in the early stages of the project and help with numerical optimization.
We would like to note that the group of Ewold Verhagen has concurrently posted a similar work on the observation of Landau levels in photonic crystals.
|
http://arxiv.org/abs/2306.01969v1
|
20230603003344
|
Individual Causal Inference Using Panel Data With Multiple Outcomes
|
[
"Wei Tian"
] |
econ.EM
|
[
"econ.EM"
] |
Individual Causal Inference
Using Panel Data With Multiple Outcomes
Wei Tian
UNSW
August 24, 2021
===================================================================
Policy evaluation in empirical microeconomics has been focusing on estimating the average treatment effect and more recently the heterogeneous treatment effects, often relying on the unconfoundedness assumption. We propose a method based on the interactive fixed effects model to estimate treatment effects at the individual level, which allows both the treatment assignment and the potential outcomes to be correlated with the unobserved individual characteristics. This method is suitable for panel datasets where multiple related outcomes are observed for a large number of individuals over a small number of time periods. Monte Carlo simulations show that our method outperforms related methods. To illustrate our method, we provide an example of estimating the effect of health insurance coverage on individual usage of hospital emergency departments using the Oregon Health Insurance Experiment data.
We find heterogeneous treatment effects in the sample. Comparisons between different groups show that the individuals who would have fewer emergency-department visits if covered by health insurance were younger and not in very poor physical condition. However, their access to primary care was limited because they were in much more disadvantaged financial positions, which made them resort to the emergency department as their usual place for medical care.
Health insurance coverage might have decreased emergency-department use among this group by increasing access to primary care and possibly leading to improved health.
In contrast, the individuals who would have more emergency-department visits if covered by health insurance were more likely to be older and in poor health. So even with access to primary care, they still used emergency departments more often for severe conditions, although sometimes for primary care treatable and non-emergent conditions as well.
Health insurance coverage might have increased their emergency-department use by reducing the out-of-pocket cost of the visits.
§ INTRODUCTION
The main focus of the policy evaluation literature has been the average treatment effect and more recently the heterogeneous treatment effects or conditional average treatment effects, which are the average treatment effects for heterogeneous subgroups defined by the observed covariates (for reviews of these methods, see ).
Ubiquitous in these studies is the unconfoundedness assumption, or the strong ignorability assumption, which requires all the covariates correlated with both the potential outcomes and the treatment assignment to be observed <cit.>.[This is also known as selection on observables or the conditional independence assumption.]
Under this assumption, the potential outcomes and the treatment status are independent conditional on the observed covariates, and the difference between the mean outcomes of the treated and the untreated groups with the same values of the observed covariates is an unbiased estimator of the average treatment effect for the units in the groups.
The unconfoundedness assumption is satisfied in randomised controlled experiments, but may not be plausible otherwise even with a rich set of covariates, since the access to certain essential individual characteristics remains limited for the researchers due to privacy or ethical concerns, despite the explosive growth of data availability in the big data era.
One popular method to circumvent the unconfoundedness assumption is difference-in-differences (DID), which assumes that the effect of the unobserved confounder on the untreated potential outcome is constant over time, so that the average outcomes of the treated and untreated units would follow parallel trends in the absence of the treatment.[Alternative methods that do not rely on the unconfoundedness assumption include the instrumental variables method and the regression discontinuity design, which estimate the average treatment effect for specific subpopulations (the compliers or those with values of the running variable near the cutoff).] This is also a strong assumption, and in many cases is not supported by data.
The interactive fixed effects model relaxes the “parallel trends” assumption and allows the unobserved confounders to have time-varying effects on the outcomes, by modeling them using an interactive fixed effects term, which incorporates the additive unit and time fixed effects model or difference-in-differences as a special case <cit.>.
Several methods have been developed based on the interactive fixed effects model to estimate the treatment effect on a single or several treated units, where the units are observed over an extended period of time before the treatment <cit.>. These methods exploit the cross-sectional correlations attributed to the unobserved common factors to predict the counterfactual outcomes for the treated units, and are mainly used in macroeconomic settings with a large number of pretreatment periods, which is crucial for the results to be credible. For example, <cit.> point out that “the applicability of the method requires a sizable number of preintervention periods” and that “we do not recommend using this method when the pretreatment fit is poor or the number of pretreatment periods is small”, while <cit.> states that users should be cautious when there are fewer than 10 pretreatment periods.
As a consequence, despite the potential to estimate individual treatment effects without imposing the unconfoundedness assumption, these methods have not seen much use in empirical microeconomics, since the individuals are rarely tracked for more than a few periods that justify the use of these methods.
The main contribution of this paper is that we propose a method for estimating the individual treatment effects in applied microeconomic settings, characterised by multiple related outcomes being observed for a large number of individuals over a small number of time periods.
The method is based on the interactive fixed effects model, which assumes that an outcome of interest can be well approximated by a linear combination of a small number of observed and unobserved individual characteristics.
Analogous to <cit.> who predict the posttreatment outcomes using pretreatment outcomes in lieu of the unobserved time factors, we use a subsample of the pretreatment outcomes to replace the unobserved individual characteristics in the models, and use the remaining pretreatment outcomes as instrumental variables.
Although our method does not require a large number of pretreatment periods, the number of pretreatment outcomes needs to be at least as large as the number of unobserved individual characteristics, which may still be difficult to satisfy in microeconomic datasets if we use only a single outcome, especially if the treatment assignment took place in the early stages of the survey or if the study subjects are children or youths.
Utilising multiple related outcomes allows our method to be applicable in cases where there is only a single period before the treatment.
Under the assumption that these outcomes depend on roughly the same set of observed covariates and unobserved individual characteristics with time-varying and outcome-specific coefficients shared by all individuals, our method exploits the correlations across related outcomes and over time, which are induced by the unobserved individual characteristics, to predict the counterfactual outcomes and estimate the treatment effects for each individual in the posttreatment periods.
Our method has several advantages.
First, with the assumption on the model specification, it relaxes the arguably much stronger unconfoundedness assumption, and allows the treatment assignment to be correlated with the unobserved individual characteristics.
Second, it enables the estimation of treatment effects on the individual level, which may be helpful for designing more individualised policies to maximize social welfare, as well as for other fields such as precision medicine and individualised marketing. It also has the potential to be combined with more flexible machine learning methods to work with big datasets and more general nonlinear function forms.
Third, it is intuitive. In real life, we may never know a person through and through, and a viable approach to predicting the outcome of a person is using his or her related outcomes in the past, assuming that the outcomes are affected by the underlying individual characteristics and that these characteristics are stable over time, at least within the study period. For example, past academic performance is an important consideration when recruiting a student into college, as it is believed that a student that excelled in the past is likely to continue to have outstanding performance. To the extent that we may never observe all the confounders, this is perhaps the only way to predict potential outcomes and estimate treatment effects on the individual level in social sciences without going deeper to the levels of neuroscience or biology.
Fourth, our method has wide applicability, as it is common to have multiple related outcomes collected in microeconomics data. For example, we may observe several health related outcomes such as health facility usage, health related cost, general health, etc.
The rest of the study is organised as follows.
Section <ref> presents the theoretical framework.
Section <ref> examines the small sample performance of our method using Monte Carlo simulation, and compares it with related methods.
Section <ref> provides an empirical example of estimating the effect of health insurance coverage on individual usage of hospital emergency departments using the Oregon Health Insurance Experiment data.
Section <ref> concludes and discusses potential directions for future research. The proofs are collected in the appendix.
§ THEORY
§.§ Set Up
Suppose that we observe K outcomes in domain 𝒦={1,2,…,K} for N individuals or units over T≥ 2 time periods, where a domain refers to a collection of related outcomes that depend on the same set of observed covariates and unobserved characteristics. For example, health-related outcomes may be affected by observed covariates such as age, education, occupation and income, as well as unobserved individual characteristics such as genetic inheritance, health habits and risk preferences.
Assume that the N_1 individuals in the treated group 𝒯 receive the treatment at period T_0+1≤ T and remain treated afterwards, while the N_0=N-N_1 individuals in the control group 𝒞 remain untreated throughout the T periods.
Denoting the binary treatment status for individual i at time t as D_it, we have D_it=1 for i∈𝒯 and t>T_0, and D_it=0 otherwise.
Following the “Rubin Causal Model” <cit.>, the treatment effect on outcome k∈𝒦 for individual i at time t is given by the difference between the treated and untreated potential outcomes
τ_it,k=Y_it,k^1-Y_it,k^0,
where Y_it,k^1 is the treated potential outcome, the outcome that we would observe for individual i at time t if D_it=1, and Y_it,k^0 is the untreated potential outcome, the outcome that we would observe if D_it=0.
Instead of assuming the unconfoundedness condition, we characterise the two potential outcomes for individual i at time t and k∈𝒦 using the interactive fixed effects models:
Y_it,k^1= X_it'β_t,k^1+μ_i'λ_t,k^1+ε_it,k^1,
Y_it,k^0= X_it'β_t,k^0+μ_i'λ_t,k^0+ε_it,k^0,
where X_it is the r× 1 vector of observed covariates unaffected by the treatment, μ_i is the f× 1 vector of unobserved individual characteristics, β_t,k^1 and λ_t,k^1 are the r× 1 and f× 1 vectors of coefficients of X_it and μ_i respectively for the treated potential outcome, β_t,k^0 and λ_t,k^0 are the coefficients for the untreated potential outcome, and ε_it,k^1 and ε_it,k^0 are the idiosyncratic shocks.
Our models for the potential outcomes are quite general, and incorporate the models in <cit.>, <cit.> and <cit.>, as well as the the additive fixed effects model for difference-in-differences as special cases.[Specifically, if we assume β_t,k^0=β_k^0, model (<ref>) reduces to the model in <cit.> and <cit.>; if we assume X_it=X_i and the first element of μ_i is 1, model (<ref>) reduces to the model in <cit.>; if we assume X_it=X_i are unobserved and the first element of λ_t,k^0 is 1, then model (<ref>) reduces to the model in <cit.>; if we assume μ_i=[1 a_i]' and λ_t,k^0=[b_t 1]', then model (<ref>) reduces to the additive fixed effects model for difference-in-differences.]
Note that the related outcomes need not depend on exactly the same set of observed covariates and unobserved individual characteristics. The vectors of outcome-specific and time-varying coefficients may contain 0, so that outcome k may be affected by some of the observed covariates and unobserved individual characteristics in some periods, but not necessarily by all of them in all periods, as long as there is enough variation in the coefficients over time or across the outcomes. The potential outcomes are also allowed to depend on predictors not included in X_it or μ_i, as long as they are not correlated with the included predictors and the treatment status so that they can be treated as part of the idiosyncratic shock.
The regularity conditions on the observed covariates and the unobserved individual characteristics are stated in Assumption <ref>, and the assumptions on the idiosyncratic shocks are given in Assumption <ref>.
[]
* X_it, μ_i are independent for all i, and are identically distributed for all i∈𝒯 and all i∈𝒞 respectively;
* There exists M∈[0,∞) such that 𝔼‖X_it‖^4<M and 𝔼‖μ_i‖^4<M.
[]
For d∈{0,1}, we have
* 𝔼(ε_it,k^d|X_js,μ_j,D_js)=0 for all i, j, t, s and k;
* ε_it,k^d are independent across i and t;
* 𝔼(ε_it,k^d,ε_it,l^d)=σ_t,kl^d for all i, t, k, l;
* There exists M∈[0,∞) such that 𝔼|ε_it,k^d|^4<M for all i, t, k.
The distributions of the observed covariates and the unobserved individual characteristics are allowed to differ for the treated and untreated individuals, i.e., selection on unobservables is allowed, which is a great advantage over the policy evaluation methods that rely on the unconfoundedness condition.
The idiosyncratic shocks are assumed to have zero mean conditional on the observed covariates, unobserved individual characteristics and the treatment status. They are also assumed to be independent across individuals and time periods, as the unobserved interactive fixed effects that account for the cross-sectional and time-serial correlations have been separated out.[The idiosyncratic shocks may be allowed to be correlated both over time and across outcomes, as long as they can be modelled parametrically and removed using a quasi-differencing approach. This is left for future research.] Furthermore, they are assumed to be homoskedastic across individuals for our inference method to be valid. The last part in Assumption <ref> is a regularity condition which, together with the conditions in Assumption <ref>, ensures the weak law of large numbers and the central limit theorem hold.
Given the models for Y_it,k^1 and Y_it,k^0 in (<ref>) and (<ref>), the individual treatment effect is identified by the observed covariates and the unobserved individual characteristics, i.e., two persons with the same values for these underlying predictors have identical individual treatment effect. Denote the set of observed covariates and unobserved individual characteristics as H_it=[X_it'μ_i']',
then the individual treatment effect for individual i with H_it=h_it is given by
τ̅_it,k≡𝔼(Y_it,k^1-Y_it,k^0|H_it=h_it),
which may appear similar to the conditional average treatment effect, but is different by conditioning not only on the observed covariates, but also on the unobserved individual characteristics.[As we assume the parametric models for the potential outcomes in (<ref>) and (<ref>) for all individuals, there is no need to impose additional assumptions on the propensity distribution for the individual treatment effect to be identified on its full support.]
Our goal is to estimate the individual treatment effects τ̅_it,k, i=1,…,N. Once we have estimated the individual treatment effects, the estimates of the average treatment effects for heterogeneous subgroups defined by some observed covariates, also known as the conditional average treatment effects, and the estimate for the average treatment effect for the sample or the population are also readily available using the average of the estimated individual treatment effects in the corresponding groups.
As μ_i is not observed, a direct application of least squares estimation to estimate the models in (<ref>) and (<ref>) would suffer from omitted variables bias. Since we have multiple outcomes that depend on the same set of underlying predictors, and we observe the untreated potential outcomes for all individuals prior to the treatment, we can use these pretreatment outcomes to replace μ_i in the models.[This is analogous to the first step of the approach in <cit.>, who predict the posttreatment outcomes using pretreatment outcomes in lieu of the unobserved time factors in a small N, big T environment.]
Stacking the K outcomes observed in t≤ T_0, we have
Y_it^0=β_t^0X_it+λ_t^0μ_i+ε_it^0,
where Y_it^0 and ε_it^0 are K× 1, β_t^0 is K× r, and λ_t^0 is K× f.
Let 𝒫⊆{1,⋯,T_0} be a set of P pretreatment periods. We can further stack the outcomes over these periods to get
Y_i^𝒫=δ_i^𝒫+λ^𝒫μ_i+ε_i^𝒫,
where δ_i^𝒫=[⋯(β_s^0X_is)' ⋯]' with s∈𝒫 is KP× 1, λ^𝒫 is KP× f, and ε_i^𝒫 is KP× 1.
To be able to recover μ_i from the covariates and outcomes observed in 𝒫, we need the following full rank condition, which ensures that there is enough variation in the effects of the unobserved individual characteristics over time or across different outcomes.
[]
λ^𝒫'λ^𝒫 has rank f.
Although we do not require the number of pretreatment outcomes to be large, Assumption <ref> implies that KP needs to be at least as large as f.
As T_0 (and thus P) is usually small in empirical microeconomics, this assumption is made more plausible by having K>1, i.e., using multiple related outcomes.
The number of factors f is generally not observed. To determine f, one may use the method in <cit.> when both N and T are large. One may also adopt a cross-validation procedure to choose f that minimises the out-of-sample mean squared prediction error, as in <cit.>. Although we do not estimate the interactive fixed effects term directly, we may choose the number of pretreatment outcomes that best accommodates f using cross-validation as well, which will be discussed in more details later.
Under Assumption <ref>, we can pre-multiply both sides of equation (<ref>) by (λ^𝒫'λ^𝒫)^-1λ^𝒫' to obtain
μ_i=(λ^𝒫'λ^𝒫)^-1λ^𝒫'(Y_i^𝒫-δ_i^𝒫-ε_i^𝒫).
Substituting (<ref>) into Y_it,k^0=X_it'β_t,k^0+μ_i'λ_t,k^0+ε_it,k^0, t>T_0, and with a little abuse on the notation by omitting the superscript 𝒫 on the new coefficients and error term, we have
Y_it,k^0 =X_it'β_t,k^0-∑_s∈𝒫X_is'α_st,k^0+Y_i^𝒫'γ_t,k^0+e_it,k^0,
where
α_st,k^0 =β_s^0'λ^0_s(λ^𝒫'λ^𝒫)^-1λ^0_t,k, s∈𝒫,
γ_t,k^0 =λ^𝒫(λ^𝒫'λ^𝒫)^-1λ^0_t,k,
e_it,k^0 =ε_it,k^0-γ_t,k^0'ε_i^𝒫.
Let Z=r(P+1)+KP. If we denote the Z× 1 vector of observables [X_it' ⋯X_is' ⋯Y_i^𝒫']' as Z_it, and the Z× 1 vector of coefficients [β_t,k^0' ⋯α_st,k^0' ⋯γ_t,k^0']' as θ_t,k^0, then equation (<ref>) can be abbreviated as
Y_it,k^0=Z_it'θ_t,k^0+e_it,k^0.
Similarly, substituting (<ref>) into Y_it,k^1=X_it'β_t,k^1+μ_i'λ_t,k^1+ε_it,k^1, t>T_0, we have
Y_it,k^1=Z_it'θ_t,k^1+e_it,k^1,
where
θ_t,k^1 =[β_t,k^1' ⋯α_st,k^1' ⋯γ_t,k^1']',
α_st,k^1 =β_s^0'λ^0_s(λ^𝒫'λ^𝒫)^-1λ_t,k^1, s∈𝒫,
γ_t,k^1 =λ^𝒫(λ^𝒫'λ^𝒫)^-1λ_t,k^1,
e_it,k^1 =ε_it,k^1-γ_t,k^1'ε_i^𝒫.
§.§ Estimation
Under Assumption <ref>, we have 𝔼(e_it,k^1|H_it)=0 and 𝔼(e_it,k^0|H_it)=0. This suggests using τ_it,k=Z_it'(θ_t,k^1-θ_t,k^0), where θ_t,k^1 and θ_t,k^0 are some estimators of θ_t,k^1 and θ_t,k^0, to estimate τ̅_it,k.
Note, however, that the error terms e_it,k^1 and e_it,k^0 are correlated with the regressors, since Z_it contains Y_i^𝒫 which is correlated with ε_i^𝒫.
This renders the OLS estimators biased and inconsistent, which can be seen as a classical measurement errors in variables problem.[This is noted in <cit.> as well, who also suggested using pre-treatment outcomes as instrumental variables to deal with the problem. Our method is also related to the quasi-differencing approach in <cit.> and the GMM approach in <cit.>. While these studies focus on estimating the coefficients on the observed covariates, our focus is on estimating the individual treatment effects.] We thus use the remaining outcomes as instrumental variables for Y_i^𝒫 to consistently estimate θ_t,k^1 and θ_t,k^0 in each period, which would then allow us to obtain asymptotically unbiased estimates for the individual treatment effects.[We may construct the vectors of regressors and instruments differently under alternative assumptions on the dependence structure of the idiosyncratic shocks. For example, if the idiosyncratic shocks are correlated across time but are independent across outcomes, then we can split different outcomes into regressors and instruments. This would be similar to using the characteristics of similar products <cit.> or trading countries (see the Trade-weighted World Income instrument in ) as instrumental variables. Incorporating more complex structures of the idiosyncratic shocks in the model is left for future research.]
Since the outcomes depend on about the same set of observed and unobserved individual characteristics, the remaining outcomes are strongly correlated with the outcomes included in Y_i^𝒫. Additionally, given that the idiosyncratic shocks are independent across time, the remaining outcomes are not correlated with e_it,k^1 or e_it,k^0. Thus, both the relevance and exogeneity conditions are satisfied, and the remaining outcomes can serve as valid instrumental variables.
Let R_it=[X_it' ⋯X_is' ⋯Y_i^-𝒫']' be the R×1 vector of instruments, where the (KT-KP-1)×1 vector Y_i^-𝒫 comprises the remaining pretreatment outcomes as well as the posttreatment outcomes other than Y_it,k.[In the special case of T_1=1 and T_0=1, we can include K-1 pretreatment outcomes as regressors, and use the posttreatment outcomes other than Y_it,k as instruments so that R≥ Z.]
Stacking Z_it, R_it and Y_it,k^0 respectively over the N_0 untreated individuals, we obtain the N_0× Z matrix of regressors Z_t^0, the N_0× R matrix of instruments R_t^0 and the N_0× 1 matrix of outcomes Y_t,k^0 for the untreated individuals. We can obtain Z_t^1, R_t^1 and Y_t,k^1 similarly for the N_1 treated individuals.
The GMM estimator for the individual treatment effect τ̅_it,k can then be constructed as
τ_it,k =Z_it'(θ_t,k^1-θ_t,k^0),
where
θ_t,k^1 =(Z_t^1'R_t^1W^1R_t^1'Z_t^1)^-1Z_t^1'R_t^1W^1R_t^1'Y_t,k^1,
θ_t,k^0 =(Z_t^0'R_t^0W^0R_t^0'Z_t^0)^-1Z_t^0'R_t^0W^0R_t^0'Y_t,k^0,
with W^1 and W^0 being some R× R positive definite matrices.
Using the residuals e_t,k^1=Y_t,k^1-Z_t^1θ_t,k^1 and e_t,k^0=Y_t,k^0-Z_t^0θ_t,k^0, we can further construct the two-step efficient GMM estimator by replacing W^1 and W^0 in equations (<ref>) and (<ref>) with N_1(R_t^1'U_t^1R_t^1)^-1 and N_0(R_t^0'U_t^0R_t^0)^-1, where U_t^1 and U_t^0 are diagonal matrices with the squared elements of e_t,k^1 and e_t,k^0 on the diagonals.
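As a minimal numerical illustration of the estimators θ_t,k^1 and θ_t,k^0 above and their two-step efficient versions, the following NumPy sketch (our own function names; the first-step weighting matrix is a conventional 2SLS-type choice, not one prescribed in the text) computes the estimator for one outcome, period, and treatment arm:

```python
import numpy as np

def gmm_two_step(Y: np.ndarray, Z: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Two-step efficient GMM for Y = Z @ theta + e with instruments R.

    Y: (n,) outcomes; Z: (n, dim_Z) regressors including the pretreatment
    outcomes; R: (n, dim_R) instruments, with dim_R >= dim_Z.
    """
    n = len(Y)

    def gmm(W: np.ndarray) -> np.ndarray:
        A = Z.T @ R @ W @ R.T                  # Z'R W R'
        return np.linalg.solve(A @ Z, A @ Y)   # (Z'R W R'Z)^{-1} Z'R W R'Y

    # First step: 2SLS-type weighting W = (R'R/n)^{-1}; any positive
    # definite matrix is admissible.
    theta_first = gmm(np.linalg.inv(R.T @ R / n))
    e = Y - Z @ theta_first
    # Second step: W proportional to (R' diag(e^2) R)^{-1}, built from
    # the first-step residuals (the scale of W does not affect theta).
    S = (R * (e ** 2)[:, None]).T @ R / n
    return gmm(np.linalg.inv(S))

# Individual treatment effect estimate for unit i at time t, outcome k:
# tau_hat = Z_it @ (gmm_two_step(Y1, Z1, R1) - gmm_two_step(Y0, Z0, R0))
```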
One may also construct the estimators for the individual treatment effects using authentic predicted outcomes obtained from a leave-one-out procedure, where θ_t,k^1 and θ_t,k^0 are estimated for each individual using the sample that excludes that individual. This procedure may be computationally expensive, though, as there are no simple closed-form expressions for the leave-one-out coefficient estimates and residuals, as there are in linear regression <cit.>.
The following result shows that the bias of the GMM estimator for the individual treatment effect in (<ref>) goes away as both the number of treated individuals and the number of untreated individuals become larger.
Under Assumptions <ref>-<ref>,
𝔼(τ_it,k-τ_it,k|H_it=h_it)→0 as N_1,N_0→∞.
Once we have the estimates for the individual treatment effects, the average treatment effect τ_t,k=𝔼(τ_it,k) can be conveniently estimated using the average of the estimated individual treatment effects τ_t,k=1/N∑_i=1^Nτ_it,k, which can be shown to be consistent.
Under Assumptions <ref>-<ref>,
τ_t,k-τ_t,kp→0 as N_0,N_1→∞, and τ_t,k-τ_t,k=O_p(N_1^-1/2)+O_p(N_0^-1/2).
§.§ Model Selection
To satisfy Assumption <ref>, we need the number of pretreatment outcomes that we include as regressors in the model to be at least as large as f.
Including more pretreatment outcomes may increase the variance of the estimator by increasing the variances of θ_t,k^1 and θ_t,k^0, but may also reduce the variance of the estimator when the sample is large and the variances of θ_t,k^1 and θ_t,k^0 are small, since
(γ_t,k^1-γ_t,k^0)'ε_i^𝒫=1/KP∑_q∈𝒦∑_s∈𝒫(λ^1_t,k-λ^0_t,k)'(1/KP∑_l∈𝒦∑_n∈𝒫λ_n,l^0λ_n,l^0')^-1λ_s,q^0ε_is,q^0
in the prediction error converges in probability to 0 as KP grows.[Consistency of the individual treatment effect estimator may also be shown by allowing both N and KP to grow, with restrictions on the relative growth rate, e.g., KP/min(√(N_1),√(N_0))→0. We do not pursue this path in this study, as the number of pretreatment outcomes in empirical microeconomics that we focus on is usually not large.]
To select the number of pretreatment outcomes to include in the model, we follow a model selection procedure similar to that in <cit.>, where for each usable number of pretreatment outcomes, we construct many different models by including a random subset of the pretreatment outcomes as regressors and the remaining outcomes as instruments. We then estimate the models using GMM and obtain the leave-one-out prediction errors for all or a subsample of the individuals.
The best set of pretreatment outcomes is chosen as the one that minimises the mean squared leave-one-out prediction error.[An alternative way to select the best set of pretreatment outcomes is to use information criteria such as GMM-BIC and GMM-AIC <cit.>. To avoid the potential problem of post-selection inference, we may also randomly split the sample into two parts, where we select the best model on one part, and conduct inference on the other.]
In addition to the models using only a subset of the pretreatment outcomes, we also consider averaging different models that use the same number of pretreatment outcomes. Since the estimators constructed using only a subset of the pretreatment outcomes are asymptotically unbiased, as long as the number of pretreatment outcomes is larger than f, this property is passed on to the averaged estimator. The averaged estimator may also be more efficient as it uses more information in the sample and reduces uncertainty caused by a small number of sample splits.[We stick with simple averaging in this paper. More flexible averaging scheme, e.g., with larger weights on those with smaller out of sample prediction errors, would be an interesting direction for future research.]
The leave-one-out prediction errors are also averaged over the models, and the best number of pretreatment outcomes to be used for the averaged estimator is similarly determined by minimising the mean squared leave-one-out prediction error.
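Schematically, the selection step could be organized as below (a sketch only; `loo_mse` stands for a user-supplied routine that fits the GMM model with the given subset of pretreatment outcomes as regressors, uses the remaining outcomes as instruments, and returns the mean squared leave-one-out prediction error):

```python
import numpy as np

def select_pretreatment_subset(pre_outcome_ids, candidate_sizes, loo_mse,
                               n_draws=50, seed=0):
    """Choose the subset of pretreatment outcomes with the smallest
    mean squared leave-one-out prediction error.

    pre_outcome_ids: labels (or column indices) of the available
                     pretreatment outcomes.
    candidate_sizes: numbers of pretreatment outcomes to try as regressors.
    """
    rng = np.random.default_rng(seed)
    best_subset, best_mse = None, np.inf
    for size in candidate_sizes:
        for _ in range(n_draws):
            subset = tuple(sorted(rng.choice(pre_outcome_ids, size=size, replace=False)))
            mse = loo_mse(subset)
            if mse < best_mse:
                best_subset, best_mse = subset, mse
    return best_subset, best_mse
```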
§.§ Related methods
§.§.§ Linear conditional mean
An alternative approach to estimating the treatment effects is to follow <cit.> and assume that
𝔼(ε_i^𝒫|Z_it)=C'Z_it,
where C=𝔼(Z_itZ_it')^-1𝔼(Z_itε_i^𝒫') is Z× KT_0.[This assumption holds in special cases, e.g., when the unobserved predictors and the idiosyncratic shocks all follow the normal distribution <cit.>. In more general cases, this assumption may be considered to hold approximately.]
We can then separate the error term into a part correlated with the regressors and a part that has zero conditional mean, and rewrite the untreated potential outcome Y_it,k^0 as
Y_it,k^0 =𝔼(Y_it,k^0|Z_it)+u_it,k^0
=(θ_t,k^0'-γ_t,k^0'C')Z_it+u_it,k^0
=Z_it'θ_t,k^*0+u_it,k^0,
where u_it,k^0=ε_it,k^0-γ_t,k^0'ε_i^𝒫+γ_t,k^0'C'Z_it.
Similarly, the treated potential outcome Y_it,k^1 can be rewritten as
Y_it,k^1 =Z_it'θ_t,k^*1+u_it,k^1,
where θ_t,k^*1=θ_t,k^1-Cγ_t,k^1, and u_it,k^1=ε_it,k^1-γ_t,k^1'ε_i^𝒫+γ_t,k^1'C'Z_it.
Since 𝔼(u_it,k^1|Z_it)=𝔼[e_it,k^1-𝔼(e_it,k^1|Z_it)|Z_it]=0 and 𝔼(u_it,k^0|Z_it)=0, it is straightforward to show that the least squares estimators θ̂_t,k^*1=(Z_t^1'Z_t^1)^-1Z_t^1'Y_t,k^1 and θ̂_t,k^*0=(Z_t^0'Z_t^0)^-1Z_t^0'Y_t,k^0 are unbiased estimators of θ_t,k^*1 and θ_t,k^*0, respectively.[The linear conditional mean assumption also implies that the unconfoundedness assumption is satisfied, as 𝔼(Y_it,k^0|Z_it,D_it=1)=𝔼(Y_it,k^0|Z_it,D_it=0) and 𝔼(Y_it,k^1|Z_it,D_it=1)=𝔼(Y_it,k^1|Z_it,D_it=0).]
We can then construct an estimator as
τ̃_it,k =Z_it'(θ̂_t,k^*1-θ̂_t,k^*0),
which is an unbiased estimator for the average treatment effect for individuals with the same values of Z_it, or the conditional average treatment effect.
It follows that the average of the conditional average treatment effects estimators τ̃_t,k=1/N∑_i=1^Nτ̃_it,k is an unbiased estimator for the average treatment effect τ_t,k.
In addition, it can also be shown that τ̃_t,k is a consistent estimator without imposing the linear conditional mean assumption <cit.>.
Under Assumptions <ref>-<ref>,
* if 𝔼(ε_i^𝒫|Z_it)=C'Z_it, then 𝔼(τ̃_it,k-τ_it,k|Z_it=z_it)=0 and 𝔼(τ̃_t,k-τ_t,k)=0;
* τ̃_t,k-τ_t,k=O_p(N_1^-1/2)+O_p(N_0^-1/2).
Note that 𝔼(τ_it,k|Z_it=z_it) is the average treatment effect for individuals with Z_it=z_it, or the conditional average treatment effect, whereas the individual treatment effect is 𝔼(τ_it,k|H_it=h_it) as given in (<ref>). The two are generally not the same since C'Z_it≠ 0.
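Under the linear conditional mean assumption, the alternative estimator has a particularly simple form, sketched below (an illustrative Python fragment, not the authors' code): run OLS of the observed posttreatment outcome on Z_it separately for the treated and control samples, and form Z_it'(θ̂_t,k^*1-θ̂_t,k^*0) for every individual.

```python
import numpy as np

def ols(Z, y):
    # least squares coefficients of y on Z
    return np.linalg.lstsq(Z, y, rcond=None)[0]

def cate_ols(Z, y, treated):
    """Z: (n, q) regressors (intercept, covariates, pretreatment outcomes);
    y: (n,) observed posttreatment outcome; treated: (n,) boolean mask."""
    theta1 = ols(Z[treated], y[treated])      # fitted on treated units only
    theta0 = ols(Z[~treated], y[~treated])    # fitted on control units only
    tau_tilde = Z @ (theta1 - theta0)         # conditional ATE estimate per individual
    return tau_tilde, tau_tilde.mean()        # the mean is the ATE estimate
```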
§.§.§ Interactive fixed effects model
Instead of replacing the unobserved confounders with the observed pretreatment outcomes, <cit.> models the unobserved fixed effects directly by iterating between estimating the coefficients on the observed covariates and estimating the unobserved factors and factor loadings using the principal component analysis, given some initial values.
This approach allows more general structures in the error terms, but requires both N and T to be large, and is also more restrictive on the model specification: the observed covariates need to be time-varying, while the coefficients are assumed constant over time.
<cit.> adapts this method to the potential outcomes framework to estimate the average treatment effects on the treated, assuming that the untreated potential outcomes for both the treated and untreated units follow the interactive fixed effects model, and proposes a cross-validation procedure to choose the number of unobserved factors and a parametric bootstrap procedure for inference.
This approach has the desired feature of being less computationally expensive compared with repeated pretreatment set splitting and averaging, and is potentially more efficient compared with using only the best set of pretreatment outcomes and discarding the remaining information when all outcomes are related. However, its potential to be adapted to our settings is limited by the restrictions discussed above. In particular, we may assume the coefficients to be constant over time, but it would be unrealistic to assume that they are the same across different outcomes, if we were to use multiple related outcomes.
Another closely related study is <cit.>, which generalises the results from the matrix completion literature in computer science to impute the missing elements of the untreated potential outcome matrix for the treated units in the posttreatment periods, where the matrix is assumed to have a low rank structure, similar to that of the interactive fixed effects model. The bias of the estimator is shown to have an upper bound that goes to 0 as both N and T grow. This method allows staggered adoption of the treatment, i.e., the treated units receive the treatment at different time periods.
§.§.§ Synthetic control method
<cit.> estimate the treatment effect on a treated unit by predicting its untreated potential outcome using a synthetic control constructed as a weighted average of the control units.
The synthetic control method applies to cases where the pretreatment characteristics of the treated unit can be closely approximated by the synthetic control constructed using a small number of control units over an extended period of time before the treatment, which may not generally hold.
In terms of implementation, the objective function for the synthetic control method is similar to that of the linear regression approach in <cit.>. However, the weights on the control units in the synthetic control method are restricted to be nonnegative to avoid extrapolation. This reduces the risk of overfitting, but may also limit its applicability by making it difficult to find a set of weights that satisfy the restrictions.
§.§ Inference
To measure the conditional variance of the individual treatment effect estimator, Var(τ_it,k|H,D), where H is the matrix of observed covariates and unobserved individual characteristics and D is the matrix of the treatment status for all individuals and all time periods in the sample, we follow <cit.> and employ a parametric bootstrap procedure.
First, we apply our method to all outcomes in all periods to obtain Y_it,k^1 and e_it,k^1 for the treated individuals in the posttreatment periods, and Y_it,k^0 and e_it,k^0 for the untreated individuals in the posttreatment periods and for all individuals in the pretreatment periods.
Note that the residuals e_it,k^1 and e_it,k^0 are estimates for ε_it,k^1-γ_t,k^1'ε_i^𝒫 and ε_it,k^0-γ_t,k^0'ε_i^𝒫, respectively, rather than the idiosyncratic shocks in the original model, ε_it,k^1 and ε_it,k^0. Thus, the variance of the individual treatment effect estimator tends to be overestimated using the parametric bootstrap by resampling these residuals, especially when the number of pretreatment outcomes is small.[See the discussion on equation (<ref>).] Correcting for this bias would be a necessary step for future research.
These fitted values of the outcomes can be stacked into a TK× 1 vector Y_i for each individual, where Y_i for i∈𝒯 contains Y_it,k^1 in the posttreatment periods and Y_it,k^0 in the pretreatment periods, and Y_i for i∈𝒞 contains Y_it,k^0 in all periods.
The TK× 1 vector of residuals e_i can be obtained similarly.
We then start bootstrapping for B rounds:
* In round b∈{1,…,B}, generate a bootstrapped sample as
Y_i^(b) = Y_i + e_i^(b), for all i,
where e_i^(b) is randomly drawn from {e_i}_i∈𝒯 for i∈𝒯, and from {e_i}_i∈𝒞 for i∈𝒞.[Since the entire series of residuals over the T periods and K outcomes are resampled, correlation and heteroskedasticity across time and outcomes are preserved <cit.>.]
* Construct τ_it,k^(b) for each i using the above bootstrapped sample.
The variance for the individual treatment effect estimator is computed using the bootstrap estimates as
Var(τ_it,k|H,D)=1/B∑_b=1^B(τ_it,k^(b)-1/B∑_a=1^Bτ_it,k^(a))^2, i=1,…,N,
and the 100(1-α)% confidence intervals for τ̅_it,k, i=1,…,N can be constructed as
[ τ_it,k^[α/2B] , τ_it,k^[(1-α/2)B]],
where the superscript denotes the index of the bootstrap estimates in ascending order.
Alternatively, we can use a normal approximation and construct the confidence intervals as
[ τ_it,k+Φ^-1(α/2)σ_it,k , τ_it,k+Φ^-1(1-α/2)σ_it,k],
where Φ(·) is the cumulative distribution function for the standard normal distribution, and σ_it,k=√(Var(τ_it,k|H,D)).
The variance for the average treatment effect estimator τ_t,k=1/N∑_i=1^Nτ_it,k and the confidence interval for the average treatment effect τ_t,k can be obtained in similar manners using the bootstrap estimates τ_t,k^(b)=1/N∑_i=1^Nτ_it,k^(b), b=1,…,B.
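The bootstrap procedure above can be summarised in a few lines of code. The sketch below is ours and assumes a hypothetical routine estimate_ite that re-runs the treatment-effect estimation on a bootstrapped outcome array and returns the N individual estimates; Y_hat and e_hat stack the fitted values and residuals described in the text.

```python
import numpy as np

def bootstrap_draws(Y_hat, e_hat, treated, estimate_ite, B=500, seed=0):
    """Y_hat, e_hat: (n, T*K) fitted outcome and residual vectors per individual;
    treated: (n,) boolean. Returns a (B, n) array of bootstrap ITE estimates."""
    rng = np.random.default_rng(seed)
    n = Y_hat.shape[0]
    idx_t, idx_c = np.where(treated)[0], np.where(~treated)[0]
    draws = np.empty((B, n))
    for b in range(B):
        res = np.empty_like(e_hat)
        # resample entire residual vectors within each treatment group, so that
        # correlation and heteroskedasticity across periods/outcomes are preserved
        res[idx_t] = e_hat[rng.choice(idx_t, size=idx_t.size)]
        res[idx_c] = e_hat[rng.choice(idx_c, size=idx_c.size)]
        draws[b] = estimate_ite(Y_hat + res)
    return draws

# Standard errors and percentile intervals follow directly, e.g.
# ite_se = draws.std(axis=0, ddof=1); ate_se = draws.mean(axis=1).std(ddof=1);
# ci = np.quantile(draws, [0.025, 0.975], axis=0).
```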
§ MONTE CARLO SIMULATIONS
In this section, we conduct Monte Carlo simulations to assess the performance of our estimator in small samples, and compare it with related methods in relevant settings.
The number of posttreatment periods T_1 is fixed at 1, and the number of related outcomes K is fixed at 5 in all settings.
The untreated potential outcomes are generated from
Y_it,k^0=X_it'β_t,k^0+μ_i'λ_t,k^0+ε_it,k^0, k∈𝒦,
where X_it contains 2 observed covariates, and μ_i contains 2 unobserved individual characteristics as well as the constant 1.
The 2 observed covariates are i.i.d. N(0,1) in period 1, and then follow an AR(1) process, X_it=0.9X_i,t-1+ξ_it, where ξ_it are i.i.d. N(0,√(1-0.9^2)), so that the observed covariates are correlated across time and the variances stay 1.
The 2 unobserved individual characteristics are also i.i.d. N(0,1).
The coefficients β_t,k^0 and λ_t,k^0 are i.i.d. N(ω_k,1) with ω_k∼ N(1,1), for k∈𝒦, so that the means of the coefficients differ across outcomes, and the idiosyncratic shocks ε_it,k^0 are i.i.d. N(0,1).
The individual treatment effect in the posttreatment period τ̅_iT_0+1,k is a deterministic function of X_it and μ_i with the coefficients being i.i.d. N(0.5,0.5), for k∈𝒦.
And the observed outcomes Y_it,k, k∈𝒦 are equal to Y_it,k^0-ε_it,k^0+τ̅_iT_0+1,k+ε_it,k^1, where ε_it,k^1 are i.i.d. N(0,1), for the treated individuals in the posttreatment period, and Y_it,k^0 otherwise.
X_it and μ_i as well as their coefficients for the untreated potential outcomes and the treatment effects are drawn 5 times, and for each set of {X_it,μ_i} and their coefficients drawn, ε_it,k^0 and ε_it,k^1 are drawn 1000 times, which allows us to compute the bias and variance of the estimator conditional on the observed covariates and the unobserved individual characteristics.
To measure the performances of the estimators, we compute the biases and standard deviations for the estimates of the individual treatment effects and the average treatment effect for outcome K in the posttreatment period.
Specifically, the bias of the individual treatment effect estimator τ_iT_0+1,K is measured by 1/N∑_i=1^N1/5∑_d=1^5|𝔼(τ_iT_0+1,K^(d,s))-τ̅_iT_0+1,K^(d)|, where the superscript d denotes the dth draw of {X_it,μ_i} and s denotes the sth draw of ε_it,k^0 and ε_it,k^1, and the standard deviation is constructed as 1/N∑_i=1^N1/5∑_d=1^5√(𝔼(τ_iT_0+1,K^(d,s)-𝔼τ_iT_0+1,K^(d,s))^2).
Similarly, the bias of the average treatment effect estimator τ_T_0+1,K is measured by 1/5∑_d=1^5|𝔼(τ_T_0+1,K^(d,s))-τ̅_T_0+1,K^(d)|, and the standard deviation is constructed as 1/5∑_d=1^5√(𝔼(τ_T_0+1,K^(d,s)-𝔼τ_T_0+1,K^(d,s))^2).[The performance of the estimators can also be measured using RMSE, which is computed as 1/5∑_d=1^5√(𝔼(τ_iT_0+1,K^(d,s)-𝔼τ̅_iT_0+1,K^(d,s))^2) for τ_iT_0+1,K and 1/5∑_d=1^5√(𝔼(τ_T_0+1,K^(d,s)-𝔼τ̅_T_0+1,K^(d,s))^2) for τ_T_0+1,K. Since the biases of our estimators are small, these measures are quite similar to SD and are thus omitted from reporting.]
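The following sketch reproduces one replication of this data-generating process in Python (our own illustrative code). The constants match the description above; the treatment-assignment rule is not spelled out in this section, so assigning treatment to the first half of the sample below is a placeholder assumption.

```python
import numpy as np

def simulate(N, T0, K=5, seed=0):
    rng = np.random.default_rng(seed)
    T = T0 + 1                                     # one posttreatment period
    # two observed covariates: N(0,1) in period 1, then AR(1) with coefficient 0.9
    X = np.empty((N, T, 2))
    X[:, 0] = rng.standard_normal((N, 2))
    for t in range(1, T):
        X[:, t] = 0.9 * X[:, t - 1] + np.sqrt(1 - 0.9**2) * rng.standard_normal((N, 2))
    mu = np.column_stack([np.ones(N), rng.standard_normal((N, 2))])  # constant + 2 factors
    omega = rng.normal(1.0, 1.0, size=K)           # outcome-specific coefficient means
    eps0 = rng.standard_normal((N, T, K))
    Y0 = np.empty((N, T, K))
    tau = np.empty((N, K))
    for k in range(K):
        beta = rng.normal(omega[k], 1.0, size=(T, 2))    # coefficients on X_it
        lam = rng.normal(omega[k], 1.0, size=(T, 3))     # loadings on (1, mu_i)
        Y0[:, :, k] = np.einsum('ntj,tj->nt', X, beta) + mu @ lam.T + eps0[:, :, k]
        tau[:, k] = X[:, -1] @ rng.normal(0.5, 0.5, 2) + mu @ rng.normal(0.5, 0.5, 3)
    D = np.zeros(N, dtype=bool)
    D[: N // 2] = True                             # placeholder treatment assignment
    Y = Y0.copy()
    Y[D, -1, :] = (Y0[D, -1, :] - eps0[D, -1, :]   # keep the systematic part of Y^0
                   + tau[D] + rng.standard_normal((int(D.sum()), K)))
    return X, Y, D, tau
```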
Simulation Results on Model Selection

                         Best Set                     Model Averaging
                     ITE          ATE              ITE          ATE
N_1  N_0  T_0    P   Bias   SD    Bias   SD    P   Bias   SD    Bias   SD
50 50 1 2.2 0.151 1.225 0.082 0.384 2.3 0.231 1.163 0.120 0.336
100 100 1 2.2 0.076 0.764 0.032 0.212 2.4 0.065 0.836 0.024 0.205
200 200 1 2.1 0.038 0.476 0.004 0.127 2.6 0.046 0.712 0.011 0.133
50 50 2 2.6 0.062 0.875 0.014 0.253 2.9 0.150 0.758 0.040 0.232
100 100 2 2.7 0.035 0.685 0.003 0.165 2.9 0.073 0.563 0.014 0.159
200 200 2 3.2 0.038 0.729 0.003 0.137 4.0 0.031 0.702 0.003 0.131
* Note: This table compares the estimator using only the best set of pretreatment outcomes and the estimator constructed from model averaging, in terms of the optimal number of pretreatment outcomes selected by LOO cross-validation, as well as the bias and SD for the ITE and ATE estimates, with varying sample size and number of pretreatment periods, based on 5000 simulations for each setting.
Table <ref> compares the GMM estimator constructed using only the best set of pretreatment outcomes with that constructed by averaging estimators from different models with the same number of pretreatment outcomes.
We see that the best number of pretreatment outcomes, P, is slightly larger than the number of unobserved individual characteristics (f=2) for both estimators, and increases when the sample size is larger and when there are more pretreatment outcomes available, which is in line with our discussions in section <ref>. The estimator constructed by model averaging also tends to select a slightly larger P than the estimator using only the best set of pretreatment outcomes.
In almost all settings, the estimator using only the best set of pretreatment outcomes tends to have a smaller bias, whereas the estimator constructed from model averaging tends to have a smaller variance, except for estimating the individual treatment effects when the number of pretreatment outcomes is very small. The bias and SD also become smaller for both estimators when the sample size as well as the number of pretreatment outcomes grow.
In the following simulations, we fix P at 2 when T_0=1, and 3 when T_0=2, and construct the GMM estimator using only the best set of pretreatment outcomes, with the best set of pretreatment outcomes selected at the first simulation and used for the remaining simulations for each setting.[This is mainly to save computing time and does not fundamentally change the conclusions.]
Simulation Results for the GMM estimator

                         ITE                            ATE
N_1   N_0   T_0    Bias    SD    Coverage       Bias    SD    Coverage

Panel A: ε_it,k^0 uncorrelated across t and k
50 50 1 0.096 1.422 0.997 0.040 0.443 0.995
100 100 1 0.043 0.856 0.992 0.005 0.226 0.984
50 50 2 0.043 1.163 0.973 0.011 0.288 0.959
100 100 2 0.025 0.900 0.953 0.005 0.181 0.956

Panel B: ε_it,k^0 correlated across t and k
50 50 1 0.134 1.419 0.996 0.045 0.431 0.993
100 100 1 0.065 0.851 0.991 0.018 0.230 0.982
50 50 2 0.063 1.162 0.976 0.008 0.294 0.961
100 100 2 0.037 0.906 0.964 0.009 0.183 0.967
* Note: This table compares the bias and SD of the GMM estimator, as well as the coverage probability of the 95% confidence interval, with varying sample size and number of pretreatment periods, based on 5000 simulations for each setting.
Table <ref> reports the bias and SD of the GMM estimator, as well as the coverage probability of the 95% confidence interval, for estimating the individual treatment effects and the average treatment effect.
Panel A shows that the bias and SD of the estimators are small even with a small sample size and a small number of pretreatment outcomes. However, the 95% confidence intervals tend to have coverage probabilities above the nominal level, especially when the number of pretreatment outcomes is small. This distortion is alleviated as more pretreatment outcomes become available.
Since the validity of the GMM estimator relies on the assumption that ε_it,k are uncorrelated across time or outcomes, we examine the performance of the estimator when this assumption is violated in Panel B, where the idiosyncratic shocks follow an AR(1) process over time with autoregressive coefficient 0.1, and are correlated across outcomes by sharing a common component across different outcomes in the same period. This slightly increases the biases and SD's of the estimators, but their performance is still quite good, especially in comparison with related methods, as shown in the following tables.
Simulation Results: OLS vs. GMM

                       OLS                             GMM
                  ITE          ATE                ITE          ATE
N_1   N_0   T_0   Bias    SD   Bias    SD         Bias    SD   Bias    SD

Panel A: Linear conditional mean
100 100 1 0.099 0.622 0.041 0.186 0.113 1.297 0.086 0.257
200 200 1 0.059 0.415 0.015 0.119 0.011 0.430 0.003 0.129
100 100 2 0.046 0.677 0.012 0.158 0.026 0.901 0.003 0.179
200 200 2 0.064 0.547 0.012 0.121 0.028 0.937 0.003 0.142

Panel B: Nonlinear conditional mean
100 100 1 0.097 0.687 0.057 0.199 0.025 0.806 0.011 0.244
200 200 1 0.140 0.456 0.040 0.122 0.016 0.552 0.003 0.149
100 100 2 0.077 0.734 0.015 0.173 0.115 1.118 0.007 0.213
200 200 2 0.093 0.551 0.004 0.116 0.025 0.910 0.003 0.142
* Note: This table compares the bias and SD for the OLS estimator and the GMM estimator, with varying sample size and number of pretreatment periods, based on 5000 simulations for each setting.
Table <ref> compares our method with the OLS approach in <cit.>. In panel A, both the unobserved individual characteristics and the idiosyncratic shocks are normally distributed so that the linear conditional mean assumption is satisfied. The results show that the GMM estimator outperforms the OLS estimator by having a smaller bias in estimating both the individual treatment effects and the average treatment effect, although the variance of the GMM estimator is also larger.
In panel B, the unobserved individual characteristics are drawn from the uniform distribution, and the linear conditional mean assumption is no longer satisfied <cit.>. We see that the results are virtually unchanged for the GMM estimator, while the OLS estimator performs slightly worse by having larger biases and SD's, which is more pronounced in estimating the individual treatment effects.
The results indicate that the linear conditional mean assumption is not a very strong one. Indeed, the distribution of the sum of several random variables becomes closer to the bell shape of the normal distribution under fairly general conditions, as a result of the central limit theorem. The simulation results are very similar when the unobserved individual characteristics are drawn from a mix of other distributions.
Simulation Results: IFE vs. GMM

                       IFE                             GMM
                  ITT          ATT                ITT          ATT
N_1   N_0   T_0   Bias    SD   Bias    SD         Bias    SD   Bias    SD

Panel A: X_it constant across t
5 100 1 1.203 1.357 0.657 0.610 0.047 1.499 0.016 0.687
5 200 1 1.264 1.229 0.492 0.548 0.089 1.624 0.023 0.730
5 100 2 0.922 1.140 0.263 0.518 0.034 1.265 0.015 0.585
5 200 2 0.982 1.147 0.378 0.520 0.040 1.387 0.018 0.634

Panel B: β_t,k^0 constant across t
5 100 1 1.289 1.220 0.773 0.579 0.029 1.349 0.016 0.616
5 200 1 1.681 1.370 0.836 0.620 0.034 1.563 0.020 0.713
5 100 2 0.930 1.070 0.440 0.486 0.026 1.261 0.017 0.577
5 200 2 1.417 1.083 1.015 0.489 0.031 1.250 0.011 0.564
* Note: This table compares the bias and SD for the IFE estimator and the GMM estimator, with varying sample size and number of pretreatment periods, based on 5000 simulations for each setting.
Table <ref> compares our method with the method of estimating the interactive fixed effects model directly, which was first developed in <cit.> and then adapted into the potential outcomes framework by <cit.> to allow heterogeneous treatment effects. We fix the number of treated individuals at 5, and compare the performance of the two methods in estimating the individual treatment effect on the treated and the average treatment effect on the treated.
We consider two scenarios that are relevant in the context of empirical microeconomics. In panel A, the observed covariates are constant over time. This is plausible for covariates such as gender, race or education level, which are likely to be stable over time.
Since the IFE method requires the observed covariates to be time-varying, the covariates that are constant over time are dropped from the estimation and become part of the unobserved individual characteristics, which makes the model equivalent to a pure factor model with 4 unobserved factors. As we have 5 related outcomes, this model should still be estimable by the IFE method. However, we see that the IFE method performs poorly when there are only a small number of pretreatment outcomes to recover the unobserved individual characteristics. The bias and SD of the IFE estimator become smaller as more pretreatment outcomes are available, but are still quite large compared with our method.
To accommodate the restrictive model specification for the IFE method, we allow the covariates to be time-varying while keeping the coefficients constant over time in panel B, although the coefficients are allowed to vary across outcomes since it is unlikely that the coefficients for different outcomes would be the same in practice.
We see that the IFE estimator has poor performance since the model is still misspecified in their method, whereas the results for our method are virtually unchanged.
Simulation Results: SCM vs. GMM

                       SCM                             GMM
                  ITT          ATT                ITT          ATT
N_1   N_0   T_0   Bias    SD   Bias    SD         Bias    SD   Bias    SD

Panel A: distributions of μ_i same for treated and control
5 100 1 0.376 1.247 0.245 0.577 0.029 1.349 0.016 0.617
5 200 1 0.883 1.376 0.698 0.628 0.035 1.563 0.020 0.713
5 100 2 0.930 1.191 0.526 0.547 0.023 1.243 0.014 0.565
5 200 2 0.469 1.186 0.126 0.531 0.031 1.250 0.012 0.564

Panel B: distributions of μ_i different for treated and control
5 100 1 0.763 1.253 0.634 0.605 0.036 1.368 0.022 0.658
5 200 1 1.412 1.413 1.269 0.656 0.037 1.573 0.024 0.735
5 100 2 0.982 1.204 0.513 0.558 0.027 1.249 0.020 0.578
5 200 2 0.781 1.203 0.613 0.551 0.032 1.256 0.014 0.577
* Note: This table compares the bias and SD for the SCM estimator and the GMM estimator, with varying sample size and number of pretreatment periods, based on 5000 simulations for each setting.
Table <ref> compares our method with the synthetic control method <cit.>. In panel A, the unobserved individual characteristics for both the treated individuals and the untreated individuals are drawn from N(0,1), while in panel B, the unobserved individual characteristics for the treated individuals are drawn from N(1,1).
Since the synthetic control method requires the treated units to be in the convex hull of the control units by restricting the weights assigned to the control units to be nonnegative, their method may perform poorly when the support of the unobserved individual characteristics differs between the treated and untreated individuals.
Our method, in contrast, should be unaffected by the degree of overlap in the distributions of the unobserved individual characteristics for the two treatment groups.
The simulation results show that the synthetic control estimator indeed performs worse in panel B. Perhaps somewhat surprisingly, its performance is also poor compared with our method in panel A. This is because the coefficients are outcome-specific, so that the levels of the outcomes are also likely to vary across outcomes, which makes it more difficult to obtain a good pretreatment fit under the nonnegativity restriction. In comparison, our method has good performance in both panels.
Overall, the simulation results show that our method has good performance in terms of bias and SD in estimating the individual treatment effects and the average treatment effect under various settings, and outperforms related methods. The shortcoming of our method is that the confidence intervals tend to be too wide, especially when the number of pretreatment outcomes is small.
§ EMPIRICAL APPLICATION
We illustrate our method by estimating the effect of health insurance coverage on the individual usage of hospital emergency departments.
Although only a small proportion of the population uses hospital emergency departments, such use imposes great financial pressure on the health care system. In addition, it is not clear ex ante what the direction of the effect should be.
E.g., <cit.> argues that health insurance coverage could either increase emergency-department use by reducing its cost for the patients, or decrease emergency-department use by encouraging primary care use or improving health.
The findings on emergency-department use have been mixed.
Using survey data collected from the participants of the Oregon Health Insurance Experiment (OHIE) about a year after they were notified of the selection results, <cit.> find no discernible impact of health insurance coverage on emergency-department use.[The Oregon Health Insurance Experiment (OHIE) was initiated in 2008, targeting at low-income adults in Oregon who had been without health insurance for at least 6 months. Among the 89,824 individuals who signed up, 35,169 individuals were randomly selected by the lottery and were eligible to apply for the Oregon Health Plan (OHP) Standard program, which provided relatively comprehensive medical benefits with no consumer cost sharing, and the monthly premiums was only between $0 and $20 depending on the income. As a randomised controlled experiment, the OHIE offers an opportunity for researchers to study the effect of health insurance coverage on various health outcomes without confounding factors.]
While using the visit-level data for all emergency-department visits to twelve hospitals in the Portland area probabilistically matched to the OHIE study population on the basis of name, date of birth, and gender, <cit.> find that health insurance coverage significantly increases emergency-department use by 0.41 visits per person, from an average of 1.02 visits per person in the control group in the first 15 months of the experiment. They also examine whether the effect differs across heterogeneous groups, and find statistically significant increases in emergency-department use across most subgroups in terms of the number of pre-experiment emergency-department visits, hospital admission (inpatient or outpatient visits), timing (on-hours or off-hours visits), the type of visits (emergent and not preventable, emergent and preventable, primary care treatable, and non-emergent), as well as gender, age, and health condition.
In this application, we wish to estimate the effect of health insurance coverage on emergency-department use for each individual in the sample.
This would potentially help us better understand whether and how health insurance coverage affects emergency-department use, compared with using only the average treatment effect for the whole sample or for some preassigned subgroups (conditional average treatment effects).
Our data combines both the hospital emergency-department visit-level data and the survey data. There are two time periods, one before the randomisation and one after.[The pre-randomisation period in the hospital visit-level data was from January 2007 to March 2008, and the post-randomisation period was from March 2008 to September 2009. The two surveys were collected shortly after the randomisation and about a year after randomisation, respectively, each covering a 6-month period before the survey.]
To estimate the individual treatment effects, we include 3 observed covariates including gender, birth year, and household income as a percentage of the federal poverty line, and 10 related outcomes including different types of emergency-department visits and medical charges.
We also consider a rich list of variables on which we make comparisons for individuals with different estimated treatment effects.
There are 2154 individuals with complete information on these variables.[Note that our sample size is significantly smaller than the other studies using the OHIE data, due to the inclusion of the extensive list of variables. For example, the sample size in <cit.> is 74,922, and the sample size in <cit.> is 24,646. So our sample may not be representative of the OHIE sample and the results in different studies may not be directly comparable.]
Sample Selection
Selected Not-selected Difference Insured Not-insured Difference
(1) (2) (3) (4) (5) (6)
Female 0.59 0.60 -0.01 0.63 0.58 0.05*
Birth year 1966.24 1966.44 -0.19 1967.03 1966.09 0.95
Household income as percent of federal poverty line 79.77 75.67 4.10 53.73 86.57 -32.84***
# ED visits 0.32 0.43 -0.11** 0.48 0.33 0.15**
# outpatient ED visits 0.27 0.35 -0.09** 0.40 0.28 0.13**
# weekday daytime ED visits 0.18 0.24 -0.06** 0.27 0.19 0.08**
# emergent non-preventable ED visits 0.07 0.09 -0.02 0.11 0.07 0.04**
# emergent preventable ED visits 0.03 0.03 0.00 0.03 0.02 0.01
# primary care treatable ED visits 0.10 0.15 -0.05*** 0.15 0.12 0.04
Total charges 859.71 1276.90 -417.19* 1379.58 947.54 432.04
Total ED charges 345.48 504.64 -159.16** 494.95 396.86 98.09
# ED visits to a high uninsured volume hospital 0.17 0.22 -0.05 0.25 0.17 0.08**
# ED visits (survey) 0.24 0.30 -0.06* 0.38 0.23 0.15***
N 1103 1051 577 1577
* 1) This table compares the mean values of the covariates and related outcomes in the pretreatment period for individuals selected/not-selected by the lottery, and individuals insured/not-insured.
* 2) Significance levels of the two-sample t-test: * 10%, ** 5%, *** 1%.
The first 3 columns in Table <ref> present the mean values of the covariates and outcomes in the pretreatment period for individuals selected by the lottery and for individuals not selected by the lottery, as well as the difference between the two groups.
Since a considerable number of observations with incomplete information are dropped, being selected by the lottery is negatively correlated with different types of emergency-department visits in the pretreatment period in our sample, which suggests that the lottery assignment is not likely to be a valid instrument for health insurance coverage.
Table <ref> also compares the mean pretreatment characteristics for individuals covered by health insurance and those not covered, which shows that individuals who were covered were poorer and used the emergency department more frequently in the pretreatment period than people who were not covered by health insurance.
Figure <ref> shows the distribution of the estimated individual treatment effects using our method.[As mentioned earlier, since only individuals with complete information on the variables are selected into our sample, the distribution may not be representative of the OHIE participants. The distribution of the estimated individual treatment effects may be more spread out than the distribution of the true effects due to noise, or less spread out since the estimates are based on the parametric models for the potential outcomes, which may be over-simplifying compared with the true models.] The mean of the estimated individual treatment effects, i.e., the estimated average treatment effect, is 0.33, which is significant at the 1% level. A total of 114 individuals have treatment effects that are significant at the 10% level, of which 23 are negative and 91 are positive.[Note that if we were to adjust for multiple testing, e.g., using the Benjamini–Hochberg procedure to control the false discovery rate (FDR) at the 10% level, then we would be left with only one individual whose treatment effect is significant. The small number of individuals with significant treatment effects may, however, also be attributed to the overestimation of the variance of the individual treatment effect estimator.]
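For readers who want to replicate the multiple-testing adjustment mentioned in the footnote, a small illustrative sketch (ours, not from the paper) is given below: two-sided p-values are formed from the normal approximation with the bootstrap standard errors, and the Benjamini-Hochberg step-up rule is applied at level q.

```python
import numpy as np
from scipy.stats import norm

def bh_reject(tau_hat, se, q=0.10):
    """Return a boolean array marking effects significant after BH adjustment."""
    p = 2 * norm.sf(np.abs(tau_hat / se))       # two-sided p-values
    m = p.size
    order = np.argsort(p)
    crit = q * np.arange(1, m + 1) / m          # BH critical values
    below = p[order] <= crit
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True                    # reject the k smallest p-values
    return reject
```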
We then move on to compare the characteristics of the individuals based on their estimated treatment effects, which are presented in Table <ref>-<ref>.
Column (1) shows the mean characteristics of individuals whose treatment effects are not significant at 10% level, column (2) shows the mean characteristics of individuals whose treatment effects are significantly negative, column (3) shows the differences between column (2) and column (1), column (4) shows the mean characteristics of individuals whose treatment effects are significantly positive, and column (5) shows the differences between column (4) and column (1).
Compared with individuals who would not be significantly affected by the treatment, individuals who would significantly decrease or increase their emergency-department visits if covered by health insurance both had more emergency-department visits and more medical charges in the pretreatment period. However, these two groups were also distinct in some characteristics.
The individuals who would have fewer emergency-department visits if covered by health insurance were on average 7 years younger than individuals in the control group and 10 years younger than the positive group, more likely to be female with less education, and, in particular, were much poorer than individuals in the other groups. They were more likely to be diagnosed with depression but not other conditions. Importantly, they were less likely to have any primary care visits, and more likely to use the emergency department as their usual place for medical care.
In terms of emergency-department use in the pretreatment period, they had fewer visits resulting in hospitalisation, more outpatient visits, more preventable and non-emergent visits, more visits to hospitals with a low fraction of uninsured patients, fewer visits for chronic conditions, and more visits for injury. Although their medical charges were not as high as those for individuals in the positive group, they owed more money for medical expenses.
In comparison, individuals who would have more emergency-department visits if covered by health insurance were more likely to be older, male, and with household income right above the federal poverty line, which means that they were not as poor as the individuals in the other groups.
They were in worse health conditions, more likely to be diagnosed with diabetes and high blood pressure, and were taking more prescription medications.
They also had more emergency-department visits of all types in the pretreatment period, including visits resulting in hospitalisation and visits for more severe conditions such as chronic conditions, chest pain and psychological conditions, and they incurred more medical charges.
Overall, these comparisons suggest that the individuals who would have fewer emergency-department visits if covered by health insurance were younger and not in very poor physical condition. However, their access to primary care was limited because they were in a much more disadvantaged financial position, which made them resort to the emergency department as their usual place for medical care. In contrast, the individuals who would have more emergency-department visits if covered by health insurance were more likely to be older and in poor health. So even with access to primary care, they still used emergency departments more often for severe conditions, although sometimes for primary care treatable and non-emergent conditions as well.
All in all, it seems that both mechanisms discussed by <cit.> are playing a role. For people who used the emergency department for medical care because they did not have access to primary care services, health insurance coverage decreases emergency-department use because it increases access to primary care and may also lead to improved health. For people who had access to primary care and still used the emergency department due to worse physical conditions, health insurance coverage increases emergency-department use because it reduces the out-of-pocket cost of the visits.
This application shows the potential value of estimating individual treatment effects for policy evaluation. Our findings would not have been possible by only estimating conditional average treatment effects, as we would not be able to distinguish individuals with positive or negative treatment effects in the first place.
Comparison of Characteristics
Same Fewer Difference More Difference
(1) (2) (3) (4) (5)
Birth year 1966.42 1973.90 7.48* 1964.09 -2.33**
Female 0.61 1.00 0.39*** 0.37 -0.24***
Education 2.52 1.90 -0.62** 2.30 -0.22**
English 0.89 0.90 0.01 0.95 0.07***
Race:
White 0.75 0.50 -0.25 0.79 0.04
Hispanic 0.10 0.10 0.00 0.12 0.02
Black 0.06 0.30 0.24 0.06 0.00
Asian 0.10 0.10 0.00 0.06 -0.05*
American Indian or Alaska Native 0.04 0.10 0.06 0.07 0.03
Native Hawaiian or Pacific Islander 0.01 0.00 -0.01*** 0.01 0.00
Other races 0.07 0.10 0.03 0.06 -0.01
Employed 0.53 0.60 0.07 0.48 -0.05
Average hours worked per week 2.25 2.40 0.15 2.20 -0.05
Household income as percent of federal poverty line 76.12 28.02 -48.09*** 114.98 38.86***
Household Size (adults and children) 2.99 3.50 0.51 2.61 -0.39**
Number of family members under 19 living in house 0.88 1.00 0.12 0.56 -0.33***
Overall health 3.01 2.40 -0.61 2.84 -0.17
Health change -0.09 -0.10 -0.01 -0.08 0.01
# days physical health not good 6.97 7.60 0.63 9.49 2.52**
# days mental health not good 8.43 13.90 5.47 10.30 1.87
# days poor health impaired regular activities 6.09 7.90 1.81 8.36 2.27**
Diabetes 0.10 0.20 0.10 0.20 0.10**
Asthma 0.13 0.20 0.07 0.13 0.01
High blood pressure 0.24 0.20 -0.04 0.38 0.15***
Depression 0.37 0.70 0.33* 0.47 0.10**
Any primary care visits 0.55 0.20 -0.35** 0.57 0.02
# primary care visits 1.64 1.80 0.16 1.86 0.22
Any hospital visits 0.04 0.10 0.06 0.12 0.07**
# hospital visits 0.05 0.10 0.05 0.16 0.11**
Usual place for medical care:
Private clinic 0.18 0.00 -0.18*** 0.28 0.10**
Public clinic 0.17 0.10 -0.07 0.18 0.01
Hospital-based clinic 0.07 0.00 -0.07*** 0.08 0.01
Hospital ER 0.03 0.50 0.47** 0.06 0.03
Urgent care clinic 0.03 0.10 0.07 0.02 -0.01
Other places 0.06 0.00 -0.06*** 0.07 0.00
Don't have usual place 0.46 0.30 -0.16 0.32 -0.14***
* 1) This table shows the mean characteristics for individuals whose treatment effects are not statistically significant at 10% level (column 1), whose treatment effects are significantly negative (column 2), and whose treatment effects are significantly positive (column 4).
Column (3) contains the differences between column (2) and column (1), and column (5) contains the differences between column (4) and column (1).
* 2) Significance levels of the two-sample t-test: * 10%, ** 5%, *** 1%.
Comparison of Characteristics
Same Fewer Difference More Difference
(1) (2) (3) (4) (5)
Needed medical care 0.68 1.00 0.32*** 0.76 0.08*
Got all needed medical care 0.58 0.40 -0.18 0.56 -0.02
Reason went without care:
Cost too much 0.33 0.20 -0.13 0.31 -0.02
No insurance 0.36 0.50 0.14 0.35 -0.01
Doc wouldn't take insurance 0.01 0.00 -0.01*** 0.00 -0.01***
Owed money to provider 0.05 0.00 -0.05*** 0.10 0.05*
Couldn't get an appointment 0.03 0.00 -0.03*** 0.01 -0.02*
Office wasn't open 0.01 0.00 -0.01*** 0.00 -0.01***
Didn't have a doctor 0.11 0.20 0.09 0.10 -0.01
Other reasons 0.03 0.00 -0.03*** 0.04 0.01
Don't know 0.00 0.00 0.00*** 0.00 0.00***
Needed prescription medications 0.61 0.80 0.19 0.77 0.16***
Got all needed prescriptions 0.74 0.70 -0.04 0.64 -0.09*
Currently taking any prescription medications 0.44 0.30 -0.14 0.68 0.25***
# prescription medications taking 1.37 1.30 -0.07 2.56 1.18***
Reason went without prescription medication:
Cost too much 0.21 0.20 -0.01 0.27 0.06
No insurance 0.20 0.10 -0.10 0.20 0.00
Didn't have doctor 0.08 0.10 0.02 0.10 0.01
Couldn't get prescription 0.08 0.10 0.02 0.07 -0.01
Couldn't get to pharmacy 0.01 0.10 0.09 0.00 -0.01***
Other reasons 0.02 0.00 -0.02*** 0.04 0.02
Don't know 0.00 0.00 0.00 0.00 0.00
Needed dental care 0.70 1.00 0.30*** 0.70 0.00
Got all needed dental care 0.41 0.30 -0.11 0.37 -0.05
Any ER visits 0.14 0.90 0.76*** 0.26 0.12***
# of ER visits 0.25 2.40 2.15*** 0.45 0.20**
Used emergency room for non-emergency care 0.02 0.10 0.08 0.06 0.03
Reason went to ER:
Needed emergency care 0.05 0.80 0.75*** 0.06 0.01
Clinics closed 0.01 0.30 0.29* 0.02 0.00
Couldn't get doctor's appointment 0.02 0.20 0.18 0.02 0.00
Didn't have personal doctor 0.02 0.30 0.28 0.03 0.01
Couldn't afford copay to see a doctor 0.01 0.20 0.19 0.02 0.00
Didn't know where else to go 0.02 0.20 0.18 0.04 0.02
Other reason 0.01 0.10 0.09 0.02 0.01
Needed prescription drug 0.01 0.10 0.09 0.00 -0.01***
Don't know 0.00 0.00 0.00 0.00 0.00
Any out of pocket costs for medical care 0.65 0.80 0.15 0.71 0.06
Total out of pocket costs for medical care 5195.07 1257.00 -3938.07 1136.44 -4058.62
Borrowed money/skipped bills to pay health care bills 0.34 0.50 0.16 0.47 0.13**
Currently owe money for medical expenses 0.46 0.80 0.34** 0.71 0.25***
Total amount currently owed for medical expenses 1559.40 7354.00 5794.60** 5694.17 4134.77***
* 1) This table shows the mean characteristics for individuals whose treatment effects are not statistically significant at 10% level (column 1), whose treatment effects are significantly negative (column 2), and whose treatment effects are significantly positive (column 4).
Column (3) contains the differences between column (2) and column (1), and column (5) contains the differences between column (4) and column (1).
* 2) Significance levels of the two-sample t-test: * 10%, ** 5%, *** 1%.
Comparison of Characteristics
Same Fewer Difference More Difference
(1) (2) (3) (4) (5)
Any ED visits 0.16 0.90 0.74*** 0.84 0.68***
# ED visits 0.28 3.10 2.82*** 1.94 1.67***
Any ED visits resulting in hospitalization 0.03 0.00 -0.03*** 0.37 0.34***
# ED visits resulting in hospitalization 0.03 0.00 -0.03*** 0.62 0.59***
Any outpatient ED visits 0.15 0.90 0.75*** 0.67 0.52***
# outpatient ED visits 0.25 3.10 2.85*** 1.32 1.07***
Any weekday daytime ED visits 0.10 0.60 0.50** 0.62 0.52***
# weekday daytime ED visits 0.15 1.50 1.35** 1.14 0.99***
Any off-time ED visits 0.09 0.90 0.81*** 0.51 0.42***
# off-time ED visits 0.12 1.60 1.48*** 0.83 0.71***
# emergent non-preventable ED visits 0.05 0.17 0.12 0.66 0.61***
# emergent preventable ED visits 0.02 0.14 0.12** 0.15 0.12***
# primary care treatable ED visits 0.10 1.37 1.27*** 0.46 0.36***
# non-emergent ED visits 0.05 1.22 1.16*** 0.35 0.29***
# unclassified ED visits 0.05 0.20 0.15 0.36 0.30***
Any ambulatory case sensitive ED visits 0.01 0.00 -0.01*** 0.15 0.14***
# ambulatory case sensitive ED visits 0.02 0.00 -0.02*** 0.15 0.13***
Any ED visits to a high uninsured volume hospital 0.08 0.20 0.12 0.76 0.68***
# ED visits to a high uninsured volume hospital 0.12 0.30 0.18 1.61 1.49***
Any ED visits to a low uninsured volume hospital 0.09 0.90 0.81*** 0.19 0.10**
# ED visits to a low uninsured volume hospital 0.15 2.80 2.65*** 0.33 0.17**
Any ED visits for chronic conditions 0.03 0.30 0.27 0.36 0.32***
# ED visits for chronic conditions 0.05 0.50 0.45 0.62 0.57***
Any ED visits for injury 0.06 0.40 0.34* 0.32 0.26***
# ED visits for injury 0.07 0.40 0.33* 0.43 0.37***
Any ED visits for skin conditions 0.01 0.10 0.09 0.02 0.01
# ED visits for skin conditions 0.02 0.10 0.08 0.03 0.01
Any ED visits for abdominal pain 0.01 0.10 0.09 0.06 0.05**
# ED visits for abdominal pain 0.01 0.10 0.09 0.10 0.09**
Any ED visits for back pain 0.01 0.30 0.29* 0.04 0.03
# ED visits for back pain 0.01 0.40 0.39 0.06 0.04
Any ED visits for chest pain 0.01 0.00 -0.01*** 0.05 0.04*
# ED visits for chest pain 0.01 0.00 -0.01*** 0.05 0.04*
Any ED visits for headache 0.01 0.00 -0.01*** 0.01 0.00
# ED visits for headache 0.01 0.00 -0.01*** 0.01 0.00
Any ED visits for mood disorders 0.00 0.00 0.00** 0.09 0.08***
# ED visits for mood disorders 0.00 0.00 0.00** 0.15 0.15**
Any ED visits for psych conditions/substance abuse 0.01 0.00 -0.01*** 0.17 0.17***
# ED visits for psych conditions/substance abuse 0.01 0.00 -0.01*** 0.36 0.34***
Total ED charges 274.98 1818.44 1543.46** 3195.22 2920.24***
Total charges 639.70 2223.85 1584.16** 9260.22 8620.52***
* 1) This table shows the mean characteristics for individuals whose treatment effects are not statistically significant at 10% level (column 1), whose treatment effects are significantly negative (column 2), and whose treatment effects are significantly positive (column 4).
Column (3) contains the differences between column (2) and column (1), and column (5) contains the differences between column (4) and column (1).
* 2) Significance levels of the two-sample t-test: * 10%, ** 5%, *** 1%.
§ CONCLUSION
In this paper, we propose a method for estimating the individual treatment effects using panel data, where multiple related outcomes are observed for a large number of individuals over a small number of pretreatment periods. The method is based on the interactive fixed effects model, and allows both the treatment assignment and the potential outcomes to be correlated with the unobserved individual characteristics. Monte Carlo simulations show that our method outperforms related methods. We also provide an example of estimating the effect of health insurance coverage on individual usage of hospital emergency departments using the Oregon Health Insurance Experiment data.
There are several directions for future research.
First, our method requires the idiosyncratic shocks in the pretreatment outcomes to be uncorrelated either over time or across outcomes. It would be a valuable addition to allow (or detect and adjust for) more general dependence structure in the idiosyncratic shocks.
Second, since the residuals of the rearranged models are not estimates of the idiosyncratic shocks, the variance of our estimator may be over-estimated, especially when the number of pretreatment outcomes is small. A necessary step for future research is to correct for this bias.
Third, the repeated pretreatment set splitting and averaging approach in our method is computationally expensive. It would be an interesting direction for future research to find better ways to select related outcomes or use more flexible averaging scheme.
Fourth, the linear model specification may be restrictive. There is potential to extend our method, perhaps in combination with more flexible machine learning methods, to work with more general nonlinear outcomes.
§ PROOFS
τ̂_it,k-τ_it,k
= Ŷ_it,k^1-Ŷ_it,k^0-(Y_it,k^1-Y_it,k^0)
= Z_it'(θ̂_t,k^1-θ_t,k^1)-Z_it'(θ̂_t,k^0-θ_t,k^0)-e_it,k^1+e_it,k^0.
Given the assumptions and our models in (<ref>) and (<ref>), we have that Y_it,k, t≤ T_0, k∈𝒦 are i.i.d. for all i∈𝒯 and all i∈𝒞, and 𝔼|Y_it,k|^2<∞, so that Z_it and R_it are i.i.d. for all i∈𝒯 and all i∈𝒞, 𝔼‖Z_it‖^2<∞, and 𝔼‖R_it‖^2<∞. In addition, 𝔼(Z_itR_it') has full rank due to the observed covariates and the unobserved individual characteristics, so that 𝔼(Z_itR_it')W^1𝔼(R_itZ_it') is invertible.
Therefore, the weak law of large numbers and the continuous mapping theorem hold, and
θ̂_t,k^1-θ_t,k^1 =(Z_t^1'R_t^1W^1R_t^1'Z_t^1)^-1Z_t^1'R_t^1W^1R_t^1'e_t,k^1
=(1/N_1Z_t^1'R_t^1W^11/N_1R_t^1'Z_t^1)^-11/N_1Z_t^1'R_t^1W^11/N_1R_t^1'e_t,k^1
p→[𝔼(Z_itR_it')W^1𝔼(R_itZ_it')]^-1𝔼(Z_itR_it')W^1𝔼(R_ite_it,k^1)
=0,
as N_1→∞.
Similarly, it can be shown that θ̂_t,k^0-θ_t,k^0 p→ 0 as N_0→∞.
Since Z_it=O_p(1), we have Z_it'(θ̂_t,k^1-θ_t,k^1)=o_p(1) and Z_it'(θ̂_t,k^0-θ_t,k^0)=o_p(1).
We also have 𝔼(e_it,k^1|H_it=h_it)=0 and 𝔼(e_it,k^0|H_it=h_it)=0 under Assumption <ref>.
Under the assumptions and by the Cauchy-Schwarz inequality, there exists M^*∈[0,∞) such that 𝔼|Z_it'(θ̂_t,k^1-θ_t,k^1)|≤(𝔼‖Z_it‖^2)^1/2(𝔼‖θ̂_t,k^1-θ_t,k^1‖^2)^1/2<M^*,
𝔼|Z_it'(θ̂_t,k^0-θ_t,k^0)|<M^*,
and 𝔼| e_it,k^0-e_it,k^1|<M^*.
By the triangle inequality, 𝔼|τ̂_it,k-τ_it,k|≤𝔼|Z_it'(θ̂_t,k^1-θ_t,k^1)|+𝔼|Z_it'(θ̂_t,k^0-θ_t,k^0)|+𝔼| e_it,k^0-e_it,k^1|<3M^*, which implies that τ̂_it,k-τ_it,k is uniformly integrable. Then by Lebesgue's Dominated Convergence Theorem, convergence in probability implies convergence in means, i.e.,
lim_N_1,N_0→∞𝔼(τ̂_it,k-τ_it,k|H_it=h_it)
= 𝔼[plim_N_1,N_0→∞(τ̂_it,k-τ_it,k)|H_it=h_it]
= 𝔼(e_it,k^0-e_it,k^1|H_it=h_it)
= 0.
Under Assumptions <ref>-<ref>, central limit theorem applies, and we have
1/N∑_i=1^N(τ̂_it,k-τ_it,k)
= 1/N∑_i=1^NZ_it'(θ̂_t,k^1-θ_t,k^1)-1/N∑_i=1^NZ_it'(θ̂_t,k^0-θ_t,k^0)-1/N∑_i=1^N(e_it,k^1-e_it,k^0)
= O_p(N_1^-1/2)+O_p(N_0^-1/2)+O_p((N_1+N_0)^-1/2)
= O_p(N_1^-1/2)+O_p(N_0^-1/2).
Since 1/N∑_i=1^Nτ_it,k-𝔼(τ_it,k)=O_p((N_1+N_0)^-1/2), we have that 1/N∑_i=1^Nτ̂_it,k-τ_t,k=1/N∑_i=1^Nτ̂_it,k-1/N∑_i=1^Nτ_it,k+1/N∑_i=1^Nτ_it,k-τ_t,k=O_p(N_1^-1/2)+O_p(N_0^-1/2).
(i)
τ̃_it,k-τ_it,k= Z_it'(θ̂_t,k^*1-θ_t,k^*1)-Z_it'(θ̂_t,k^*0-θ_t,k^*0)-(u_it,k^1-u_it,k^0).
Since 𝔼(u_it,k^1|Z_it)=0 and under Assumption <ref>, we have
𝔼(θ̂_t,k^*1-θ_t,k^*1|Z_t)=0, and
𝔼(θ̂_t,k^*1-θ_t,k^*1|Z_it)=𝔼_-i[𝔼(θ̂_t,k^*1-θ_t,k^*1|Z_t)]=0, where 𝔼_-i(·) denotes the expectation taken with respect to Z_jt, j≠ i.
Similarly, we have 𝔼(θ̂_t,k^*0-θ_t,k^*0|Z_it)=0.
Thus, 𝔼(τ̃_it,k-τ_it,k|Z_it=z_it)=0. It follows that 𝔼(τ̃_t,k-τ_t,k)=0 using the law of iterated expectations.
(ii)
1/N∑_i=1^N(τ̃_it,k-τ_it,k)
= 1/N∑_i=1^N[Z_it'(θ̂_t,k^*1-θ_t,k^1)+γ_t,k^1'ε_i^𝒫-ε_it,k^1] - 1/N∑_i=1^N[Z_it'(θ̂_t,k^*0-θ_t,k^0)+γ_t,k^0'ε_i^𝒫-ε_it,k^0]
= 1/N∑_i=1^N[Z_it'(∑_j∈𝒯Z_jtZ_jt')^-1∑_j∈𝒯Z_jt(ε_jt,k^1-γ_t,k^1'ε_j^𝒫)+γ_t,k^1'ε_i^𝒫-ε_it,k^1]
-1/N∑_i=1^N[Z_it'(∑_j∈𝒞Z_jtZ_jt')^-1∑_j∈𝒞Z_jt(ε_jt,k^0-γ_t,k^0'ε_j^𝒫)+γ_t,k^0'ε_i^𝒫-ε_it,k^0].
The following two statements hold:
1/N∑_i=1^NZ_it'(∑_j∈𝒯Z_jtZ_jt')^-1∑_j∈𝒯Z_jtε_jt,k^1 =O_p(N_1^-1/2),
1/N∑_i=1^NZ_it'(∑_j∈𝒞Z_jtZ_jt')^-1∑_j∈𝒞Z_jtε_jt,k^0 =O_p(N_0^-1/2).
Following similar arguments as in <cit.>, we denote Δ̃_i^1=γ_t,k^1'ε_i^𝒫-ε_it,k^1-Z_it'(∑_j∈𝒯Z_jtZ_jt')^-1∑_j∈𝒯Z_jtγ_t,k^1'ε_j^𝒫, and Δ_i^1=γ_t,k^1'ε_i^𝒫-ε_it,k^1-Z_it'𝔼(Z_itZ_it')^-1𝔼(Z_itγ_t,k^1'ε_i^𝒫).
We have that 𝔼(Z_itΔ_i^1)=0.
Since Z_it contains constant 1, it follows that
𝔼(Δ_i^1)=0.
Thus 1/N∑_i=1^NΔ̃_i^1p→𝔼(Δ_i^1)=0 as N_1→∞, and 1/N∑_i=1^NΔ̃_i^1=O_p(N_1^-1/2).
Similarly, we have 1/N∑_i=1^NΔ̃_i^0=O_p(N_0^-1/2).
Thus, 1/N∑_i=1^Nτ̃_it,k-1/N∑_i=1^Nτ_it,k=O_p(N_1^-1/2)+O_p(N_0^-1/2), and 1/N∑_i=1^Nτ̃_it,k-τ_t,k=O_p(N_1^-1/2)+O_p(N_0^-1/2).
|
http://arxiv.org/abs/2306.07159v1
|
20230612144621
|
On the Computation-Communication Trade-Off with A Flexible Gradient Tracking Approach
|
[
"Yan Huang",
"Jinming Xu"
] |
math.OC
|
[
"math.OC",
"cs.DC",
"cs.LG"
] |
Looking Around Corners: Generative Methods in Terrain Extension
Alec Reed
Department of Computer Science
University of Colorado Boulder
[email protected]
Christoffer Heckman
Department of Computer Science
University of Colorado Boulder
[email protected]
July 31, 2023
We propose a flexible gradient tracking approach with adjustable computation and communication steps for solving distributed stochastic optimization problem over networks. The proposed method allows each node to perform multiple local gradient updates and multiple inter-node communications in each round, aiming to strike a balance between computation and communication costs according to the properties of objective functions and network topology in non-i.i.d. settings.
Leveraging a properly designed Lyapunov function, we derive both the computation and communication complexities for achieving arbitrary accuracy on smooth and strongly convex objective functions. Our analysis demonstrates sharp dependence of the convergence performance on graph topology and properties of objective functions, highlighting the trade-off between computation and communication. Numerical experiments are conducted to validate our theoretical findings.
§ INTRODUCTION
With the proliferation of individual computing devices and locally collected user data <cit.>, distributed optimization methods have become increasingly popular in recent years due to their wide applications in various fields such as cooperative control <cit.>, distributed sensing <cit.>, and large-scale machine learning <cit.>, to name just a few. In this paper, we consider the standard distributed stochastic optimization problem jointly solved by n nodes over a network:
min_x∈ℝ^p f( x ) =1/n∑_i=1^n f_i( x ), with f_i( x ) :=𝔼_ξ_i∼𝒟_i[ f_i( x;ξ_i ) ],
where x∈ℝ^p is the global decision variable and the objective function f: ℝ^p→ℝ is the sum-utility of the n local objective functions f_i, each conditioned on the local data sample ξ_i with distribution 𝒟_i. As a promising approach, parallel/decentralized stochastic gradient descent (SGD) <cit.> has been shown to be a simple yet efficient algorithm for solving the above distributed stochastic optimization problem (<ref>) under certain scenarios. However, parallel/decentralized SGD may not ensure good performance in the presence of node-specific heterogeneity arising from imbalanced data sets, communication and computing resources <cit.>.
To avoid a high communication burden among nodes, a variety of communication-efficient methods have been studied in the optimization and machine learning communities.
Particularly, Federated Averaging (FedAvg) <cit.> (a.k.a. Local SGD <cit.>), as a variant of parallel SGD with parameter server (PS) architecture <cit.>, has been widely used in federated learning, which executes multiple local updates between two consecutive communication steps with partial or full node participation to save communication cost. The effectiveness of FedAvg/Local SGD for independent and identically distributed (i.i.d.) datasets has been extensively studied in the literature <cit.>. For instance, it has been shown in <cit.> that Local SGD can outperform centralized mini-batch SGD <cit.> for quadratic objectives and certain convex cases. We note that these methods with PS architecture all require a central server for data aggregation, which may suffer from a single point of failure and communication bottlenecks <cit.>. To address this issue, many decentralized SGD methods have been proposed for solving Problem (<ref>) over peer-to-peer networks <cit.>. In general, gossip-based communication protocols <cit.> are popular choices for distributed algorithms. For example, Lian et al. <cit.> proposed D-PSGD where each node communicates only with its neighbouring nodes to reach consensus during the optimization process, and, in <cit.>, only a subset of the nodes are activated in each round to exchange information, thus reducing communication costs.
While communication cost is a key concern in distributed optimization, it is equally important to ensure that the accuracy of the algorithm is not significantly compromised in practical scenarios. In particular, when it comes to non-i.i.d. settings where the data distributions across nodes are heterogeneous, the adoption of local updates, partial participation and gossip protocols in parallel/decentralized SGD methods will amplify the effect of data heterogeneity, yielding poor algorithmic performance <cit.>, and thus many variants have been proposed to address this issue. For instance, gradient estimation techniques <cit.> and primal-dual-like methods <cit.> have been shown to be effective for tackling data heterogeneity among nodes. In particular, Pu et al. <cit.> proposed a distributed stochastic gradient tracking (DSGT) method by introducing an auxiliary variable for each node to track the average gradient of local functions. To further reduce the communication cost, Nguyen et al. <cit.> proposed a variant of DSGT, termed LU-GT, employing multiple local updates, and they provided the communication complexity matching Local SGD for non-convex objective functions. Building on this, Liu et al. <cit.> proposed another variant adopting gradient-sum-tracking that further reduces the communication complexity with reduced stochastic gradient variance.
However, the results in both <cit.> ignore a side effect of local updates, namely that they amplify the stochastic gradient noise and thereby increase the computation complexity.
The readers are referred to the recent survey papers <cit.> for many other efforts to improve communication efficiency.
However, most existing distributed algorithms mainly focus on reducing communication costs without taking into account the acceleration of computation processes. To account for both computation and communication complexity, Berahas et al. <cit.> proposed a variant of the deterministic gradient descent method with multiple communication steps at each iteration, named NEAR-DGD, and they evaluated the performance of the algorithm via a new metric accounting for both communication and computation complexity, and showed that employing multiple consensus steps is desirable when communication is relatively cheap. Building on this, a stochastic variant called S-NEAR-DGD was proposed in <cit.> to accelerate the computation process. More recently, Liu et al. <cit.> proposed a decentralized federated learning algorithm, named DFL, that employs multiple local updates and inter-node communication during each round and analyzed the impact of communication and computation on the performance of the algorithm separately.
Although the aforementioned algorithms enjoy theoretical guarantees in certain scenarios, their applicability is limited by the assumption of uniformly bounded gradients. Moreover, they tend to be effective only with i.i.d. datasets and may face challenges with data heterogeneity in non-i.i.d. settings.
In this paper, we propose and analyze a flexible gradient tracking approach that employs adjustable communication and computation protocols for solving problem (<ref>) in non-i.i.d. scenarios.
The main contributions are summarized as follows: i) We develop a flexible gradient tracking method (termed FlexGT) employing multiple local updates and multiple inter-node communications, which enables it to deal with data heterogeneity and allows customized protocols that balance communication and computation costs according to the properties of objective functions and network topology; ii) Our proposed algorithm comes with theoretical guarantees, including both communication and computation complexity analysis for strongly convex and smooth objective functions. In particular, we show that the proposed FlexGT algorithm converges linearly to a neighborhood of the optimal solution regardless of data heterogeneity, which recovers the best known result of DSGT <cit.> as a special case under our settings.
More importantly, the complexity results shed light on adjusting the communication and computation frequencies according to the problem characteristics to achieve a better trade-off. This is in contrast to the existing works <cit.> that focus merely on communication or computation.
Notations. Throughout this paper, we adopt the following notations: ‖·‖ represents the Frobenius norm, 𝔼[ ·] denotes the expectation of a matrix or vector, 1 represents the all-ones vector, 𝐈 denotes the identity matrix, and 𝐉=11^T/n denotes the averaging matrix.
§ PROBLEM FORMULATION AND ALGORITHM DESIGN
We consider solving the distributed stochastic optimization problem (<ref>) where n agents are connected over a graph 𝒢=(𝒱,ℰ). Here, 𝒱={1,2,...,n} represents the set of agents, and ℰ⊆𝒱×𝒱 denotes the set of edges consisting of ordered pairs (i,j) representing the communication link from node j to node i. For node i, we define 𝒩_i= { j | ( i, j ) ∈ℰ} as the set of its neighboring nodes. We then make the following blanket assumptions on the objective function and the graph.
(Convexity and smoothness)
Each f_i( x) is μ-strongly convex and L-smooth in x.
Then, we assume each agent i obtains an unbiased noisy gradient of the form ∇ f_i( x; ξ _i ) by querying a stochastic oracle (𝒮𝒪), which satisfies the following assumption.
(Bounded variance)
For any x∈ℝ^p, there exists σ⩾ 0 such that
𝔼[ ‖∇ f_i( x;ξ _i ) -∇ f_i( x ) ‖^2 ] ⩽σ ^2,
where the random sample ξ_i is generated from 𝒮𝒪.
(Graph connectivity)
The weight matrix W induced by graph 𝒢 is doubly stochastic, i.e., W1=1, 1^TW=1^T, and ρ _W:= ‖W-𝐉‖_2 ^2 <1.
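For concreteness, a weight matrix satisfying Assumption <ref> can be built from any connected, undirected communication graph with the Metropolis-Hastings rule. The following Python sketch is our own illustration (function names and constants are assumptions, not from the paper):

import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic weights from a 0/1 adjacency matrix (Metropolis rule)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j] > 0:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i, :].sum()
    return W  # symmetric, rows sum to one, positive diagonal

For a connected graph this W is symmetric and doubly stochastic with a positive diagonal, so ρ_W = ‖W-𝐉‖_2^2 < 1 as required.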
Now, we proceed to present the proposed FlexGT algorithm for solving problem (<ref>), which is given in Algorithm <ref>.
For simplicity, we introduce the following notations:
X_t :=[ x_1,t,x_2,t,⋯ ,x_n,t] ^T∈ℝ^n× p,
Y_t :=[ y_1,t,y_2,t,⋯ ,y_n,t] ^T∈ℝ^n× p,
∇ G_t :=[ ⋯ ,∇ f_i( x_i,t;ξ _i,t) ,⋯] ^T∈ℝ ^n× p,
∇ F_t :=𝔼[ ∇ G_t ] =[ ⋯ ,∇ f_i( x_i,t) ,⋯] ^T∈ℝ ^n× p.
Then, Algorithm <ref> can be rewritten in a compact form:
X_d_2( k+1 ) =W^d_1( X_d_2k-γ∑_j=0^d_2-1Y_d_2k+j) ,
Y_d_2( k+1 ) =W^d_1( Y_d_2k+∇ G_d_2( k+1 )-∇ G_d_2k),
where the integers d_1, d_2 ∈[ 1,∞) denote the communication and computation frequencies in each round, respectively.
The flexibility of the proposed FlexGT algorithm lies in the adjustable communication and computation protocol with respect to d_1 and d_2, which allows for customized multiple local updates and message exchanges over the network in each round according to the properties of the objective functions and the network topology. The algorithm can also eliminate the effect of data heterogeneity among nodes thanks to the gradient tracking scheme. It should also be noted that FlexGT recovers the standard DSGT <cit.> algorithm (d_1=d_2=1) and LU-GT <cit.> (d_1=1, d_2⩾1) as special cases, and has the potential to recover S-NEAR-DGD <cit.> and DFL <cit.> as well if an independent weight matrix for the update of Y is employed.
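To make the recursion concrete, the following NumPy sketch implements one possible realization of the compact form above; it is our own illustration (the gradient-oracle interface and variable names are assumptions, not the authors' code). Each round performs d_2 local gradient-tracking updates followed by d_1 gossip steps:

import numpy as np

def flexgt(grad, X0, W, gamma, d1, d2, rounds, rng):
    """Sketch of the FlexGT recursion in the compact form above.
    grad(X, rng) returns an n x p matrix of stochastic gradients (one row per node)."""
    X = X0.copy()
    G = grad(X, rng)                        # initial stochastic gradients
    Y = G.copy()                            # tracker initialized at the local gradients
    Wd1 = np.linalg.matrix_power(W, d1)     # d1 consecutive gossip steps
    for _ in range(rounds):
        G0, Xl, Ysum = G, X.copy(), np.zeros_like(X)
        for t in range(d2):                 # d2 local updates per round
            Yt = Y + G - G0                 # Y_{d2 k + t} = Y_{d2 k} + grad_t - grad_0
            Ysum += Yt
            Xl = Xl - gamma * Yt
            if t < d2 - 1:
                G = grad(Xl, rng)
        X = Wd1 @ (X - gamma * Ysum)        # communicate the accumulated update
        G = grad(X, rng)                    # gradient at the new (mixed) iterate
        Y = Wd1 @ (Y + G - G0)              # gradient-tracking correction
    return X

Setting d_1=d_2=1 in this sketch reduces the loop to the standard DSGT update.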
§ MAIN RESULTS
In this section, we present the main convergence results of
the proposed FlexGT algorithm for strongly-convex and smooth objective functions. To this end, we first define the following Lyapunov function consisting of optimality gap, consensus error and gradient tracking error:
V_t:=‖x̅_t-x^* ‖^2+c_1 ‖X_t-1x̅_t ‖^2+c_2 ‖Y_t-1y̅_t ‖^2,
where t=d_2k, c_1 and c_2 are coefficients to be properly designed later, and
x̅_t:=1^T/nX_t, y̅_t:=1^T/nY_t.
With these definitions, we are ready to present the main convergence results of the proposed FlexGT algorithm.
Suppose the Assumptions <ref>, <ref> and <ref> hold. Let the stepsize satisfy
γ⩽min{1/10d_2L,1-ρ _W^d_1/37d_2L√(ρ _W^d_1), ( 1-ρ _W^d_1) ^2/153d_2L√(ρ _W^d_1)}.
Then, we have for all k⩾0,
𝔼[ V_d_2( k+1 )]
⩽( 1-min{μ d_2 γ/4, 1-ρ _W^d_1/8}) 𝔼[ V_d_2k]
+d_2γ ^2σ ^2/n+d_2^3γ ^3L/( 1-ρ _W^d_1) ^3M_σ,
where M_σ can be found in (<ref>).
See Section <ref>.
Under the same setting as Theorem <ref>, the number of computation steps needed to achieve an arbitrarily small accuracy ε > 0 is
𝒪̃( d_2L/( 1-ρ _W^d_1) ^2μ+σ ^2/μ ^2 n ε+d_2√(Lρ _W^d_1σ ^2)/√(μ ^3( 1-ρ _W^d_1) ^3ε)) ,
and the number of communication steps needed is
𝒪̃( d_1L/( 1-ρ _W^d_1) ^2μ+d_1σ ^2/μ ^2 n d_2ε+d_1√(Lρ _W^d_1σ ^2)/√(μ ^3( 1-ρ _W^d_1) ^3ε)),
where the notation 𝒪̃(·) hides the logarithmic factors.
We omit the proof due to the space limitation. The techniques used to adapt Theorem <ref> to the complexity results are common and can be found in <cit.>.
Theorem <ref> shows that the proposed FlexGT algorithm converges linearly to a neighborhood of the optimal solution of Problem (<ref>), depending on the problem characteristics as well as the frequency of communication and computation. Particularly, this result implies that
increasing d_2 can result in a smaller stepsize and smaller steady-state error, while increasing d_1 can lead to a faster linear rate but also a significant increase in communication overhead.
To further illustrate this insight, we derive the computation and communication complexity of FlexGT in Corollary <ref>. Notably, this result demonstrates a trade-off between the computation and communication complexity: as the computation (resp. communication) frequency d_2 (resp. d_1) increases, the computation complexity will increase (resp. decrease), while the communication complexity will decrease (resp. increase or decrease depending on ρ_W), thereby calling for a careful design of d_1 and d_2 to balance communication and computation costs based on the problem setting. Moreover, compared to the existing results in <cit.>, the convergence of FlexGT is independent of the data heterogeneity among nodes, captured as ζ _f:=𝐬𝐮𝐩_x{1/n∑_i=1^n‖∇ f_i( x ) -∇ f( x )‖ ^2} <cit.>, does not require the assumption of uniformly bounded stochastic gradients, and is thus more robust in non-i.i.d. settings.
§ CONVERGENCE ANALYSIS
In this section, we carry out the convergence analysis for the main results. We begin by introducing the following key lemmas that are crucial for the proof of Theorem <ref>.
§.§ Key Lemmas
[Bounding optimality gap]
Suppose Assumptions <ref>, <ref> and <ref> hold. Let the stepsize satisfy γ⩽1/10d_2L. Then, we have for ∀ k ⩾ 0,
𝔼[ x̅_d_2( k+1 )-x^* ^2 ]
⩽( 1-d_2μγ/2) 𝔼[ x̅_d_2k-x^* ^2 ]
+12d_2γ L/n𝔼[ X_d_2k-1x̅_d_2k ^2 ]
+12d_2^3γ ^3L/n𝔼[ Y_d_2k-1y̅_d_2k ^2 ]
-d_2γ𝔼[ f( x̅_d_2k) -f( x^* ) ] +d_2γ ^2σ ^2/n+36d_2^3γ ^3Lσ ^2.
See Appendix <ref>.
[Bounding consensus error]
Suppose Assumptions <ref>, <ref> and <ref> hold. Let the stepsize satisfy
γ⩽min{1/8d_2L, 1-ρ _W^d_1/12d_2L√(ρ _W^d_1)}.
Then, we have for ∀ k ⩾ 0,
𝔼[ X_d_2( k+1 )-1x̅_d_2( k+1 ) ^2 ]
⩽3+ρ _W^d_1/4𝔼[ X_d_2k-1x̅_d_2k ^2 ]
+192nd_2^4γ ^4L^3ρ _W^d_1/1-ρ _W^d_1𝔼[ f( x̅_d_2k) -f( x^* ) ]
+6d_2^2γ ^2ρ _W^d_1/1-ρ _W^d_1𝔼[ Y_d_2k-1y̅_d_2k ^2 ] +18nd_2^2γ ^2ρ _W^d_1/1-ρ _W^d_1σ ^2.
See Appendix <ref>.
[Bounding tracking error]
Suppose Assumptions <ref>, <ref> and <ref> hold. Let the stepsize satisfy
γ⩽{1/8d_2L, 1-ρ _W^d_1/10d_2L√(ρ _W^d_1)}.
Then, we have for ∀ k ⩾ 0,
𝔼[ Y_d_2( k+1 )-1y̅_d_2( k+1 ) ^2 ]
⩽3+ρ _W^d_1/4𝔼[ Y_d_2k-1y̅_d_2k ^2 ]
+30ρ _W^d_1L^2/1-ρ _W^d_1𝔼[ X_d_2k-1x̅_d_2k ^2 ]
+96nρ _W^d_1d_2^2γ ^2L^3/1-ρ _W^d_1𝔼[ f( x̅_d_2k) -f( x^* ) ] +6nρ _W^d_1σ ^2.
See Appendix <ref>.
§.§ Proofs of Main Results
§.§.§ Proof of Theorem <ref>
Recalling the defined Lyapunov function (<ref>) with
c_1=192d_2γ L/n( 1-ρ _W^d_1), c_2=9312d_2^3γ ^3L/n( 1-ρ _W^d_1) ^3.
Then, with the help of Lemma <ref>, <ref> and <ref>, we can bound the Lyapunov function as follows:
𝔼[ V_d_2( k+1 )]
⩽( 1-min{γμ d_2/4,1-ρ _W^d_1/8}) 𝔼[ V_d_2k]
+d_2γ ^2σ ^2/n+d_2^3γ ^3L/( 1-ρ _W^d_1) ^3M_σ+e_1𝔼[ f( x̅_d_2k) -f( x^* ) ]
+e_2𝔼[ X_d_2k-1x̅_d_2k ^2 ] +e_3𝔼[ Y_d_2k-1y̅_k ^2 ] ,
where
e_1 :=c_1192nd_2^4γ ^4L^3ρ _W^d_1/1-ρ _W^d_1+c_296nρ _W^d_1d_2^2γ ^2L^3/1-ρ _W^d_1-d_2γ ,
e_2 :=12d_2γ L/n+c_230ρ _W^d_1L^2/1-ρ _W^d_1-c_11-ρ _W^d_1/8,
e_3 :=12d_2^3γ ^3L/n+c_16d_2^2γ ^2ρ _W^d_1/1-ρ _W^d_1-c_21-ρ _W^d_1/8,
and
M_σ :=36( 1-ρ _W^d_1) ^3σ ^2+3456( 1-ρ _W^d_1) ρ _W^d_1σ ^2
+55872ρ _W^d_1σ ^2.
Letting e_1, e_2, e_3⩽ 0 with the stepsize γ satisfying (<ref>), we obtain
the result in (<ref>), which completes the proof.
The proof of the main results relies on double-loop analysis and a properly designed Lyapunov function as depicted in (<ref>). Specifically, we first derive the upper bounds of consensus error and gradient tracking error within a period (inner loop), respectively (c.f., Lemmas <ref> and <ref>). These bounds are then used to establish the contraction properties of the consensus error and gradient tracking error at each round (outer loop) (c.f., Lemmas <ref> and <ref>). Finally, we combine these obtained error terms to construct the Lyapunov function with properly designed coefficients (<ref>), which allows us to obtain a rate result that shows the sharp dependence of convergence performance on the properties of objective functions, network topology as well as the frequency of communication and computation.
It should be noted that this result also matches the best-known rate of the DSGT <cit.> (a special case with d_1=d_2=1) under the same setting, i.e., p=c=1-ρ _W.
§ NUMERICAL EXPERIMENTS
In this section, we report a series of numerical experiments to verify the theoretical findings of the proposed FlexGT algorithm by means of a synthetic example. Specifically, we consider the following quadratic function:
min_x f( x ) =1/n∑_i=1^n f_i( x ),    f_i( x ) := 𝔼 _v_i[ ( h_i^Tx-v_i ) ^2 ] +μ/2 ‖x‖ ^2,
where μ⩾0 is the regularization parameter, h_i∈[ 0,1 ] ^p denotes the feature parameters of node i with dimension p=10, and v_i∼𝒩( v̅_i,σ ^2 ) with v̅_i∈[ 0,1 ]. Therefore, the algorithm can obtain an unbiased noisy gradient g_i( x_i,t) :=∇ f_i( x_i,t) +δ _i,t with δ _i,t∼𝒩( 0,σ^2 ) at each iteration t.
Moreover, we set the stepsize according to the choice of d_1 and d_2, i.e., γ =c( 1-ρ _W^d_1) ^2/( d_2L ), where c=10 is a constant and Lipschitz constant is set to L=1.
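A minimal stochastic oracle for this synthetic problem, compatible with the flexgt sketch given earlier, could look as follows (our own illustration; here the gradient noise enters through the sampled v_i, which is unbiased and Gaussian, matching g_i(x_i,t)=∇ f_i(x_i,t)+δ_i,t up to its covariance structure):

import numpy as np

def make_quadratic_oracle(n=20, p=10, mu=0.1, sigma=0.1, seed=0):
    """Heterogeneous quadratics f_i(x) = E_v[(h_i^T x - v_i)^2] + (mu/2)||x||^2."""
    rng0 = np.random.default_rng(seed)
    H = rng0.uniform(0.0, 1.0, size=(n, p))      # feature vectors h_i (rows)
    vbar = rng0.uniform(0.0, 1.0, size=n)        # means of the noisy targets v_i

    def grad(X, rng):
        v = vbar + sigma * rng.standard_normal(n)        # v_i ~ N(vbar_i, sigma^2)
        res = np.einsum('ij,ij->i', H, X) - v            # h_i^T x_i - v_i, per node
        return 2.0 * res[:, None] * H + mu * X           # unbiased: E[grad] = grad f_i(x_i)

    return grad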
Communication and computation trade-off.
To balance the communication and computation costs in practice, we attempt to minimize the weighted-sum of the obtained communication and computation complexity results:
min_{d_1, d_2}( ω _1𝒞 _1+ω _2𝒞 _2 ),
where 𝒞 _1 and 𝒞 _2 represent the obtained communication and computation complexity, respectively, and ω _1 and ω _2 are the corresponding weights. We note that 𝒞 _1=𝒞 _2 d_1/d_2, and thus plot the heat-map of the weighted complexity of FlexGT to achieve ε = 10^-5 accuracy with different d_1 and d_2 under specific settings in Fig. <ref>. It can be observed that the overall complexity of FlexGT varies with the increase of d_1 or d_2 and achieves the best performance at d_1=3 and d_2=2 (see the green box on the left). Moreover, if we keep the ratio of d_2 and d_1 fixed, i.e., d_2/d_1=d, as shown on the right, there exists an optimal ratio that minimizes the weighted-sum complexity (<ref>). These observations illustrate the trade-off between communication and computation conditioned on the properties of the objective functions and the graph topology.
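Such a scan is straightforward to reproduce by evaluating the complexity bounds of the corollary (up to constants and logarithmic factors) on a grid of (d_1, d_2) and minimizing the weighted sum; the sketch below is our own illustration with arbitrary problem constants:

import numpy as np

def complexity_bounds(d1, d2, L, mu, rho, sigma2, n, eps):
    """Computation/communication complexity of the corollary, up to constants/logs."""
    r = rho ** d1                                   # rho_W^{d1}
    comp = (d2 * L / ((1.0 - r) ** 2 * mu)
            + sigma2 / (mu ** 2 * n * eps)
            + d2 * np.sqrt(L * r * sigma2 / (mu ** 3 * (1.0 - r) ** 3 * eps)))
    comm = comp * d1 / d2                           # C_1 = C_2 * d1 / d2
    return comm, comp

def best_pair(w1, w2, grid=range(1, 11), **prob):
    """Grid search for the (d1, d2) minimizing w1*C_1 + w2*C_2, as in the text."""
    costs = {(d1, d2): w1 * c1 + w2 * c2
             for d1 in grid for d2 in grid
             for c1, c2 in [complexity_bounds(d1, d2, **prob)]}
    return min(costs, key=costs.get)

print(best_pair(1.0, 0.1, L=1.0, mu=0.1, rho=0.9, sigma2=1.0, n=20, eps=1e-5))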
Eliminating the effects of node heterogeneity.
In Fig. <ref>, we compare the convergence performance of the proposed FlexGT algorithm with DFL <cit.> and their special cases DSGT (FlexGT with d_1=d_2=1) and D-PSGD (DFL with d_1=d_2=1) in terms of computation and communication steps. We note that there is heterogeneity among the objectives f_i of the nodes due to the differences in { h_i } _{i=1}^n. In this scenario, it can be observed that FlexGT has a significant advantage over DFL in terms of steady-state error thanks to the gradient tracking method. Moreover, the choice of d_1=3 and d_2=2 allows FlexGT to achieve better computation and communication complexity to reach an accuracy of ε=10^-5.
§ CONCLUSIONS
In this paper, we proposed a flexible and efficient distributed stochastic gradient tracking method, FlexGT, for solving distributed stochastic optimization problems under non-i.i.d. settings. Our approach is able to handle data heterogeneity and allows adjustable protocols to balance communication and computation costs according to the properties of the objective functions and the network topology. We also provided theoretical guarantees for the proposed algorithm, including communication and computation complexity analysis for strongly convex and smooth objective functions, regardless of the heterogeneity among nodes. These results provide an intuitive way to tune communication and computation protocols, highlighting their trade-offs. It will also be important to extend these results to more general settings, such as non-convex and time-varying cases, in future work.
§ APPENDIX
In this section, we provide the missing proofs for the lemmas and theorem in the main text. To this end, we first provide several supporting lemmas for the analysis.
§.§ Supporting Lemmas
[Bernoulli's inequality]
For constants β⩾ 1 and λ>0, we have
( 1+λ/β) ^β⩽ e^λ.
[Bounding client divergence]
Suppose Assumptions <ref>, <ref> and <ref> hold. Let the stepsize satisfy γ⩽1/8d_2L. Then, we have for k⩾ 0 and integer t∈[ 1,d_2-1 ], d_2⩾2,
𝔼[ X_d_2k+t-1x̅_d_2k ^2 ]
⩽ 4𝔼[ X_d_2k-1x̅_d_2k ^2 ] +4d_2^2γ ^2𝔼[ Y_d_2k-1y̅_k ^2 ]
+16nd_2^2γ ^2L𝔼[ f( x̅_d_2k) -f( x^* ) ] +d_2^2γ ^2( 4σ ^2+8nσ ^2 ).
By the x-update of FlexGT (<ref>), noticing that Y_d_2k+t=Y_d_2k+∇ G_d_2k+t-∇ G_d_2k, we have
𝔼[ X_d_2k+t-1x̅_d_2k ^2 ]
( a )⩽( 1+β) 𝔼[ X_d_2k+t-1-1x̅_d_2k ^2 ]
+( 1+1/β) γ ^2𝔼[ Y_d_2k+∇ G_d_2k+t-1-∇ G_d_2k ^2 ]
( b )⩽( 1+β) 𝔼[ X_d_2k+t-1-1x̅_d_2k ^2 ] +8( 1+1/β) γ ^2nσ ^2
+4( 1+1/β) γ ^2( 𝔼[ Y_d_2k-1y̅_k ^2 ] +𝔼[ 1y̅_k ^2 ] )
+4( 1+1/β) γ ^2𝔼[ ∇ F_d_2k+t-1-∇ F_d_2k ^2 ]
( c )⩽( 1+β +4( 1+1/β) γ ^2L^2 ) ^t𝔼[ X_d_2k-1x̅_d_2k ^2 ]
+4t( 1+1/β) γ ^2𝔼[ Y_d_2k-1y̅_k ^2 ]
+4t( 1+1/β) nγ ^2𝔼[ y̅_k ^2 ] +8t( 1+1/β) γ ^2nσ ^2,
where we have used Young's inequality with parameter β in (a), the bounded stochastic variance of Assumption <ref> in (b) and the smoothness of f_i of Assumption <ref> in (c). Then, by Lemma <ref> and letting β =1/(d_2-1), γ⩽ 1/(8d_2L), we get
( 1+1/d_2-1+4d_2L^2γ ^2 ) ^t⩽ e^1+1/16<3.
Noticing that
𝔼[ y̅_k ^2 ] ⩽2L^2/n[ X_d_2k-1x̅_d_2k ^2 ]
+4L𝔼[ f( x̅_d_2k) -f( x^* ) ] +σ ^2/n,
and t < d_2, we complete the proof.
[Bounding tracking error within a period]
Suppose Assumptions <ref>, <ref> and <ref> hold. Let the stepsize satisfy γ⩽1/8d_2L. Then,
we have for k⩾ 0 and integer t∈[ 1,d_2-1 ], d_2⩾2,
𝔼[ Y_d_2k+t-1y̅_d_2k+t ^2 ]
⩽ 3𝔼[ Y_d_2k-1y̅_d_2k ^2 ] +16L^2𝔼[ X_d_2k-1x̅_d_2k ^2 ]
+96nd_2^2γ ^2L^3𝔼[ f( x̅_d_2k) -f( x^* ) ] +9nσ ^2.
By the y-update of FlexGT (<ref>), noticing that Y_d_2k+t=Y_d_2k+∇ G_d_2k+t-∇ G_d_2k, we have
𝔼[ Y_d_2k+t-1y̅_d_2k+t ^2 ]
⩽ 2𝔼[ Y_d_2k-1y̅_d_2k ^2 ] +4𝔼[ ∇ F_d_2k+t-∇ F_d_2k ^2 ]
+4𝔼[ ∇ G_d_2k+t-∇ F_d_2k+t+∇ F_d_2k-∇ G_d_2k ^2 ]
⩽ 2𝔼[ Y_d_2k-1y̅_d_2k ^2 ]
+4L^2=:S_1𝔼[ X_d_2k+t-1x̅_d_2k ^2 ] +8nσ ^2.
Using Lemma <ref> to further bound S_1,
and letting the stepsize γ⩽1/8d_2L, we complete the proof.
[Bounding consensus error within a period]
Suppose Assumptions <ref>, <ref> and <ref> hold. Let the stepsize satisfy γ⩽1/8d_2L. Then,
we have for k⩾ 0 and integer t ∈[ 1,d_2-1 ], d_2⩾2,
𝔼[ X_d_2k+t-1x̅_d_2k+t ^2 ]
⩽ 3𝔼[ X_d_2k-1x̅_d_2k ^2 ] +3d_2^2γ ^2𝔼[ Y_d_2k-1y̅_d_2k ^2 ]
+128nd_2^4γ ^4L^3𝔼[ f( x̅_d_2k) -f( x^* ) ] +9d_2^2γ ^2nσ ^2.
Using Young's inequality, we obtain
𝔼[ X_d_2k+t-1x̅_d_2k+t ^2 ]
⩽( 1+1/d_2-1) 𝔼[ X_d_2k+t-1-1x̅_d_2k+t-1 ^2 ]
+d_2γ ^2𝔼[ Y_d_2k+t-1-1y̅_d_2k+t-1 ^2 ]
( a )⩽( ( 1+1/d_2-1) ^t+16d_2^2γ ^2L^2 ) 𝔼[ X_d_2k-1x̅_d_2k ^2 ]
+3d_2^2γ ^2𝔼[ Y_d_2k-1y̅_d_2k ^2 ] +9d_2^2γ ^2nσ ^2
+96nd_2^4γ ^4L^3𝔼[ f( x̅_d_2k) -f( x^* ) ],
wherein inequality (a) we have used Lemma <ref>. Then, using Lemma <ref> and letting γ⩽1/8d_2L, we complete the proof.
§.§ Missing Proofs in the Main-text
§.§.§ Proof of Lemma <ref>
Using the x-update (<ref>), we have
𝔼[ x̅_d_2( k+1 )-x^* ^2 ]
=𝔼[ x̅_d_2k-x^* ^2 ]
+γ ^2=:S_2𝔼[ ∑_j=0^d_2-11^T/nY_d_2k+j ^2 ]
-2γ=:S_3𝔼[ < x̅_d_2k-x^*,∑_j=0^d_2-11^T/n∇ F_d_2k+j> ] .
Next, we bound the terms S_2 and S_3 in the above equation respectively.
For S_2, noticing that 1^TY_d_2k+j=1^T∇ G_d_2k+j,
we have
𝔼[ ∑_j=0^d_2-11^T/nY_d_2k+j ^2 ]
( a )⩽d_2σ ^2/n+𝔼[ ∑_j=0^d_2-11^T/n∇ F_d_2k+j ^2 ]
( b )⩽2d_2L^2/n=:S_4∑_j=0^d_2-1𝔼[ X_d_2k+j-1x̅_d_2k ^2 ]
+2d_2^2𝔼[ ∇ f( x̅_d_2k) ^2 ] +d_2σ ^2/n
⩽2d_2L^2/nS_4+4d_2^2L( f( x̅_d_2k) -f( x^* ) ) +d_2σ ^2/n,
wherein the inequality (a) we have used Assumption <ref>, and (b) used the smoothness of f_i.
For S_3, using the convexity and smoothness of f_i in Assumption <ref>, we have
𝔼[ < x̅_d_2k-x^*,∑_j=0^d_2-11^T/n∇ F_d_2k+j> ]
=∑_j=0^d_2-11/n∑_i=1^n𝔼[ < x_i,d_2k+j-x^*,∇ f_i( x_i,d_2k+j) > ]
-∑_j=0^d_2-11/n∑_i=1^n𝔼[ < x_i,d_2k+j-x̅_d_2k,∇ f_i( x_i,d_2k+j) > ]
⩾ d_2𝔼[ f( x̅_d_2k) -f( x^* ) ]
+μ/2n∑_j=0^d_2-1𝔼[ X_d_2k+j-1x^* ^2 ]-L/2nS_4.
Substituting S_2 and S_3 in (<ref>), and noticing that
-∑_j=0^d_2-1𝔼[ X_d_2k+j-1x^* ^2 ]
=-∑_j=0^d_2-1𝔼[ X_d_2k+j-1x̅_d_2k ^2 ]-n𝔼[ x̅_d_2k-x^* ^2 ]
-2∑_j=0^d_2-1∑_i=1^n( 𝔼[ < x_i,d_2k+j-x̅_d_2k,x̅_d_2k-x^* > ] )
⩽ S_4-nd_2/2𝔼[ x̅_d_2k-x^* ^2 ] ,
we get
𝔼[ x̅_d_2( k+1 )-x^* ^2 ]
⩽( 1-d_2μγ/2) 𝔼[ x̅_d_2k-x^* ^2 ]
+2γ L( 1+d_2γ L )/nS_4+d_2γ ^2σ ^2/n
-( 2d_2γ -4d_2^2γ ^2L ) 𝔼[ f( x̅_d_2k) -f( x^* ) ].
We note that it is only necessary to bound S_4 with Lemma <ref> if d_2⩾2. Then, letting the stepsize satisfy γ⩽1/8d_2L, we complete the proof.
§.§.§ Proof of Lemma <ref>
Using the x-update (<ref>), we have
𝔼[ X_d_2( k+1 )-1x̅_d_2( k+1 ) ^2 ]
⩽1+ρ _W^d_1/2𝔼[ X_d_2k-1x̅_d_2k ^2 ]
+γ ^2d_2( 1+ρ _W^d_1) ρ _W^d_1/1-ρ _W^d_1∑_t=0^d_2-1𝔼[ Y_d_2k+t-1y̅_d_2k+t ^2 ],
where we have used Young's inequality. By Lemma <ref> with d_2⩾2, letting the stepsize satisfy (<ref>), we complete the proof.
§.§.§ Proof of Lemma <ref>
Applying the recursion of FlexGT (<ref>), by Assumption <ref>, we have
𝔼[ Y_d_2( k+1 )-1y̅_d_2( k+1 ) ^2 ]
( a )⩽1+ρ _W^d_1/2𝔼[ Y_d_2k-1y̅_d_2k ^2 ] +2nρ _W^d_1σ ^2
+( 1+ρ _W^d_1) ρ _W^d_1/1-ρ _W^d_1𝔼[ ∇ F_d_2( k+1 )-∇ F_d_2k ^2 ]
+2ρ _W^d_1𝔼[ < Y_d_2k-1y̅_d_2k, ∇ F_d_2k-∇ G_d_2k> ]
+2ρ _W^d_1𝔼[ < ∇ F_d_2k-∇ G_d_2k, ∇ F_d_2( k+1 )> ]
( b )⩽1+ρ _W^d_1/2𝔼[ Y_d_2k-1y̅_d_2k ^2 ]
+6ρ _W^d_1L^2/1-ρ _W^d_1𝔼[ X_d_2k-1x̅_d_2k ^2 ]
+6ρ _W^d_1L^2/1-ρ _W^d_1𝔼[ X_d_2( k+1 )-1x̅_d_2k ^2 ] +5nρ _W^d_1σ ^2,
wherein the inequality (a) we have used Young's inequality and 𝔼[ ∇ G_d_2k] =∇ F_d_2k, and (b) we have used Assumption <ref> and the fact 𝔼[ < Y_d_2k-1y̅_d_2k,∇ F_d_2k-∇ G_d_2k> ] ⩽ nσ ^2.
Then, adapting Lemma <ref> to the case t=d_2⩾2,
and letting the stepsize satisfy (<ref>), we complete the proof.
ieeetr
|
http://arxiv.org/abs/2306.08517v1
|
20230614140337
|
Phenomenological study of $J/ψ\toΞ^0(Λπ^0) \barΞ^0(\barΛ γ)$ decays
|
[
"Peng-Cheng Hong",
"Rong-Gang Ping",
"Tao Luo",
"Xiao-Rong Zhou",
"He Li"
] |
hep-ph
|
[
"hep-ph"
] |
APS/123-QED
Institute of Modern Physics, Fudan University, Shanghai 200433, People's Republic of China
[][email protected]
Institute of High Energy Physics, Beijing 100049, People's Republic of China
Department of Modern Physics, University of Science and Technology of China,
No.96, Jinzhai Road, Hefei, China
Department of Modern Physics, University of Science and Technology of China,
No.96, Jinzhai Road, Hefei, China
[][email protected]
Institute of Modern Physics, Fudan University, Shanghai 200433, People's Republic of China
The measurement of decay parameters is one of the important goals of particle physics experiments, and it serves as a probe to search for evidence of CP violation in baryonic decays.
The experimental results will help advance existing theoretical research and establish new experimental objectives.
In this paper, we formulate the asymmetric parameters that characterize parity violation, and then derive formulas for the measurement of CP violation. The formulae for the joint angular distribution of the full decay chain as well as the polarization observables of Ξ ^ 0, Ξ̅ ^ 0, Λ and Λ̅ are also provided for experiments. Lastly, we evaluate the sensitivity of two asymmetric parameters, α _ Ξ ^ 0 →Λπ ^ 0 (abbreviated as α _ Ξ ^ 0) and α _ Ξ̅ ^ 0 →Λ̅γ (abbreviated as α_ Ξ̅^0), for future experimental measurements.
Phenomenological study of J/ψ→Ξ^0(Λπ^0) Ξ̅^0(Λ̅γ) decays
Tao Luo
========================================================
§ INTRODUCTION
The decay parameters are the key to connecting theoretical models with experimental studies. Two-body decays provide a clean environment to study the properties of baryons, such as polarization and decay parameters, so that theoretical models such as perturbative QCD can be tested. CP violation (CPV) has been observed in K^0, B^0 and D^0 meson decays <cit.>, and the experimental results are all consistent with the Standard Model predictions.
In baryonic decays, the magnitude of CPV is predicted to be only 10 ^ -4 ∼ 10 ^ -5 in the Standard Model (SM), but can reach 10 ^ -3 in some new physics models, such as those in Refs. <cit.>. However, this is still not large enough to explain the matter-antimatter asymmetry of the universe. Therefore, it is important to expand the sources of CPV, especially in the baryonic sector.
The BESIII experiment at BEPCII has accumulated about 10 billion J/ψ mesons, and large samples of hyperon-antihyperon pairs are produced from J/ψ decays.
An e^+e^- collision experiment has a natural advantage over pp collision or fixed-target experiments in achieving high measurement accuracy due to its lower background.
An important work related to our analysis has been published in Nature <cit.> with a considerable improvement in measurement accuracy.
Searching for evidence of CPV at the BESIII experiment remains promising and deserves deeper data analysis.
The Particle Data Group (PDG) provides an evaluation of α_Ξ^0 →Λπ^0=-0.349 ± 0.009 by dividing α(Ξ^0)α_-(Λ) by the current average of α_-(Λ) according to the measurements of recent years <cit.>. Furthermore, α_Ξ^0→Λγ is equal to -0.704 ±0.019_stat± 0.064_syst based on the latest result measured by the NA48/1 Collaboration with a data sample of 52000 events <cit.>, from which a 2.70% statistical uncertainty is reached in the radiative decay Ξ^0→Λγ.
Once a larger data sample is available and simultaneous measurements can be made on the conjugate decay channel, more precise asymmetric parameters can be obtained.
This serves as a probe to search for evidence of CPV in these decays and can help us better understand the mechanism of CPV in baryons.
In this paper, we formulate the observables of parity violation and CPV, as proposed in Ref. <cit.>, in Sec. <ref>.
Different from the traditional definition introduced by T. D. Lee and C. N. Yang <cit.> in terms of partial-wave amplitudes, we use the helicity formalism to present these asymmetric parameters.
This method is convenient for experimental physicists to estimate or predict these properties.
We formulate the joint spin density matrix (SDM) of the baryon pairs Ξ^0Ξ̅^0 and ΛΛ̅ in Sec. <ref>.
A sensitivity estimation of the asymmetric parameters of parity violation is performed in Sec. <ref>, which provides a reference for accurate measurements of these decay channels in future experiments with high statistics.
§ ASYMMETRIC PARAMETERS
In the two-body decays with parity conservation, the helicity amplitudes satisfy the following symmetry,
A^J_λ_1,λ_2=ηη_1η_2(-1)^J-s_1-s_2 A^J_-λ_1, -λ_2,
where J, s_1 and s_2 are the spins of the mother particle and the two daughter particles, respectively; λ, λ_1, and λ_2 are their helicity values and η, η_1 and η_2 are their intrinsic parity values, respectively. Assuming that the decays listed in Table <ref> are parity conserving, the corresponding helicity amplitudes, A, B, F, G and H, satisfy
A_-1/2,-1/2 = A_1/2, 1/2,
A_-1/2,1/2=A_1/2, -1/2,
B_1/2 = -B_-1/2,
F_1, 1/2=F_-1, -1/2,
H_1/2 = -H_-1/2,
G_1/2=-G_-1/2,
where we write the amplitude F as F_λ_5,λ_4 rather than F_λ_4,λ_5 to maintain consistency with its definition in Ref. <cit.>.
However, parity violation in weak decays renders the above equations invalid. Therefore, we define four asymmetric parameters to describe the parity violation,
α_Ξ^0→Λπ^0 = |B_1/2|^2-|B_-1/2|^2/|B_1/2|^2+|B_-1/2|^2,
α_Ξ̅^0→Λ̅γ = |F_-1,-1/2|^2-|F_1,1/2|^2/|F_-1,-1/2|^2+|F_1,1/2|^2,
α_Λ→ p π^- = |H_1/2|^2-|H_-1/2|^2/|H_1/2|^2+|H_-1/2|^2,
α_Λ̅→p̅π^+ = |G_1/2|^2-|G_-1/2|^2/|G_1/2|^2+|G_-1/2|^2.
The four parameters defined in this paper are numerically equivalent to those defined with partial-wave amplitudes and are consistent with the parameter values provided in the PDG convention.
Further, if CP is conserved, the parameters of the charge-conjugate decays have the same absolute values but opposite signs compared with the four corresponding parameters above, i.e. α_Ξ̅^0→Λ̅π^0=-α_Ξ^0→Λπ^0, α_Ξ^0→Λγ=-α_Ξ̅^0→Λ̅γ, α_Λ→ p π^-=-α_Λ̅→p̅π^+.
Thus, we can define three observables characterizing the degree of CPV as
A_CP^1 = α_Ξ̅^0→Λ̅π^0+α_Ξ^0→Λπ^0/α_Ξ̅^0→Λ̅π^0-α_Ξ^0→Λπ^0,
A_CP^2 = α_Ξ^0→Λγ+α_Ξ̅^0→Λ̅γ/α_Ξ^0→Λγ-α_Ξ̅^0→Λ̅γ,
A_CP^3 = α_Λ→ p π^-+α_Λ̅→p̅π^+/α_Λ→ p π^--α_Λ̅→p̅π^+
A non-zero value of the observables defined in Eqs. <ref> and <ref> indicates that there is CPV in the decay.
Experimentally, by measuring these conjugate decays separately, we can obtain the corresponding CP-violation information. We shorten α_Ξ^0→Λπ^0, α_Ξ̅^0→Λ̅γ, α_Λ→ p π^-, α_Λ̅→p̅π^+ to α_Ξ^0, α_Ξ̅^0, α_Λ, α_Λ̅ in the following narrative.
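As a simple numerical illustration of these observables (our own sketch, not an experimental procedure), A_CP and its statistical uncertainty can be evaluated from measured α values with first-order error propagation; the inputs below are the α_Λ and α_Λ̅ values quoted later in the sensitivity section:

import math

def a_cp(alpha, alpha_bar, err, err_bar):
    """A_CP = (alpha + alpha_bar)/(alpha - alpha_bar) with uncorrelated
    first-order error propagation; purely illustrative."""
    d = alpha - alpha_bar
    val = (alpha + alpha_bar) / d
    # partial derivatives of A_CP with respect to alpha and alpha_bar
    da, dab = -2.0 * alpha_bar / d**2, 2.0 * alpha / d**2
    return val, math.hypot(da * err, dab * err_bar)

# Example: A_CP^3 from the Lambda parameters used later in the sensitivity study
print(a_cp(0.732, -0.758, 0.014, math.hypot(0.010, 0.007)))   # approx. (-0.017, 0.013)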
When describing parity violation, the helicity amplitudes are more straightforward than the covariant amplitudes.
The helicity formalism is widely used in experimental measurements <cit.>.
Helicity amplitudes are also used to form the SDM of the particles in a decay, and the SDM contains all the dynamical information of the decay.
The angular distributions and the polarization are then easily derived from the SDM.
Experimentally, the values of these parameters can be determined by fitting the joint angular distribution to the data <cit.>. The helicity amplitudes can be expanded into L-S couplings of partial-wave amplitudes through Clebsch-Gordan coefficients. In view of the convenience of using the helicity amplitudes, we use them to analyze the cascade decay.
§ HELICITY SYSTEM
In this analysis, we use the helicity reference frame to describe the full decay chain. The properties of helicity amplitude can be found in Ref. <cit.>. The helicity angles of the various levels of decay are shown in Fig. <ref>, Fig. <ref> and Fig. <ref>. The corresponding amplitudes are listed in Table <ref>.
In this section, we specify that a momentum p with the superscript L represents the momentum in the laboratory system, and a momentum without the superscript represents the momentum after the boost to the rest frame of its mother particle.
In experiments, the momentum p⃗^L_Λ is reconstructed from p⃗^L_p and p⃗^L_π^- using the detector information.
Then we boost p⃗^L_p and p⃗^L_π^- to the Λ rest frame; θ_3 describes the angle between p⃗_p (in the Λ rest frame) and the z_4 axis.
The angle between the Λ production plane and its decay plane is defined as ϕ_3.
As for the other helicity angles (θ_i, ϕ_i), (i=0,1,2,4), they can be calculated by the same operations, as illustrated in Fig. <ref> and Fig. <ref>.
Notice that the z_5 axis points along the direction opposite to p⃗_Λ̅, and θ_2 describes the angle between p⃗_γ (in the Ξ̅^0 rest frame) and the z_3 axis.
Here, we list all the helicity angle expressions,
θ_0 = arccos(p⃗_Ξ^0·p⃗_e^+/|p⃗_Ξ^0| · |p⃗_e^+|), ϕ_0 = 0,
θ_1 = arccos(p⃗_Ξ^0·p⃗_Λ/|p⃗_Ξ^0| · |p⃗_Λ|),
ϕ_1 =arccos( |n⃗_J/ψ·n⃗_Ξ^0|),
θ_2 = arccos(p⃗_Ξ̅^0·p⃗_γ/|p⃗_Ξ̅^0| · |p⃗_γ|),
ϕ_2 = arccos( |n⃗_J/ψ·n⃗_Ξ̅^0|),
θ_3 = arccos(p⃗_Λ·p⃗_p/|p⃗_Λ| · |p⃗_p|),
ϕ_3 = arccos(|n⃗_Ξ^0·n⃗_Λ|),
θ_4 = arccos(p⃗_Λ̅·p⃗_p̅/|p⃗_Λ̅| · |p⃗_p̅|),
ϕ_4 = arccos(|n⃗_Ξ̅^0·n⃗_Λ̅|),
where the unit vectors n⃗_m in the rest frame of m decay plane are defined with the momenta of those particles as
n⃗_J/ψ = p⃗_e^+×p⃗_Ξ^0/|p⃗_e^+| · |p⃗_Ξ^0| ·sinθ_0,
n⃗_Ξ^0 = p⃗_Ξ^0×p⃗_Λ/|p⃗_Ξ^0| · |p⃗_Λ| ·sinθ_1,
n⃗_Ξ̅^̅0̅ = p⃗_Ξ̅^0×p⃗_γ/|p⃗_Ξ̅^0| · |p⃗_γ| ·sinθ_2,
n⃗_Λ = p⃗_Λ×p⃗_p/|p⃗_Λ| · |p⃗_p| ·sinθ_3,
n⃗_Λ̅ = p⃗_Λ̅×p⃗_p̅/|p⃗_Λ̅| · |p⃗_p̅| ·sinθ_4.
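In practice these angles are evaluated from reconstructed lab-frame four-momenta by successive Lorentz boosts. The helper below is a generic Python sketch of the boost-then-angle procedure (our own conventions, not taken from any analysis framework):

import numpy as np

def boost_to_rest(p4, frame):
    """Boost four-vector p4 = (E, px, py, pz) into the rest frame of 'frame'.
    Standard textbook boost; schematic helper only."""
    p4 = np.asarray(p4, dtype=float)
    frame = np.asarray(frame, dtype=float)
    beta = frame[1:] / frame[0]
    b2 = beta @ beta
    if b2 == 0.0:
        return p4.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = beta @ p4[1:]
    E_new = gamma * (p4[0] - bp)
    p_new = p4[1:] + beta * ((gamma - 1.0) * bp / b2 - gamma * p4[0])
    return np.concatenate(([E_new], p_new))

def cos_helicity_angle(pa, pb):
    """cos(theta) between two 3-momenta, as in the theta_i definitions above."""
    return (pa @ pb) / (np.linalg.norm(pa) * np.linalg.norm(pb))

# e.g. for theta_3: boost the lab-frame proton four-momentum to the Lambda rest
# frame and take the angle between its 3-momentum and the z_4 axis (Lambda direction).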
§ SPIN DENSITY MATRIX AND ANGULAR DISTRIBUTION
Since the SDM contains all the dynamical information of the decay, we first calculate the SDM of the baryons in each step of the decay, and then derive the angular distributions and the expressions for the baryon polarizations <cit.>.
§.§ J/ψ→Ξ^0 Ξ̅^0
For a spin-1/2 particle like Ξ^0, the SDM can be expressed as
ρ^Ξ^0=
(
[ ρ^Ξ^0_1/2, 1/2 ρ^Ξ^0_1/2, -1/2; ρ^Ξ^0_-1/2, 1/2 ρ^Ξ^0_-1/2, -1/2 ]).
The joint SDM of Ξ^0 Ξ̅^0 can be constructed in the form of ρ^Ξ^0⊗ρ^Ξ̅^0, and its elements can be directly calculated as
ρ^Ξ^0Ξ̅^0_λ_1,λ_2,λ_1',λ_2' ∝ ∑_λ,λ'ρ_λ,λ'^ψD^J*_λ,λ_1-λ_2(ϕ_0,θ_0,0)
× D^J_λ',λ_1'-λ_2'(ϕ_0,θ_0,0)A_λ_1,λ_2A^*_λ'_1,λ'_2,
where the SDM of J/ψ produced from e^+e^- annihilation can be described as ρ_λ,λ'^ψ=1/2diag{1,0,1} <cit.>, and D^J_λ_i,λ_k(ϕ_0,θ_0,0) is the Wigner-D function.
Since the J/ψ decaying into Ξ^0Ξ̅^0 via strong interactions conserves the parity, the helicity amplitudes satisfy the equations listed in Eq. (<ref>) i.e. A_-1/2,-1/2=A_1/2, 1/2, A_-1/2,1/2=A_1/2, -1/2.
The angular distribution of J/ψ→Ξ^0 Ξ̅^0 can be expressed as
I(θ_0) ∝ Tr[ρ^Ξ^0Ξ̅^0]=|A_1/2,1/2|^2 sin ^2θ _0
+ 1/4 |A_1/2,-1/2|^2 (cos 2 θ _0+3).
If we choose
α_ψ=|A_1/2,-1/2|^2-2|A_1/2,1/2|^2/|A_1/2,-1/2|^2+2|A_1/2,1/2|^2,
then the angular distribution can be reduced to the formula commonly used in experiments
I(θ_0) ∝ 1+α _ψcos ^2θ _0,
where α_ψ is the angular distribution parameter.
On the other hand, the joint SDM of Ξ^0 Ξ̅^0 can also be expressed by the real multipole parameters Q^1_i,j as
ρ^Ξ^0Ξ̅^0=Q^1_0,0/4[I+∑_i,j=0^3Q^1_i,jσ^Ξ^0_i⊗σ^Ξ̅^0_j],
where the superscript of Q^1_i,j is used to distinguish from the parameters Q^2_i,j used in Eq. (<ref>), I is a 4 × 4 identity matrix and σ is Pauli matrix <cit.>.
Here, σ_i and σ_j (i,j = 1, 2, 3) correspond to σ_x, σ_y, σ_z, while i or j = 0 denotes the 2× 2 identity matrix; the term with i=j=0 is excluded from the sum, i.e., i and j cannot both be 0 at the same time.
The Q^1_i,j can be calculated by Q^1_0,0=Trρ^Ξ^0Ξ̅^0 and Q^1_0,0Q^1_i,j=Tr[σ_i⊗σ_j·ρ^Ξ^0Ξ̅^0].
In this way, the multipole parameters Q^1_i, j can be expressed with the helicity amplitudes as listed in Eq. (<ref>).
For the decay J/ψ→Ξ^0 Ξ̅^0, Q^1_0, 0 stands for the unpolarized decay rate.
The degree of Ξ^0 linear polarization can be expressed as 𝒫_x^Ξ^0=Q^1_1,0,𝒫_y^Ξ^0=Q^1_2,0 and the longitudinal polarization 𝒫_z^Ξ^0=Q^1_3,0.
For the Ξ̅^0, they are 𝒫_x^Ξ̅^0=Q^1_0,1,𝒫_y^Ξ̅^0=Q^1_0,2 and 𝒫_z^Ξ̅^0=Q^1_0,3. Since the parity conserves in the J/ψ decay, we have the polarization expressions as
𝒫^Ξ^0_x = -𝒫^Ξ̅^0_x=0, 𝒫^Ξ^0_z = -𝒫^Ξ̅^0_z=0,
𝒫^Ξ^0_y = -𝒫^Ξ̅^0_y
= √(1-α_ψ^2)sin2θ_0sinΔ_a/2(1+α_ψcos^2θ_0),
where Δ _a = ξ_1/2,-1/2-ξ_1/2,1/2 is the phase difference of the two amplitudes A_1/2,-1/2 and A_1/2,1/2. Obviously, whether the transverse polarization exists or not depends on the phase angle difference Δ _a.
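The relations Q^1_0,0=Trρ and Q^1_0,0Q^1_i,j=Tr[σ_i⊗σ_j·ρ] translate directly into a short numerical helper that extracts the polarizations and spin correlations from any 4×4 joint SDM; the sketch below is purely illustrative:

import numpy as np

SIGMA = [np.eye(2),
         np.array([[0.0, 1.0], [1.0, 0.0]]),
         np.array([[0.0, -1.0j], [1.0j, 0.0]]),
         np.array([[1.0, 0.0], [0.0, -1.0]])]

def multipole_parameters(rho):
    """Q_{0,0} = Tr(rho) and Q_{0,0} Q_{i,j} = Tr[(sigma_i x sigma_j) rho]
    for the 4x4 joint spin-density matrix of two spin-1/2 baryons."""
    q00 = np.trace(rho).real
    Q = np.array([[np.trace(np.kron(SIGMA[i], SIGMA[j]) @ rho).real / q00
                   for j in range(4)] for i in range(4)])
    return q00, Q   # Q[i,0] and Q[0,j] give the polarizations; Q[i,j] the correlations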
§.§ Ξ^0(Ξ̅^0) →Λπ^0 (Λ̅γ)
In these two decays Ξ^0 →Λπ^0 and Ξ̅^0 →Λ̅γ, the parity violation can be revealed by the study of the angular distribution of the decaying particle or by the measurement of the polarization. The joint angular distribution I(θ_0,θ_1,ϕ_1,θ_2,ϕ_2) of this decay can be calculated by the joint SDM of Ξ^0 Ξ̅^0 as
I(θ_0, θ_1, ϕ_1, θ_2, ϕ_2)
∝ ∑_λ_i,λ'_iρ^Ξ^0Ξ̅^0_λ_1,λ_2;λ'_1,λ'_2
D^1/2*_λ_1,λ_3(θ_1,ϕ_1)
× D^1/2_λ'_1,λ_3(θ_1,ϕ_1)
D^1/2*_λ_2,λ_5-λ_4(θ_2,ϕ_2)
× D^1/2_λ'_2,λ_5-λ_4(θ_2,ϕ_2)
B_λ_3B^*_λ_3
×
F_λ_5,λ_4F^*_λ_5,λ_4
where the summation is taken over all involved helicities λ_i and λ'_i,(i=1,2,3,4,5).
We factor out the constant term and then simplify the angular distribution as
I(θ_0, θ_1, ϕ_1, θ_2, ϕ_2)
∝
1+α _ψcos ^2θ _0
+ √(1-α _ψ^2)sinθ _0 cosθ _0
× {. sinθ _2 sinϕ _2 α _Ξ̅^0sinΔ _a
+ α _Ξ^0 [α _Ξ̅^0cosΔ _a (sinθ _2 cosθ _1 cosϕ _2
- sinθ _1 cosθ _2 cosϕ _1)
+ sinθ _1 sinϕ _1 sinΔ _a].}
- α _Ξ^0α _Ξ̅^0 [-cosθ _1 cosθ _2 (α _ψ+cos ^2θ _0)
+ α _ψsinθ _1 sinθ _2 sin ^2θ _0 sinϕ _1 sinϕ _2
+ sinθ _1 sinθ _2 sin ^2θ _0 cosϕ _1 cosϕ _2],
where α_Ξ^0 and α_Ξ̅^0 measure parity violation.
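For cross-checks or toy Monte Carlo generation, the five-angle distribution above can be transcribed directly into code; the following sketch is our own transcription and should be validated against the expression before use:

import numpy as np

def joint_I5(th0, th1, ph1, th2, ph2, a_psi, a_xi, a_xibar, d_a):
    """Unnormalized I(theta_0, theta_1, phi_1, theta_2, phi_2) from the expression above."""
    s0, c0 = np.sin(th0), np.cos(th0)
    s1, c1 = np.sin(th1), np.cos(th1)
    s2, c2 = np.sin(th2), np.cos(th2)
    term = (s2 * np.sin(ph2) * a_xibar * np.sin(d_a)
            + a_xi * (a_xibar * np.cos(d_a) * (s2 * c1 * np.cos(ph2) - s1 * c2 * np.cos(ph1))
                      + s1 * np.sin(ph1) * np.sin(d_a)))
    corr = (-c1 * c2 * (a_psi + c0**2)
            + a_psi * s1 * s2 * s0**2 * np.sin(ph1) * np.sin(ph2)
            + s1 * s2 * s0**2 * np.cos(ph1) * np.cos(ph2))
    return (1.0 + a_psi * c0**2
            + np.sqrt(1.0 - a_psi**2) * s0 * c0 * term
            - a_xi * a_xibar * corr)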
In order to simplify the calculations for the next level of the decay chain, we adopt the Ξ decay matrices to describe the joint angular distribution I(θ_0, θ_1, ϕ_1, θ_2, ϕ_2) <cit.>, i.e.,
I(θ_0, θ_1, ϕ_1, θ_2, ϕ_2)∝Tr[ρ^Ξ^0Ξ̅^0·(M^Ξ^0⊗ M^Ξ̅^0)^T],
where M^Ξ^0(M^Ξ̅^0) is the decay matrix of Ξ^0(Ξ̅^0), and its elements can be expressed as
|M_λ,λ'|^2 = ∑_λ_1,λ_2 D^J*_λ,λ_1-λ_2 (α,β,γ)
D^J_λ',λ_1-λ_2 (α,β,γ)
× A_λ_1,λ_2 A^*_λ_1,λ_2,
where J and λ represent the spin and helicity of the mother particle, and λ_1, λ_2 are the helicities of the daughter particles, respectively. A_λ_1,λ_2 is the helicity amplitude and (α,β,γ) corresponds to the helicity angles in the decay.
For Ξ^0 and Ξ̅^0, they are (ϕ_1, θ_1, 0) and (ϕ_2, θ_2, 0), then we have
M^Ξ^0 = 1/2(
[ 1+α_Ξ^0cosθ_1 e^iϕ_1α_Ξ^0sinθ_1; e^-iϕ_1α_Ξ^0sinθ_1 1-α_Ξ^0cosθ_1 ]),
M^Ξ̅^0 = 1/2(
[ 1-α_Ξ̅^0cosθ_2 -e^iϕ_2α_Ξ̅^0sinθ_2; -e^-iϕ_2α_Ξ̅^0sinθ_2 1+α_Ξ̅^0cosθ_2 ]).
The joint angular distribution I(θ_0, θ_1, ϕ_1, θ_2, ϕ_2) in this form is expressed as
I(θ_0, θ_1, ϕ_1, θ_2, ϕ_2)
∝ Q^1_0,0{.1+Q^1_2,0α _Ξ^0 sinθ _1 sinϕ _1+α _Ξ̅^0
× [-α _Ξ^0 (Q^1_1,1sinθ _1 sinθ _2 cosϕ _1 cosϕ _2
+ Q^1_1,3sinθ _1 cosθ _2 cosϕ _1
+ sinθ _2 (Q^1_2,2sinθ _1 sinϕ _1 sinϕ _2
+ Q^1_3,1cosθ _1 cosϕ _2)+Q^1_3,3cosθ _1 cosθ _2)
- Q^1_0,2sinθ _2 sinϕ _2] .}.
If we substitute the Q^1_i, j with Eq. (<ref>), we can see that it is consistent with Eq. (<ref>).
Furthermore, I(θ_0, θ_1, ϕ_1, θ_2, ϕ_2) can be written as
I(θ_0∼ϕ_2)
∝
Q^1_0,0+T^1_1 α_Ξ^0+
T̅^1_1 α_Ξ̅^0
+
T^1_2 α_Ξ^0α_Ξ̅^0,
where the superscript of T^1_i is used to distinguish from the parameters T^2_i used in Eq. (<ref>).
T^1_1 and T̅^1_1 measure the transverse polarization information of Ξ^0 and Ξ̅^0, respectively.
T^1_2 measures the Ξ^0 Ξ̅^0 spin correlations.
They are
T^1_1 = sinθ _1 sinϕ _1Q^1_0,0Q^1_2,0,
T̅^1_1 = -sinθ_2 sinϕ_2 Q^1_0,0Q^1_0,2,
T^1_2 = -Q^1_0,0(Q^1_2,2sinθ _1 sinθ _2 sinϕ _1 sinϕ _2
+ Q^1_1,1sinθ _1 sinθ _2 cosϕ _1 cosϕ _2
+ Q^1_1,3sinθ _1 cosθ _2 cosϕ _1
+Q^1_3,1sinθ _2 cosθ _1 cosϕ _2
+ Q^1_3,3cosθ _1 cosθ _2).
§.§ Λ (Λ̅) → p π^- (p̅π^+)
The parameters to measure the parity violation in the weak decays Λ→ p π^- and Λ̅→p̅π^+ have been defined in Eq. (<ref>).
The elements of the joint SDM of ΛΛ̅ can be given by the joint SDM of Ξ^0 Ξ̅^0 as
ρ^_λ_3,λ_4,λ'_3,λ'_4 ∝ ∑_λ_1,λ_2,λ'_1,λ'_2,λ_5ρ^Ξ^0Ξ̅^0_λ_1,λ_2;λ'_1,λ'_2
D^1/2*_λ_1,λ_3(θ_1,ϕ_1)
×
D^1/2_λ'_1,λ'_3(θ_1,ϕ_1)B_λ_3B^*_λ'_3
D^1/2*_λ_2,λ_5-λ_4(θ_2,ϕ_2)
×
D^1/2_λ'_2,λ_5-λ'_4(θ_2,ϕ_2)
F_λ_5,λ_4F^*_λ_5,λ'_4,
and the specific expressions are shown in Eq. (<ref>).
Here we use the SDM of Ξ^0 Ξ̅^0 with the Q^1_i,j parameters.
Also it can be calculated by a direct product of ρ^Λ and ρ^Λ̅ as well.
Analogous to Eq. (<ref>), we get the ΛΛ̅ joint SDM ρ^ΛΛ̅ with multipole parameters Q^2_i, j as
ρ^ΛΛ̅=Q^2_0,0/4[I+∑_i,j=0^3Q^2_i,jσ^Λ_i⊗σ^Λ̅_j],
where Q^2_i, j stands for the polarizations and spin correlations of ΛΛ̅.
In the same way as for Ξ^0 Ξ̅^0, the polarizations of Λ and Λ̅ can be expressed as
𝒫^Λ_x = Q^2_1, 0, 𝒫^Λ_y =Q^2_2, 0, 𝒫^Λ_z =Q^2_3, 0 ,
𝒫^Λ̅_x = Q^2_0, 1, 𝒫^Λ̅_y=Q^2_0, 2, 𝒫^Λ̅_z=Q^2_0, 3,
The expressions of Q^2_i, j are listed in Eq. (<ref>).
Using Eq. (<ref>), we get the decay matrices of Λ and Λ̅ as
M^Λ = 1/2(
[ 1+α_Λcosθ_3 e^iϕ_3α_Λsinθ_3; e^-iϕ_3α_Λsinθ_3 1-α_Λcosθ_3 ]),
M^Λ̅ = 1/2(
[ 1+α_Λ̅cosθ_4 e^iϕ_4α_Λ̅sinθ_4; e^-iϕ_4α_Λ̅sinθ_4 1-α_Λ̅cosθ_4 ]).
Combined with Eq. (<ref>), the joint angular distribution I(θ_0,θ_1,ϕ_1,θ_2,ϕ_2,θ_3,ϕ_3,θ_4,ϕ_4) at this level can be expressed as
I(θ_0,θ_1,ϕ_1,θ_2,ϕ_2,θ_3,ϕ_3,θ_4,ϕ_4)
∝ Q^2_0,0{.1-cosθ _4 α _Λ̅ [-α _Λ (sinθ _3 (Q^2_2,3sinϕ _3
+ Q^2_1,3cosϕ _3) +Q^2_3,3cosθ _3)-Q^2_0,3]
+ α _Λ [sinθ _3 (Q^2_2,0sinϕ _3 +Q^2_1,0cosϕ _3)
+ Q^2_3,0cosθ _3].}.
Analogue to Eq. (<ref>), it can be simplified as
I(θ_0,θ_1,ϕ_1,θ_2,ϕ_2,θ_3,ϕ_3,θ_4,ϕ_4)
∝
Q^2_0,0
+T^2_1 α_Λ
+
T̅^2_1 α_Λ̅+
T^2_2 α_Λα_Λ̅,
with
T^2_1 = Q^2_0,0sinθ _3 (Q^2_2,0sinϕ _3+Q^2_1,0cosϕ _3)
- Q^2_3,0cosθ _3,
T̅^2_1 = Q^2_0,0Q^2_0,3cosθ_4 ,
T^2_2 = Q^2_0,0cosθ_4 [sinθ _3 (Q^2_2,3sinϕ _3+Q^2_1,3cosϕ _3)
+ Q^2_3,3cosθ _3].
where T^2_1 and T̅^2_1 represent the transverse polarization information for Λ and Λ̅, respectively, while T^2_2 represents the ΛΛ̅ spin correlations, similar to the interpretation of Eq. (<ref>).
§ SENSITIVITY OF ASYMMETRIC PARAMETERS MEASUREMENTS
Sensitivity estimation is the basis of physical experiment design, which reveals the relationship between the measurement accuracy of physical quantities and data statistics. We use the entire decay chain to improve the accuracy of the statistical sensitivity estimate. The results of our calculations show the expected measurement accuracy of these asymmetric parameters in the experiment versus the statistics of the data. The method we use is also applicable to other similar decay processes. For the large-scale experimental devices to be built in the future, such as STCF and CEPC <cit.>, the estimation of sensitivity is urgently needed to guide the data acquisition plan.
In the estimation of sensitivities, we give the normalized angular distribution as
𝒲=𝒲(θ_0,θ_1,ϕ_1,θ_2,ϕ_2,θ_3,ϕ_3,θ_4,ϕ_4)/∫···∫𝒲(···) ∏_i=0^4dcosθ_i ∏_j=1^4dϕ_j ,
where the asymmetric parameters are taken as α_ψ=0.66 ± 0.03 ± 0.05, α_Λ=0.732 ± 0.014 and α_Λ̅=-0.758 ± 0.010 ± 0.007 according to Refs. <cit.>, α_Ξ̅^0=0.70 ± 0.07 under the hypothesis of CP conservation, and α_Ξ^0=-0.349 ± 0.009 as mentioned in Sec. <ref>.
The phase angle differences are arbitrarily taken as Δ_a=π/3, Δ_b=π/4, Δ_f=π/6.
We also use other sets of phase angle differences for the calculation, and the results show that the sensitivity estimation of the asymmetric parameters for large data samples is not significantly affected.
Here, the maximum likelihood function is defined as
L=∏_i=1^N𝒲(θ_0,θ_1,ϕ_1,θ_2,ϕ_2,θ_3,ϕ_3,θ_4,ϕ_4),
where N represents the number of observed events <cit.>.
And the variance of the asymmetric parameters, for example, α_Ξ^0, can be expressed as
V^-1(α_Ξ^0)=N∫1/𝒲[∂𝒲/∂α_Ξ^0]^2 ∏_i=0^4dcosθ_i ∏_j=1^4dϕ_j.
Thus we give the statistical sensitivity of α_Ξ^0 and α_Ξ̅^0 as
δ_1=√(V(α_Ξ^0))/|α_Ξ^0|, δ_2=√(V(α_Ξ̅^0))/|α_Ξ̅^0|.
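The nine-dimensional angular integral in the variance formula has no simple closed form, so it can be estimated by plain Monte Carlo; the following sketch (illustrative only, assuming a user-supplied normalized 𝒲 and its derivative ∂𝒲/∂α) returns the relative sensitivity for a given number of events N:

import numpy as np

def relative_sensitivity(W, dW_dalpha, alpha, n_events, n_mc=200000, seed=1):
    """delta = sqrt(V(alpha))/|alpha|, with V^{-1} = N * integral of (1/W)(dW/dalpha)^2,
    the angular integral estimated by uniform Monte Carlo over the 9 helicity angles."""
    rng = np.random.default_rng(seed)
    cos_t = rng.uniform(-1.0, 1.0, size=(n_mc, 5))       # cos(theta_0) ... cos(theta_4)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_mc, 4))  # phi_1 ... phi_4
    w = W(cos_t, phi)                                    # normalized distribution values
    dw = dW_dalpha(cos_t, phi)                           # derivative with respect to alpha
    volume = 2.0 ** 5 * (2.0 * np.pi) ** 4               # integration volume
    fisher = n_events * volume * np.mean(dw ** 2 / w)    # estimate of V^{-1}(alpha)
    return 1.0 / (np.sqrt(fisher) * abs(alpha))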
We take a set of possible α_Ξ^0 and α_Ξ̅^0 values to plot the sensitivity as shown in Fig. <ref> and Fig. <ref>, respectively.
From the figures, we can draw the following conclusions.
First, the larger the absolute value of the asymmetric parameter, the less data is needed to reach the same statistical sensitivity.
This shows that, for the same data statistics, a larger asymmetric parameter value means a higher measurement accuracy.
Second, from Fig. <ref> we can see that our prediction for the asymmetric parameter α_Ξ̅^0 is consistent with the latest measurement result mentioned in Sec. <ref>.
This tests the reliability of our estimation.
Lastly, in order to achieve a statistical sensitivity of 1%, the data sample must contain more than 100,000 events. Considering the influence of the background level, detector efficiency, event reconstruction efficiency and other factors in each experiment, the actually required data sample size will differ and should be larger than our prediction.
§ SUMMARY AND OUTLOOK
By studying the cascade decays J/ψ→Ξ^0 Ξ̅^0, Ξ^0→Λπ^0, Ξ̅^0→Λ̅γ, the formulae for the angular distributions and the polarization observables are derived, which can be used to measure the Ξ decay asymmetric parameters in future experiments. In particular, we estimate the statistical sensitivity of these parameters by considering the whole decay chain.
According to the estimation results, even a large asymmetric parameter value requires more than 100,000 data events to reach a measurement accuracy of 1%.
§ ACKNOWLEDGEMENTS
The work is partly supported by the National Natural Science Foundation of China under Grants No. 12175244, No. 11875262, No. 11805037 and No. U1832121. National Key Research and Development Program of China under Contracts No. 2020YFA0406301.
§ PARAMETERS
The helicity amplitudes can be written in the form of a complex number as
A^J_λ_1,λ_2=a^J_λ_1,λ_2e^iξ_λ_1,λ_2,
1. Expressions of real multipole parameters Q^1_i,j
Q^1_0,0 = a_1/2,1/2^2 sin ^2θ _0+1/4 a_1/2,-1/2^2 (cos 2 θ _0+3),
Q^1_0,0Q^1_0,2 = -a_1/2,-1/2 a_1/2,1/2sin 2 θ _0sinΔ _a/√(2),
Q^1_0,0Q^1_1,1 = 1/2 (2 a_1/2,1/2^2+a_1/2,-1/2^2)sin ^2θ _0,
Q^1_0,0Q^1_1,3 = a_1/2,-1/2 a_1/2,1/2sin 2 θ _0cosΔ _a/√(2),
Q^1_0,0Q^1_2,0 = a_1/2,-1/2 a_1/2,1/2sin 2 θ _0sinΔ _a/√(2),
Q^1_0,0Q^1_2,2 = 1/2 (a_1/2,-1/2^2-2 a_1/2,1/2^2)sin ^2θ _0,
Q^1_0,0Q^1_3,1 = -a_1/2,-1/2 a_1/2,1/2sin 2 θ _0cosΔ _a/√(2),
Q^1_0,0Q^1_3,3 = a_1/2,1/2^2 sin ^2θ _0-1/4 a_1/2,-1/2^2 (cos 2 θ _0+3),
and others are equal to zero.
2. The elements of ρ^ΛΛ̅
ρ^ΛΛ̅_1/2, 1/2, 1/2,1/2 = Q^1_0,0 (1+α _Ξ^0 ) (1-α _Ξ̅^0)
× {.Q^1_0,2sinθ _2 sinϕ _2+Q^1_2,0sinθ _1 sinϕ _1
+ Q^1_2,2sinθ _1 sinθ _2 sinϕ _1 sinϕ _2
+ Q^1_1,1sinθ _1 sinθ _2 cosϕ _1 cosϕ _2
+ Q^1_1,3sinθ _1 cosθ _2 cosϕ _1
+ Q^1_3,1sinθ _2 cosθ _1 cosϕ _2
+ Q^1_3,3cosθ _1 cosθ _2+1.},
ρ^ΛΛ̅_1/2, 1/2, -1/2,1/2 = Q^1_0,0√(1-α _Ξ^0 ^2) e^-i Δ _b (1-α _Ξ̅^0)
× {.Q^1_1,1sinθ _2 cosϕ _2 (cosθ _1 cosϕ _1
+ i sinϕ _1)+Q^1_1,3cosθ _2 (cosθ _1 cosϕ _1
+ i sinϕ _1)+Q^1_2,0cosθ _1 sinϕ _1
- i Q^1_2,2sinθ _2 sinϕ _2 cosϕ _1
+ Q^1_2,2sinθ _2 cosθ _1 sinϕ _1 sinϕ _2
- Q^1_3,1sinθ _1 sinθ _2 cosϕ _2
- Q^1_3,3sinθ _1 cosθ _2-i Q^1_2,0cosϕ _1.},
ρ^ΛΛ̅_1/2, -1/2, 1/2,-1/2 = -Q^1_0,0 (1+α _Ξ^0 ) (1+α _Ξ̅^0)
× {.-1+Q^1_0,2sinθ _2 sinϕ _2
- Q^1_2,0sinθ _1 sinϕ _1
+ Q^1_2,2sinθ _1 sinθ _2 sinϕ _1 sinϕ _2
+ Q^1_1,1sinθ _1 sinθ _2 cosϕ _1 cosϕ _2
+ Q^1_1,3sinθ _1 cosθ _2 cosϕ _1
+ Q^1_3,1sinθ _2 cosθ _1 cosϕ _2
+ Q^1_3,3cosθ _1 cosθ _2.},
ρ^ΛΛ̅_1/2, -1/2, -1/2,-1/2 = -Q^1_0,0√(1-α _Ξ^0 ^2) e^-i Δ _b (α _Ξ̅^0+1)
× {.Q^1_1,1sinθ _2 cosϕ _2 (cosθ _1 cosϕ _1
+ i sinϕ _1)+Q^1_1,3cosθ _2 (cosθ _1 cosϕ _1
+ i sinϕ _1)-Q^1_2,0cosθ _1 sinϕ _1
- i Q^1_2,2sinθ _2 sinϕ _2 cosϕ _1
+ Q^1_2,2sinθ _2 cosθ _1 sinϕ _1 sinϕ _2
- Q^1_3,1sinθ _1 sinθ _2 cosϕ _2
- Q^1_3,3sinθ _1 cosθ _2+i Q^1_2,0cosϕ _1.},
ρ^ΛΛ̅_-1/2, 1/2, 1/2,1/2 = Q^1_0,0√(1-α _Ξ^0^2) e^i Δ _b (1-α _Ξ̅^0)
× {.Q^1_1,1sinθ _2 cosϕ _2 (cosθ _1 cosϕ _1
- i sinϕ _1)+Q^1_1,3cosθ _2 (cosθ _1 cosϕ _1
- i sinϕ _1)+Q^1_2,0cosθ _1 sinϕ _1
+ i Q^1_2,2sinθ _2 sinϕ _2 cosϕ _1
+ Q^1_2,2sinθ _2 cosθ _1 sinϕ _1 sinϕ _2
- Q^1_3,1sinθ _1 sinθ _2 cosϕ _2
- Q^1_3,3sinθ _1 cosθ _2+i Q^1_2,0cosϕ _1.},
ρ^ΛΛ̅_-1/2, 1/2, -1/2,1/2 = Q^1_0,0 (1-α _Ξ^0 ) (1-α _Ξ̅^0)
× {.1+Q^1_0,2sinθ _2 sinϕ _2
- Q^1_2,0sinθ _1 sinϕ _1
- Q^1_2,2sinθ _1 sinθ _2 sinϕ _1 sinϕ _2
- Q^1_1,1sinθ _1 sinθ _2 cosϕ _1 cosϕ _2
- Q^1_1,3sinθ _1 cosθ _2 cosϕ _1
- Q^1_3,1sinθ _2 cosθ _1 cosϕ _2
- Q^1_3,3cosθ _1 cosθ _2.},
ρ^ΛΛ̅_-1/2, -1/2, 1/2,-1/2 = -Q^1_0,0√(1-α _Ξ^0 ^2) e^i Δ _b (α _Ξ̅^0+1)
× {.Q^1_1,1sinθ _2 cosϕ _2 (cosθ _1 cosϕ _1
- i sinϕ _1)+Q^1_1,3cosθ _2 (cosθ _1 cosϕ _1
- i sinϕ _1)-Q^1_2,0cosθ _1 sinϕ _1
+ i Q^1_2,2sinθ _2 sinϕ _2 cosϕ _1
+ Q^1_2,2sinθ _2 cosθ _1 sinϕ _1 sinϕ _2
- Q^1_3,1sinθ _1 sinθ _2 cosϕ _2
- Q^1_3,3sinθ _1 cosθ _2-i Q^1_2,0cosϕ _1.},
ρ^ΛΛ̅_-1/2, -1/2, -1/2,-1/2 = Q^1_0,0 (1-α _Ξ^0 )(1+α _Ξ̅^0)
× {.1-Q^1_0,2sinθ _2 sinϕ _2
- Q^1_2,0sinθ _1 sinϕ _1
+ Q^1_2,2sinθ _1 sinθ _2 sinϕ _1 sinϕ _2
+ Q^1_1,1sinθ _1 sinθ _2 cosϕ _1 cosϕ _2
+ Q^1_1,3sinθ _1 cosθ _2 cosϕ _1
+ Q^1_3,1sinθ _2 cosθ _1 cosϕ _2
+ Q^1_3,3cosθ _1 cosθ _2.},
and the unlisted are equal to zero.
3. Expressions of real multipole parameters Q^2_i,j
Q^2_0,0 = 1/4 Q^1_0,0{.1+Q^1_2,0α _Ξ^0 sinθ _1 sinϕ _1
+ α _Ξ̅^0 [-α _Ξ^0 (Q^1_1,1sinθ _1 sinθ _2 cosϕ _1 cosϕ _2
+ Q^1_1,3sinθ _1 cosθ _2 cosϕ _1
+ sinθ _2 (Q^1_2,2sinθ _1 sinϕ _1 sinϕ _2
+ Q^1_3,1cosθ _1 cosϕ _2)+Q^1_3,3cosθ _1 cosθ _2)
- Q^1_0,2sinθ _2 sinϕ _2].},
Q^2_0,0Q^2_0,3 = 1/4 Q_0,0{.-α _Ξ̅^0 (Q_2,0α _Ξ ^0sinθ _1 sinϕ _1+1)
+ α _Ξ^0 [Q_2,2sinθ _1 sinθ _2 sinϕ _1 sinϕ _2
+ Q_1,1sinθ _1 sinθ _2 cosϕ _1 cosϕ _2
+ Q_1,3sinθ _1 cosθ _2 cosϕ _1
+ Q_3,1sinθ _2 cosθ _1 cosϕ _2+Q_3,3cosθ _1 cosθ _2]
+ Q_0,2sinθ _2 sinϕ _2.},
Q^2_0,0Q^2_1,0 = -1/4 Q_0,0√(1-α _Ξ^0 ^2){.α _Ξ̅^0 [Q_1,1sinθ _2 cosϕ _2
× (cosθ _1 cosϕ _1 cosΔ _b+sinϕ _1 sinΔ _b)
+ Q_1,3cosθ _2 (cosθ _1 cosϕ _1 cosΔ _b
+ sinϕ _1 sinΔ _b)+sinθ _2 (Q_2,2sinϕ _2
× (cosθ _1 sinϕ _1 cosΔ _b-cosϕ _1 sinΔ _b)
- Q_3,1sinθ _1 cosϕ _2 cosΔ _b)
- Q_3,3sinθ _1 cosθ _2 cosΔ _b]
+ Q_2,0 (cosϕ _1 sinΔ _b-cosθ _1 sinϕ _1 cosΔ _b).},
Q^2_0,0Q^2_1,3 = 1/4 Q^1_0,0√(1-α _Ξ^0 ^2) [-Q^1_2,0cosθ _1 sinϕ _1 α _Ξ̅^0
× cosΔ _b+Q^1_1,1sinθ _2 cosϕ _2
× (cosθ _1 cosϕ _1 cosΔ _b+sinϕ _1 sinΔ _b)
+ Q^1_1,3cosθ _2 (cosθ _1 cosϕ _1 cosΔ _b
+ sinϕ _1 sinΔ _b)
- Q^1_2,2sinθ _2 sinϕ _2 cosϕ _1 sinΔ _b
+ Q^1_2,2sinθ _2 cosθ _1 sinϕ _1 sinϕ _2 cosΔ _b
- Q^1_3,1sinθ _1 sinθ _2 cosϕ _2 cosΔ _b
+ Q^1_2,0cosϕ _1 α _Ξ̅^0sinΔ _b
- Q^1_3,3sinθ _1 cosθ _2 cosΔ _b],
Q^2_0,0Q^2_2,0 = 1/4 Q^1_0,0√(1-α _Ξ ^0^2){.α _Ξ̅^0 [Q^1_1,1sinθ _2 cosϕ _2
× (sinϕ _1 cosΔ _b-cosθ _1 cosϕ _1 sinΔ _b)
+ Q^1_1,3cosθ _2 (sinϕ _1 cosΔ _b
- cosθ _1 cosϕ _1 sinΔ _b)+sinθ _2 (-Q^1_2,2sinϕ _2
× (cosθ _1 sinϕ _1 sinΔ _b+cosϕ _1 cosΔ _b)
+ Q^1_3,1sinθ _1 cosϕ _2 sinΔ _b)
+ Q^1_3,3sinθ _1 cosθ _2 sinΔ _b]
+ Q^1_2,0 (cosθ _1 sinϕ _1 sinΔ _b+cosϕ _1 cosΔ _b).},
Q^2_0,0Q^2_2,3 = 1/4 Q^1_0,0√(1-α _Ξ ^0^2) [-Q^1_2,0cosθ _1 sinϕ _1 α _Ξ̅^0
× sinΔ _b-Q^1_2,0cosϕ _1 α _Ξ̅^0cosΔ _b
+ Q^1_1,1sinθ _2 cosϕ _2 (cosθ _1 cosϕ _1 sinΔ _b
- sinϕ _1 cosΔ _b)+Q^1_1,3cosθ _2 (cosθ _1 cosϕ _1
× sinΔ _b-sinϕ _1 cosΔ _b)+Q^1_2,2sinθ _2
× sinϕ _2 cosϕ _1 cosΔ _b+Q^1_2,2sinθ _2 cosθ _1
× sinϕ _1 sinϕ _2 sinΔ _b-Q^1_3,1sinθ _1 sinθ _2
× cosϕ _2 sinΔ _b-Q^1_3,3sinθ _1 cosθ _2 sinΔ _b],
Q^2_0,0Q^2_3,0 = 1/4 Q^1_0,0{.α _Ξ ^0 (1-Q^1_0,2sinθ _2 sinϕ _2 α _Ξ̅^0)
- α _Ξ̅^0 [Q^1_1,1sinθ _1 sinθ _2 cosϕ _1 cosϕ _2
+ Q^1_1,3sinθ _1 cosθ _2 cosϕ _1+sinθ _2 (Q^1_2,2sinθ _1
× sinϕ _1 sinϕ _2+Q^1_3,1cosθ _1 cosϕ _2)
+ Q^1_3,3cosθ _1 cosθ _2]+Q^1_2,0sinθ _1 sinϕ _1.},
Q^2_0,0Q^2_3,3 = 1/4 Q^1_0,0 [-α _Ξ^0 (α _Ξ̅^0-Q^1_0,2sinθ _2 sinϕ _2)
- Q^1_2,0sinθ _1 sinϕ _1 α _Ξ̅^0+Q^1_2,2sinθ _1 sinθ _2
× sinϕ _1 sinϕ _2+Q^1_1,1sinθ _1 sinθ _2 cosϕ _1 cosϕ _2
+ Q^1_1,3sinθ _1 cosθ _2 cosϕ _1+Q^1_3,1sinθ _2 cosθ _1
× cosϕ _2+Q^1_3,3cosθ _1 cosθ _2],
and the others are equal to zero.
**
pQCD1
J. Bolz and P. Kroll,
https://doi.org/10.1007/s100520050160Eur. Phys. J. C 2 (1998), 545-556(1998)
CPVinK
J. H. Christenson, J. W. Cronin, V. L. Fitch and R. Turlay,
10.1103/PhysRevLett.13.138Phys. Rev. Lett. 13, 138-140(1964)
CPVinB1
B. Aubert et al. [BaBar Collaboration],
10.1103/PhysRevLett.87.091801Phys. Rev. Lett. 87, 091801 (2001)
CPVinB2
K. Abe et al. [Belle Collaboration],
10.1103/PhysRevLett.87.091802Phys. Rev. Lett. 87, 091802(2001)
CPVinD
R. Aaij et al. [LHCb Collaboration],
10.1103/PhysRevLett.122.211803Phys. Rev. Lett. 122, 211803(2019)
CP in Baryon 01
N. G. Deshpande, X. G. He and S. Pakvasa,
https://doi.org/10.1016/0370-2693(94)91327-7Phys. Lett. B 326, 307-311(1994)
CP in Baryon 02
J. Tandean and G. Valencia,
https://doi.org/10.1103/PhysRevD.67.056001Phys. Rev. D 67, 056001(2003)
CP in Baryon 03
J. F. Donoghue, X. G. He and S. Pakvasa,
https://doi.org/10.1103/PhysRevD.34.833Phys. Rev. D 34, 833(1986)
CP in Baryon 04
J. F. Donoghue and S. Pakvasa,
https://doi.org/10.1103/PhysRevLett.55.162Phys. Rev. Lett. 55, 162(1985)
CP in Baryon 05
D. Chang, X. G. He and S. Pakvasa,
10.1103/PhysRevLett.74.3927Phys. Rev. Lett. 74, 3927-3930(1995)
CP in Baryon 06
X. G. He, H. Murayama, S. Pakvasa and G. Valencia,
10.1103/PhysRevD.61.071701Phys. Rev. D 61, 071701(2000)
CP in Baryon 07
C. H. Chen,
https://doi.org/10.1016/S0370-2693(01)01236-9Phys. Lett. B 521, 315-319(2001)
CP in Baryon 08
J. Tandean,
10.1103/PhysRevD.69.076008Phys. Rev. D 69, 076008(2004)
nature
BESIII Collaboration • Medina Ablikim (Beijing, Inst. High Energy Phys.) et al.
https://doi.org/10.1038/s41586-022-04624-1Nature 606 7912, 64-69(2022)
alpha_Xi
R. L. Workman et al. [Particle Data Group],
https://pdglive.lbl.gov/Particle.action?init=0 node=S023 home=BXXX030PTEP 2022 no.8, 083C01
alpha_Xibar
J. R. Batley, G. E. Kalmus, C. Lazzeroni, D. J. Munday, M. Patel, M. W. Slater, S. A. Wotton, R. Arcidiacono, G. Bocquet and A. Ceccucci, et al.
https://doi.org/10.1016/j.physletb.2010.08.046Phys. Lett. B 693, 241-248(2010)
LY
T. D. Lee and C. N. Yang,
10.1103/PhysRev.108.1645Phys. Rev. 108 , 1645-1647(1957)
HelicityMethod
Hong Chen, Rong-Gang Ping
https://doi.org/10.1103/PhysRevD.76.036005Phys.Rev.D 76, 036005(2007)
BESIII:2022udq
M. Ablikim et al. [BESIII Collaboration],
https://doi.org/10.1007/JHEP12(2022)033JHEP 12, 033(2022)
BESIII:2022yzp
M. Ablikim et al. [BESIII Collaboration],
https://doi.org/10.1007/JHEP01(2023)111JHEP 01 , 111(2023)
Belle:2017egg
K. Chilikin et al. [Belle Collaboration],
10.1103/PhysRevD.95.112003Phys. Rev. D 95 , 112003(2017)
Belle:2014nuw
K. Chilikin et al. [Belle Collaboration],
10.1103/PhysRevD.90.112009Phys. Rev. D 90 no.11, 112009(2014)
LHCb:2021sqa
R. Aaij et al. [LHCb Collaboration],
https://doi.org/10.1007/JHEP06(2021)177JHEP 06, 177(2021)
extract pars
H. Chen and R. G. Ping,
https://doi.org/10.1103/PhysRevD.99.114027Phys. Rev. D 99, 114027(2019)
spinfm
S. U. Chung, (1971),
https://doi.org/10.5170/CERN-1971-00810.5170/CERN-1971-008
angleDef
J. R. Batley, G. E. Kalmus, C. Lazzeroni, D. J. Munday, M. Patel, M. W. Slater, S. A. Wotton, R. Arcidiacono, G. Bocquet and A. Ceccucci, et al.
10.1016/j.physletb.2010.08.046Phys. Lett. B 693, 241-248(2010)
Spin
E. Leader,
http://www.cambridge.org/mw/academic/subjects/physics/theoretical-physics-and-mathematical-physics/spin-particle-physics?format=ARCamb. Monogr. Part. Phys. Nucl. Phys. Cosmol. 15 , pp.1-500 (2011)
PolaInChicj
H. Chen and R. G. Ping,
https://doi.org/10.1103/PhysRevD.102.016021Phys. Rev. D 102, 016021(2020)
sensitivity calculation
T. Z. Han, R. G. Ping, T. Luo and G. Z. Xu,
https://doi.org/10.1088/1674-1137/44/1/013002Chin. Phys. C 44, 013002(2020)
STCF1
M. Achasov, X. C. Ai, R. Aliberti, Q. An, X. Z. Bai, Y. Bai, O. Bakina, A. Barnyakov, V. Blinov and V. Bobrovnikov, et al.
https://arxiv.org/pdf/2303.15790arXiv:2303.15790 [hep-ex]
STCF2
J. Q. Lan, Q. Luo, C. Zhang, W. W. Gao, Y. Xu and Y. Bai,
https://doi.org/10.1088/1748-0221/16/07/T07001JINST 16 no.07, T07001(2021)
CEPC
Jie Gao
https://inspirehep.net/files/e37460b0fbf041f984d203749ba50a86JACoW eeFACT2022 262-269(2023)
alpha_psi
M. Ablikim et al. [BESIII Collaboration],
https://doi.org/10.1016/j.physletb.2017.04.048Phys. Lett. B 770 (2017), 217-225
alpha_lambda
P. A. Zyla et al. [Particle Data Group],
https://doi.org/10.1093/ptep/ptaa10410.1093/ptep/ptaa104
PDG
P. A. Zyla et al. [Particle Data Group],
https://pdglive.lbl.gov/Particle.action?init=0 node=S018 home=BXXX020PTEP 2020 (2020) no.8, 083C01
|
http://arxiv.org/abs/2306.10678v1
|
20230619025740
|
Amplitude of $H \to γZ$ process via one $W$ loop in unitary gauge (I. Details of calculation with Dyson scheme)
|
[
"Shi-Yuan Li"
] |
hep-ph
|
[
"hep-ph"
] |
Institute of Theoretical Physics, Shandong University, Jinan 250100, P. R. China
The decay amplitude of the H →γ Z process
via one W loop in the unitary gauge is presented.
The divergent integrals, including those of high divergence order typical of the unitary gauge,
are arranged to cancel so as to obtain the electromagnetic U(1) gauge invariant finite result, hence there is no contribution to the renormalization constant of the Zγ-mixing from this 1-loop subprocess.
For the calculation of the Feynman diagrams employing the Feynman rules, all the integrations over the propagator momenta and all the δ-functions representing the 4-momentum conservation at every vertex
are retained from the beginning. Therefore, the ambiguity of setting independent loop momenta for divergences worse than logarithmic does not exist, and shifts of the integration variables in such divergent integrals are eschewed. The calculations are done in 4-dimensional Minkowski momentum space without the aid of any regularization. The correct treatment of the surface terms for the quadratic and logarithmic tensor integrals is one of the key points.
This part I is devoted to the calculation details and the indications from the key surface terms. The comparison with other gauge(s) and the complete results for H →γ Z are left for part II.
Amplitude of H →γ Z process via one W loop in unitary gauge
(I. Details of calculation with Dyson scheme)
Shi-Yuan Li
July 31, 2023
============================================================================================================
§ INTRODUCTION
The Glashow-Weinberg-Salam
electroweak (EW) theory is an SU(2)×U(1) Yang-Mills gauge field theory, with the gauge symmetry 'broken' by a scalar field via the Englert-Brout-Higgs-Guralnik-Hagen-Kibble mechanism; the scalar field, coupling to the fermion fields in Yukawa style, provides the mass terms which distinguish the various SU(2)-doublet fermions. This theory has been confirmed
by experiment in the sense that the massive W^± and Z particles versus the massless photon, and a 'remaining' neutral scalar particle which is generally referred to
as the Higgs or the 'God' particle, are all well measured.
In general, a realistic calculation of the S-matrix or of a scattering/decay amplitude employing the quantized field theory
of the standard model needs to fix a specific gauge, and it is accepted that the physical result should be independent of the choice (artificial rather than dictated by nature) of the gauge. However, recently,
a careful revisit of the H →γγ decay width in the unitary gauge and the R_ξ gauge <cit.> seems to imply a paradox.
For a review and remarks on the uncertainties in this paradox, see e.g., <cit.>.
In any case,
this paradox calls for loop-diagram calculations in the unitary gauge to be studied extensively.
Many topics have been suggested <cit.>, and one of them is
the H →γ Z process via one W loop. Though less experimentally significant <cit.>,
it can also be an important example to investigate
in the unitary gauge and the R_ξ gauge to gain insight into the paradox.
It is well known that the unitary gauge can be taken as a limit of the general R_ξ
gauge (but can also be defined independently <cit.>), a limit that does not commute with the loop integrations <cit.>, so that much care
has to be taken when it is applied to loop calculations. In the above mentioned gauge (non)invariance paradox, several uncertainties could arise from the possible non-commutativity of various limits <cit.>. Besides, integrals of high divergence order are one of the difficulties.
Similar to the H →γγ process via a W loop,
the H →γ Z process via one W loop in the unitary gauge also has many integrals of high divergence order for each single diagram, and these must cancel properly; otherwise one cannot get the correct result, nor make the comparison with results from other gauges.
Terms proportional to M_Z^2, not encountered in the H →γγ process, cause new difficulties.
The purpose of this paper is to apply the experience obtained from the investigation of the H →γγ process via one W loop <cit.>, i.e.,
to avoid setting independent integrated loop momenta at the beginning and to eschew shifts of the integration variables for divergences of high order, in order to provide a systematic framework that yields the finite and electromagnetic U(1) gauge invariant amplitude of the H →γ Z process via one W loop for further study.
For any diagram whose divergence order is higher than logarithmic, shifting the integration momentum can lead to extra terms of lower divergence (or finite ones).
In such cases, the proper set of diagrams with the correct inter-relations of the loop momenta must be treated together
to get the correct result, as pointed out by <cit.> (in the following we refer to these two papers and the works therein as GWW). Shifting the momenta in only a part of the diagrams of the set will change the result. This problem can be solved by the original Dyson formulation, the 'Dyson scheme' as it is called in <cit.>, without the ambiguity of setting independent loop momenta at the beginning, and the shift of integration variables in integrals of high divergence order can be, and is, eschewed. Correspondingly, our cancellations of divergences are all at the integral level rather than at the integrand level.
However, a wise setting of the independent integrated momenta as in GWW, or employing the Dyson scheme <cit.>, is inadequate to definitely determine the final result in the unitary gauge
because of the presence of the surface terms in the reduction of the divergent tensor integrals. In the H →γ Z process,
the same logarithmically divergent tensor integral appears as in H →γγ <cit.>; but there are also new quadratic ones, especially in the terms proportional to M_Z^2, which will be investigated in this paper.
The considerations and key points of employing the Dyson scheme are as follows:
If one thinks that a high-divergence-order (> logarithmic) integral from a Feynman diagram is changed when the integrated momentum is shifted, one inevitably faces the question of what the 'original' expression/value to be changed is. There may not exist an 'original' one for a single diagram, once one recognizes that different Feynman diagrams, and hence their loop momenta, are related (à la GWW). However, one can have definiteness by starting from the original form derived from the perturbative expansion of the S-matrix according to the standard Dyson-Wick procedure <cit.>, which consists of integrations over space-time at each perturbative order. Once these space-time integrations are taken [leaving out the integrations over the momenta of each propagator; this in fact exchanges the order of integration between phase space and configuration space.], we get δ functions, one for each vertex, relating all the momenta of the propagators by energy-momentum conservation <cit.>. If we start from such a form for each diagram, without integrating out the propagator momenta and δ functions, there is no indefiniteness, nor any ambiguity in setting independent integration momenta.
This corresponds to the momentum-space Feynman rules being slightly modified [in fact 'recovered', see the classical paper of Dyson <cit.>, especially its Eq. (20) and the discussions before and after it.] as follows: any propagator with momentum q carries an extra [∫d^4 q/(2π)^4] 'operator', i.e., this integration over q is to be done in the calculation of the Feynman diagram;
any vertex carries an extra factor (2π)^4 δ(∑_i q_i), with q_i the momenta of all the propagators attached to the vertex, taken incoming.
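As a simple schematic illustration of these rules (a sketch only, with overall couplings and the detailed 2π bookkeeping suppressed), a bubble with two W propagators of momenta q_1, q_2 joining two vertices, with total momentum P incoming and k_1+k_2 outgoing, is written as
T_bubble∝∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2) N(q_1,q_2)/(q_1 ^2-M^2)(q_2 ^2-M^2),
with some numerator N. Integrating q_2 against the second δ function would reproduce the conventional single-loop-momentum form
(2 π)^4 δ(P-k_1-k_2)∫ d^4q_1 N(q_1,q_1-k_1-k_2)/(q_1 ^2-M^2)((q_1-k_1-k_2)^2-M^2);
which remaining momentum is declared 'the' loop momentum is precisely the kind of choice that the present scheme postpones until it is harmless.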
In this paper we adopt this way of writing the amplitude corresponding to each Feynman diagram. The method has been shown to be valid and feasible in our investigation of the H →γγ process via one W loop in the unitary gauge <cit.>. Calculating in this way also helps us eschew the shift of integrated variables in high-order divergences.
For the present γ Z case in the unitary gauge this seems the only practical way. In <cit.>, to eliminate uncertainties, the authors suggested calculating the difference between the unitary gauge and the R_ξ gauge
by first computing the difference of the integrands and then doing the loop integration. Since the difference of the integrands still
leads to high-order divergences, the choice of integrated momenta also has to be treated in the above suggested way to eliminate ambiguity.
There is another advantage in applying the Dyson scheme when treating the surface term in the reduction of divergent tensor integrals. Since the integrated momentum is just the one in the originally-defined Feynman propagator, its natural physical boundary condition can be used to determine the surface term (to be zero).
In the following section 2, we illustrate the details of the calculation carried out in the way mentioned above.
The result is finite and U_EM(1) invariant, without the need of the Dyson subtraction, once the logarithmic and quadratic surface terms are treated correctly. No regularization is introduced and all calculations are done in 4-dimensional Minkowski momentum space. During the calculation, shifts are eschewed except for integrals that are really convergent or really (at most) logarithmically divergent. As has also been pointed out in <cit.>, especially for terms with odd powers of momentum (the fermion propagator is another example), it has long been noticed that the real divergence can be worse than simple power counting suggests once all possible ways for the momentum to go to infinity are considered; this is precisely the
situation when we do not resort to regularization.
The physical implication of the finite terms proportional to M_Z^2/M^4, left after all divergences from quadratic to logarithmic are cancelled, is very interesting as well. In the γγ case, the non-vanishing M^-2 term is explained by the Goldstone effects, which are equivalently exposed in the unitary gauge by the corresponding terms in the W propagator (and go into the final result once the surface term of the logarithmic tensor reduction and the cancellation are properly treated). In the γ Z case, the Z particle has transverse as well as longitudinal components, which can expose more longitudinal (Goldstone) effects from the W propagator. This indeed shows up as the extra effects proportional to M^2_Z.
Besides the success in calculating these H decay channels, this approach also shows its power in addressing other important issues. For example, in <cit.> we demonstrate that it is very easy to
obtain both the vector current and the axial vector current conservation at the same time via this Dyson scheme [It has long been recognized that both current conservations can be obtained at the same time by setting 'the most symmetric loop momentum' as in <cit.>. The author thanks Prof. T.T. Wu for pointing out this fact.], contrary to the Bell-Jackiw claim. From such investigations, one also recognizes the important
rôle of the infinite-momentum surface integrals. So we discuss the results, the Dyson scheme, the surface terms, and the physical implications for probing the structure and properties of space-time in section 3.
This part I is devoted to the calculational details needed to obtain the electromagnetic U(1) gauge invariant result and the physical indications from this calculation procedure, especially the divergence cancellation from the surface term. Comparison with other gauge(s) and the complete results for H →γ Z are left for part II.
§ CALCULATION
§.§
There are five Feynman diagrams in total.
Three are similar to those of the H →γγ process, via the direct coupling of the Higgs to the W-boson loop. We will demonstrate these calculations in detail.
The other two proceed via the direct 3-point coupling of the Higgs to ZZ, with one Z transiting to a photon via the W bubble (3-point vertices) and the W tadpole (4-point vertex). However, these latter two diagrams sum to zero <cit.>, and the details of that calculation will not be presented in this paper. In that calculation the same surface term of the quadratic tensor integral is encountered, which will be investigated in detail for the former three diagrams in the following [As a matter of fact, these diagrams are self-energy-like, and by Lorentz and U(1)_EM gauge invariance arguments should be proportional to k_1^2 g_μα-k_1μk_1 α (μ the on-shell photon index, α some dummy index), and so are zero. But just as for the QED photon self-energy diagram, the direct calculation is non-trivial. The superficially quadratic case likewise needs the correct quadratic surface term to reach the 'proper' form, which was previously regarded as available only via a gauge invariant regularization. With the same argument, and as can also be shown explicitly with the help of the same quadratic surface term, the fermion bubble of γ Z is also zero since k_1 is on shell, i.e., k_1^2=0, k_1 ·ϵ =0. The application to the fermion case is in fact even more useful, since there we can freely work in 4-dimensional Minkowski momentum space without the ambiguity of γ^5.]. This zero result, together with the finite result of the former three diagrams, shows that this 1-loop subprocess gives no contribution to the Zγ mixing renormalization constant.
Figure 1 shows the three diagrams to be calculated in the following; k_1 is the 4-momentum of the γ.
k_2 is the 4-momentum of the Z particle, k_2^2=M_Z^2, with the corresponding polarization vectors ϵ_Zλ^ν,
λ=1,2,3, since Z is massive. We still have k_2νϵ_Zλ^ν=0, ∀λ (see Eq. <ref>). There is also
an extra factor involving the Weinberg angle θ_W for the
WWZ vertex.
By convention, the S-matrix and T-matrix are related by S=I+i T, and the matrix element between initial and final states is
i T_fi=i (2π)^4 δ(P_f-P_i) 𝔐_fi
for the space-time displacement invariant case.
Here we keep all the momenta, one for each propagator, and hence all the
δ functions, one for each vertex. The δ function corresponding to initial-final state energy-momentum conservation
is contained in these δ functions. After integrating over them, one gets the above form of the T-matrix element, where 𝔐_fi
is an integration over the independent loop momenta only, without the δ functions attached to the vertices.
This is the standard procedure in developing the Dyson-Wick perturbation theory in the interaction picture <cit.>. The four-momentum conservation
δ function attached to each of the vertices is the result of the integration over the space-time variables in the perturbative expansion of the
S-matrix, and is the manifestation of space-time displacement invariance [This requires not only that the boundary of space-time at infinity be trivial but also that there be no singular point or structure to 'block' the displacement. We conjecture this may be the condition for exchanging the integration order of configuration space and momentum space, see footnote 1. We also would like to point out that "no singular point or structure to 'block' the displacement" guarantees the investigation of the surface term at infinity in momentum space <cit.>.].
In the following, we do not integrate out the δ functions of the vertices, nor the corresponding momenta, until we have to and are allowed to
(only in the 'hidden' case, see the following).
So here we deal with the matrix elements T_fi rather than 𝔐_fi:
T_1 = -ie^2gMθ_W/(2 π)^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× (g_α ^β - q_1α q_1 ^β/M^2 ) (g ^ρσ-q_3 ^ρ q_3 ^σ/M^2 ) (g^αγ -q_2^α q_2 ^γ/M^2 )
V_βμρ (q_1, -k_1, -q_3) V_σνγ(q_3, -k_2, -q_2)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2),
T_2 = ie^2gMθ_W/(2 π)^4∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)
× (g_α ^β - q_1α q_1 ^β/M^2 ) (g^αγ -q_2^α q_2 ^γ/M^2 )
2g_μν g_βγ-g_μβ g_νγ -g_μγ g_νβ/(q_1 ^2-M^2)(q_2 ^2-M^2),
T_3 = -ie^2gMθ_W/(2 π)^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3)δ(q_3-k_1-q_2)
× (g_α ^β - q_1α q_1 ^β/M^2 ) (g ^ρσ-q_3 ^ρ q_3 ^σ/M^2 ) (g^αγ -q_2^α q_2 ^γ/M^2 )
V_βνρ (q_1, -k_2, -q_3) V_σμγ(q_3, -k_1, -q_2)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2).
Here we do not explicitly write the matrix-element subscript _fi, and all the δ functions are understood as four-dimensional ones,
i.e., δ(P_1-P_2) :=δ^4 (P_1-P_2). As in GWW, we also omit the polarization vectors, so, e.g., T_1 should be understood as T_1μν.
Throughout this paper, we use M to denote the W mass.
The relation between T_1 and T_3, i.e., μ↔ν and k_1 ↔ k_2,
can be read off directly.
To better illustrate the calculation procedure,
we list the formulae we use repeatedly
in Appendix A. They correspond to Eqs. (2.5-2.12) of GWW <cit.> and reduce to GWW (2.5-2.12) by simply taking M_Z=0,
since there all external bosons are photons.
As in the H →γγ process, the property of the 3-particle vertex, WWγ or WWZ, which is called the Ward Identity (W.I.) in GWW, plays the key role in the evaluation,
because the extra terms q^α q^β/M^2 in the W propagator contracted with the vertex are typical of the unitary gauge.
There are more subtle elements in considering these W.I.'s, since they embody the special relations among the various propagator momenta provided by the
concrete dynamics of the standard model. Furthermore, equations (A9), (A10), and the first and third terms of (A7), (A8) are independent of the choice of integrated momenta (the (q_3^2-M^2) term cancels against the corresponding factor in the denominator, so it in fact corresponds to a g_μν term).
In the following, we investigate the terms according to their inverse powers of M. The factor -ieg^2Mcosθ_W/(2 π)^4 will not be written explicitly; all terms should be multiplied by this factor to get the
proper terms in the corresponding T amplitudes of Eqs. (<ref>-<ref>). So when we mention a term, we do not take into account the overall M factor coming from the
HWW coupling except where explicitly addressed.
Since there is still a WWγ vertex, i.e., V_βμρ (q_1, -k_1, -q_3) in T_1 and V_σμγ(q_3, -k_1, -q_2) in T_3, respectively, the M^-6 terms in T_1 and T_3 are again zero, as in the H →γγ process, according to Eq. (<ref>).
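Explicitly, writing the WWγ vertex as V_αμγ(p_1,-k_1,p_3) with p_1+p_3=k_1 (all vertex momenta incoming) and contracting with both neighbouring W-propagator momenta, a short calculation gives
p_1^α V_αμγ(p_1,-k_1,p_3) p_3^γ = p_3^2 p_3μ+(p_3-p_1)_μ (p_1· p_3)+(2k_1· p_3-p_3^2) p_1μ=(k_1· p_3) k_1μ,
which vanishes under the convention k_1μ=0 of Appendix A, i.e., upon contraction with the transverse photon polarization.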
§.§
For the M^-4 terms, in T_1 and T_3 there are 3 ways of combining two of the three q_i^α_i q_i^β_i/M^2 (i=1,2,3) factors from the W propagators.
One combination is zero because of the W.I. of the WWγ vertex, Eq. (<ref>). Another gives M_Z^2/M^4 terms because of the W.I. of the WWZ vertex, Eq. (<ref>).
The third corresponds to that of the H →γγ process and gives similar terms, besides extra M_Z^2/M^4 terms.
Those from T_1:
T”_11 = 1/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_1α q_1 ^β q_3^ρ q_3^σ g ^αγV_βμρ (q_1, -k_1, -q_3) V_σνγ(q_3, -k_2, -q_2)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)=0,
T'_11 = 1/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× g_α ^β q_3^ρ q_3^σ q_2 ^α q_2 ^γV_βμρ (q_1, -k_1, -q_3) V_σνγ(q_3, -k_2, -q_2)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
= M_Z^2/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× (-q_1^2 q_2μ q_3ν+q_1· q_2 q_1μ q_3ν) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2),
employing the corresponding W.I.'s of Appendix A. Obviously, T'_11=0 if M_Z=0, which is consistent with the H →γγ case.
T_11 = 1/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_1α q_1 ^β g^ρσ q_2 ^α q_2 ^γV_βμρ (q_1, -k_1, -q_3) V_σνγ(q_3, -k_2, -q_2)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2).
Now, since q_1 ^β V_βμρ (q_1, -k_1, -q_3)= (q_3^2-M^2) g_μρ-q_3μq_3ρ+M^2 g_μρ according to (<ref>), we have
T_11=T_111+T_112+T_113. For this kind of step we employ the W.I. for the WWγ vertex (A7) in priority over that for the WWZ vertex (A8), because it straightforwardly
gives the proper inverse power of M for each term.
T_113 = M^2/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_1· q_2 V_μνγ(q_3, -k_2, -q_2)q_2 ^γ/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
is obviously an M^-2 term and will be discussed with the other M^-2 terms later.
T_112 = 1/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_1· q_2 q_3μq_3^σ V_σνγ(q_3, -k_2, -q_2)(-q_2 ^γ)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
= M_Z^2/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_1· q_2 q_3μ q_3ν/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2),
T_111 = 1/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_1 · q_2 V_μνγ(q_3, -k_2, -q_2)q_2 ^γ/(q_1 ^2-M^2)(q_2 ^2-M^2)
= 1/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_1 · q_2 q_1 · q_2 g_μν -q_1μ q_2ν +(q_1· k_2-q_2· k_1-k_1· k_2) g_μν -M_Z^2 g_μν/(q_1 ^2-M^2)(q_2 ^2-M^2).
Here we again employ the corresponding W.I.'s of these two kinds of 3-particle vertices.
In the last step of Eq. (<ref>), we also take into account that ∫ dx f(x) δ(x-a)=∫ dx f(a) δ(x-a), using the relation q_3=q_1-k_1=q_2+k_2,
to get
(q_3^2 g_μν-q_3μq_3ν )= q_1 · q_2 g_μν -q_1μ q_2ν +(q_1· k_2-q_2· k_1-k_1· k_2) g_μν.
In fact, all the Ward identities for the 3-boson vertices used here also employ energy-momentum conservation at the vertex.
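In more detail, using q_3=q_1-k_1 on the μ side and q_3=q_2+k_2 on the ν side,
q_3μq_3ν=q_1μq_2ν+q_1μk_2ν-k_1μq_2ν-k_1μk_2ν→ q_1μq_2ν, q_3^2=(q_1-k_1)· (q_2+k_2)=q_1· q_2+q_1· k_2-q_2· k_1-k_1· k_2,
where the arrow uses the transversality convention k_1μ=k_2ν=0 of Appendix A (the omitted polarization vectors).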
We write
T_11B = T'_11+T_112
= M_Z^2/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× (-q_1^2 q_2μ q_2ν+ 2 q_1· q_2 q_3μ q_3ν)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2),
so all the M^-4 terms from T_1 are now T_11B and T_111, with the 4th- and 3rd-order divergences only in T_111 (formally similar to those in H→γγ).
Now we come to T_3; in terms of the external variables, its relation to T_1 is μ↔ν and k_1 ↔ k_2; in terms of the
internal momenta, it is q_1 ↔ -q_2 and q_3 ↔ -q_3.
For ease of investigation, we now deal with it separately, following the same thread as for T_1. Similarly, for the three ways of combination,
T”_31=0, and
T'_31 = M_Z^2/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3)δ(q_3-k_1-q_2)
× (-q_2^2 q_1μ q_1ν+q_1· q_2 q_3μ q_3ν) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2).
For T_31, we also have T_31=T_311+T_312+T_313, with
T_313 = M^2/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3)δ(q_3-k_1-q_2)
× q_1· q_2 q_1 ^β V_βνμ(q_1, -k_2, -q_3)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
is obviously an M^-2 term and will be discussed with the other M^-2 terms later.
T_312 = M_Z^2/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3)δ(q_3-k_1-q_2)
× q_1· q_2 q_3μ q_3ν/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2),
T_311 = 1/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3)δ(q_3-k_1-q_2)
× q_1 · q_2 q_1 · q_2 g_μν -q_1ν q_2μ +(q_1· k_1-q_2· k_2-k_1· k_2) g_μν -M_Z^2 g_μν/(q_1 ^2-M^2)(q_2 ^2-M^2).
We write
T_31B = T'_31+T_312
= M_Z^2/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3)δ(q_3-k_1-q_2)
× (-q_2^2 q_1μ q_1ν+ 2 q_1· q_2 q_3μ q_3ν) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2),
again, all the M^-4 terms from T_3 are now T_31B and T_311, with the 4th- and 3rd-order divergences only in T_311.
T_111 and T_311 can be combined, since in both of them q_3 appears only in the δ functions and can be integrated out.
Then the 4th- and 3rd-order divergences cancel after summing with those of T_2,
and only a quadratic one proportional to M_Z^2/M^4 is left (zero for M_Z=0).
T_111 +T_311 = 1/M^4∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)
× q_1 · q_2 2 q_1 · q_2 g_μν -q_1μ q_2ν-q_2μ q_1ν -M_Z^2 g_μν/(q_1 ^2-M^2)(q_2 ^2-M^2),
since (q_1· k_2-q_2· k_1-k_1· k_2) g_μν+ (q_1· k_1-q_2· k_2-k_1· k_2) g_μν=k_2^2 g_μν= M_Z^2 g_μν, taking now q_1-q_2=k_1+k_2 from the δ function.
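Explicitly,
(q_1· k_2-q_2· k_1-k_1· k_2)+(q_1· k_1-q_2· k_2-k_1· k_2)=(q_1-q_2)· (k_1+k_2)-2k_1· k_2=(k_1+k_2)^2-2k_1· k_2=k_1^2+k_2^2=M_Z^2.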
The M^-4 term in T_2 is
T_21 = -1/M^4∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)
× q_1 · q_2 2 q_1 · q_2 g_μν -q_1μ q_2ν-q_2μ q_1ν/(q_1 ^2-M^2)(q_2 ^2-M^2),
so T_111 +T_311+T_21=T_A:
T_A = 1/M^4∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)
-M_Z^2 /(q_1 ^2-M^2)(q_2 ^2-M^2)
× q_1 · q_2 g_μν,
= M_Z^2/M^4∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)
1 /(q_1 ^2-M^2)(q_2 ^2-M^2)
× ((k_1+k_2)^2/2-[(q_1^2-M^2)+(q_2^2-M^2)]/2-M^2) g_μν.
All the uncancelled terms of M^-4 order, T_A, T_11B, T_31B,
are proportional to M_Z^2, and so vanish when the Z mass goes to zero.
Since M_Z is a constant parameter of the dynamics,
this result is not due to an improper choice of the integrated
variables; as a matter of fact, all the above derivation is independent of any special choice of the integrated variables.
In the integral T_A, the momentum q_3 does not appear (or only appears in the δ functions and is integrated out). This case, which has already appeared above, is called 'hidden' in the following.
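For instance, in T_A the two δ functions involving q_3 combine as
∫ d^4q_3 δ(q_1-k_1-q_3) δ(q_3-k_2-q_2)=δ(q_1-q_2-k_1-k_2),
so after the q_3 integration the integrand depends only on q_1 and q_2; this is what is meant by a 'hidden' momentum.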
We have
T_11B = M_Z^2/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3) δ(q_3-k_2-q_2)
× 1 /(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2)
× (-(k_1+k_2)^2 q_3μq_3ν+q_2^2 q_3μq_3ν+(q_3^2-M^2)k_2μq_2ν+M^2 k_2μq_2ν+2 k_1· q_3k_2μq_3ν);
T_31B = M_Z^2/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3) δ(q_3-k_1-q_2)
× 1 /(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2)
× (-(k_1+k_2)^2 q_3μq_3ν+q_1^2 q_3μq_3ν-(q_3^2-M^2)k_2μq_1ν-M^2 k_2μq_1ν+2 k_1· q_3k_2μq_3ν).
In the above equations, (q_1-q_2)=(k_1+k_2) is used. In the last line of each of Eqs. (<ref>,<ref>, <ref>), we have decomposed them into various terms with various orders of divergence.
Before discussing the M_Z^2/M^4 terms, we first collect the uncancelled finite M_Z^2/M^2 terms appearing in the above equations for further investigation.
The ones read directly from Eqs. (<ref>) and (<ref>) are, respectively, the fourth term in the last line of each:
T_11B1 = M_Z^2/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3) δ(q_3-k_2-q_2)
× k_2μq_2ν/(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2) ,
T_31B1 = M_Z^2/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3) δ(q_3-k_1-q_2)
× - k_2μq_1ν/(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2) .
The denominators are not the same, since the kinematic configurations differ. Similar attention should be paid in the following.
These two terms (<ref>, <ref>) will later need to be combined to get the U_EM(1) gauge invariant term. This very subtle fact is a signal of the self-consistency of the standard model.
The linearly divergent terms in Eqs. (<ref>) and (<ref>) are, respectively, the third term in the last line of each.
The (q_3^2-M^2) factor cancels
the corresponding factor in the denominator, and the terms are then independent of q_3. Integrating over q_3 in both, they can be combined to give
M_Z^2/M^4∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)
-1 /(q_1 ^2-M^2)(q_2 ^2-M^2) k_2μ k_1ν.
Again (q_1-q_2)=(k_1+k_2) is used.
Further, dividing Eq. (<ref>) by 2, each half respectively recovers a (q_3^2-M^2) factor in the numerator and denominator, recovering a third δ
function and the integration over q_3, corresponding to T_1 and T_3. The q_3^2 term of the numerator cancels the logarithmic divergence of the fifth (last) term in the last line
of Eqs. (<ref>) and (<ref>) respectively (the remaining finite terms are given in the following).
The remaining M_Z^2/M^2 terms are
T_11B2 = M_Z^2/ M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3) δ(q_3-k_2-q_2)
× k_2μk_1ν/2 /(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2) ,
T_31B2 = M_Z^2/ M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3) δ(q_3-k_1-q_2)
× k_2μk_1ν/2 /(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2) .
A similar cancellation as above is also carried out for the first term of the last line
of Eqs. (<ref>) and (<ref>) respectively, with the first term of the last line of the equation for T_A (the remaining finite terms are given in the following),
and the remaining M_Z^2/M^2 terms from T_A (coming from the (q_3^2-M^2) factor in both numerator and denominator, recovering the ∫ dq_3 integration) are
T_A11 = M_Z^2/ M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3) δ(q_3-k_2-q_2)
× -(k_1+k_2)^2 g_μν/4/(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2) ,
T_A31 = M_Z^2/ M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3) δ(q_3-k_1-q_2)
× - (k_1+k_2)^2 g_μν/4 /(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2) .
In the above two logarithmic divergence cancellations,
the nonzero finite M_Z^2/M^4 terms are
T_11BF = M_Z^2/ M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3) δ(q_3-k_2-q_2)
× 1/4 (k_1+k_2)^2 q_3^2 g_μν-(k_1+k_2)^2 q_3μq_3ν- 1/2 q_3^2 k_2μ k_1ν +2 k_1· q_3 k_2 μ q_3ν/(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2) ,
T_31BF = M_Z^2/ M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3) δ(q_3-k_1-q_2)
× 1/4 (k_1+k_2)^2 q_3^2 g_μν-(k_1+k_2)^2 q_3μq_3ν- 1/2 q_3^2 k_2μ k_1ν +2 k_1· q_3 k_2 μ q_3ν/(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2) .
For the above six remaining non-zero finite terms, contrary to (<ref>, <ref>), it can be shown that Eqs. (<ref>) to (<ref>) are cancelled by those terms in T_11BF and T_31BF proportional to an extra M^2 factor. The terms remaining in T_11BF and T_31BF are all proportional to (k_1· k_2 g^μν-k_2^μ k_1^ν)M_Z^2/M^4. This in fact can only be proved after the integration over the Feynman-Schwinger parameters x_1, x_2, by which the extra terms other than the gauge invariant one integrate to zero.
The coefficient of (k_1· k_2 g^μν-k_2^μ k_1^ν)M_Z^2/M^4 is then
i/(4 π)^2∫ dx_1 dx_2 dx_3 δ(x_1+x_2+x_3-1)x_1x_2 (2 k_1· k_2+M_Z^2)/x_1x_2 2 k_1· k_2-x_2^2M_Z^2+x_2M_Z^2-M^2.
The result is of the form
1/2+O(M_Z^2/(2 k_1· k_2))+···,
which guarantees non-zero M_Z^2/M^4 terms in the unitary gauge.
This term is the most important part of the U_EM(1) gauge invariant final result, in the sense that it is crucial for comparing with the results from other gauges (e.g., the R_ξ gauge). For ease of collecting the whole result, we mark it as R1.
Now the terms including quadratic divergences to be considered are:
1/M^4∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)
M_Z^2 /(q_1 ^2-M^2)(q_2 ^2-M^2)
× (-[(q_1^2-M^2)+(q_2^2-M^2)]/2-M^2) g_μν,
1/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3) δ(q_3-k_2-q_2)
× M_Z^2 ((q_2^2-M^2)+M^2) /(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2) q_3μq_3ν,
1/M^4∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3) δ(q_3-k_1-q_2)
× M_Z^2 ((q_1^2-M^2)+M^2) /(q_1 ^2-M^2)(q_2 ^2-M^2) (q_3^2-M^2) q_3μq_3ν.
But to get a clear and definite cancellation of the quadratic divergence, from the above three equations we still have to take out again the logarithmically divergent (M_Z^2/M^2) terms, to be considered in the next subsections (they are indeed necessary to cancel divergences there, which also validates this derivation):
M_Z^2/M^2∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(k_1+k_2-q_1+q_2) - g_μν/(q_1 ^2-M^2) (q_2^2-M^2),
and
M_Z^2/M^2 [∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3) δ(q_3-k_2-q_2)
+ ∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_2-q_3) δ(q_3-k_1-q_2)]
× q_3μ q_3ν/(q_1 ^2-M^2) (q_2 ^2-M^2) (q_3^2-M^2) .
So the quadratic divergence from Eq. (<ref>) is
T_Aq= M_Z^2/M^4 (2 π)^4 δ(P-k_1-k_2) - g_μν/2 (∫ d^4q_1 1 /(q_1 ^2-M^2)+ d^4q_2 1 /(q_2 ^2-M^2)),
after cancelling the common factor between numerator and denominator, and integrating over the variable appearing only in the δ functions.
For the quadratic pieces in Eqs. (<ref>) and (<ref>), respectively,
after cancelling the common factor between numerator and denominator, and integrating over the variable appearing only in the δ functions,
they become
T_11Bq= M_Z^2/M^4∫ d^4q_1 d^4q_3 (2 π)^4 δ(P-q_1+q_3-k_2) δ(q_1-k_1-q_3) q_3μq_3ν/(q_1 ^2-M^2) (q_3^2-M^2),
T_31Bq= M_Z^2/M^4∫ d^4q_2 d^4q_3 (2 π)^4 δ(P+q_2-q_3-k_2) δ(q_3-k_1-q_2) q_3μq_3ν/(q_2 ^2-M^2) (q_3^2-M^2) .
In the above derivations, we take many 'petty' steps to extract various terms which are finite and definite, or divergent only logarithmically, and at last arrive at the above three quadratic terms, which are to be summed and cancelled. The reason is just that we eschew any shift of the integrated variables in integrals of high divergence order. The subtle way in which they (apart from the above U(1) invariant M_Z^2/M^4 term) are needed to cancel divergences and to reach the U(1) invariant form is of course a nontrivial guarantee of the derivation.
The above three terms sum to zero because of the quadratic surface term, which we now investigate.
Similar to the relation for the logarithmic tensor integral (here we only write the integrands)
∂_(l)^μl^ν/(l^2-Δ)^2=g^μν/(l^2-Δ)^2-4 l^μ l^ν/(l^2-Δ)^3=l^2 g^μν- 4 l^μ l^ν/(l^2-Δ)^3-Δ g^μν/(l^2-Δ)^3,
we have the quadratic one
∂_(l)^μl^ν/(l^2-Δ)=g^μν/(l^2-Δ)-2 l^μ l^ν/(l^2-Δ)^2=l^2 g^μν- 2 l^μ l^ν/(l^2-Δ)^2-Δ g^μν/(l^2-Δ)^2.
When we take the integrated surface term to be zero, we obtain the tensor reduction formula we need. Why the surface term can be taken to be zero, considering the boundary condition at infinity inherent in the free Feynman propagator in momentum space <cit.>,
and the indication that this momentum-space divergence acts as a sensitive probe of local properties of space-time and vice versa, are to be investigated in the discussion section.
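Explicitly, once the integrated surface terms are taken to be zero, the two relations above give the tensor reduction formulae used in this paper,
∫ d^4l l^μ l^ν/(l^2-Δ)^3=g^μν/4∫ d^4l 1/(l^2-Δ)^2, ∫ d^4l l^μ l^ν/(l^2-Δ)^2=g^μν/2∫ d^4l 1/(l^2-Δ),
for the logarithmic and quadratic cases respectively.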
This quadratic form also appears in the QED photon self-energy. It is easy, with the above formula, to obtain the electromagnetically gauge invariant form (W.I.), proportional only to a logarithmic pole, which only affects the residue of the photon propagator and is absorbed by the coupling constant renormalization <cit.>.
By renaming the dummy integration variables according to the relations set by the δ functions,
Eq. (<ref>) and Eq. (<ref>) become
q_1 → l_1, q_3 → l_2:
T_11Bq= M_Z^2/M^4∫ d^4l_1 d^4l_2 (2 π)^4 δ(P-(l_1-l_2)-k_2) δ(l_1-l_2-k_1) l_2μl_2ν/(l_1 ^2-M^2) (l_2^2-M^2),
q_2 → l_2, q_3 → l_1:
T_31Bq= M_Z^2/M^4∫ d^4l_1 d^4l_2 (2 π)^4 δ(P-(l_1-l_2)-k_2) δ(l_1-l_2-k_1) l_1μl_1ν/(l_1 ^2-M^2) (l_2^2-M^2) .
Effectively these two integrals equal the following two integrals
T'_11Bq = M_Z^2/M^4∫ d^4l_1 d^4l_2 (2 π)^4 δ(P-(l_1-l_2)-k_2) δ(l_1-l_2-k_1) l_2μl_2ν/ (l_2^2-M^2)^2
= M_Z^2/M^4∫ d^4l_2 (2 π)^4 δ(P-k_1-k_2) l_2μl_2ν/ (l_2^2-M^2)^2,
T'_31Bq = M_Z^2/M^4∫ d^4l_1 d^4l_2 (2 π)^4 δ(P-(l_1-l_2)-k_2) δ(l_1-l_2-k_1) l_1μl_1ν/(l_1 ^2-M^2)^2
= M_Z^2/M^4∫ d^4l_1 (2 π)^4 δ(P-k_1-k_2) l_1μl_1ν/(l_1 ^2-M^2)^2.
In the second line of each of the above two equations, the momentum appearing only in the δ functions has been integrated out.
It is obvious that (<ref>) + (<ref>) + (<ref>)=0 according to the above quadratic surface integral formula, so that (<ref>) + (<ref>) + (<ref>)=0, i.e., T_Aq+T_11Bq+T_31Bq=0.
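Indeed, by the quadratic reduction formula each of T'_11Bq and T'_31Bq equals
M_Z^2/M^4 (2 π)^4 δ(P-k_1-k_2) (g_μν/2)∫ d^4l 1/(l^2-M^2),
while T_Aq is exactly minus twice this quantity (its two tadpole integrals being dummy copies of the same integral), so that T_Aq+T'_11Bq+T'_31Bq=0.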
In the following we show that ((<ref>) + (<ref>))- ((<ref>) + (<ref>))=0, i.e.,
(T_11Bq+ T_31Bq) -(T'_11Bq +T'_31Bq)=0:
(<ref>)- (<ref>) is
T_11Bq- T'_11Bq=- M_Z^2/M^4∫ d^4l_1 d^4l_2 (2 π)^4 δ(P-(l_1-l_2)-k_2) δ(l_1-l_2-k_1) l_2μl_2ν 2 l_2· k_1/(l_1 ^2-M^2) (l_2^2-M^2)^2,
(<ref>)- (<ref>) is
T_31Bq-T'_31Bq= M_Z^2/M^4∫ d^4l_1 d^4l_2 (2 π)^4 δ(P-(l_1-l_2)-k_2) δ(l_1-l_2-k_1) l_1μl_1ν 2l_1· k_1 /(l_1 ^2-M^2)^2 (l_2^2-M^2) .
The above two terms equal, again by the quadratic surface formula,
- M_Z^2/M^4∫ d^4l_1 d^4l_2 (2 π)^4 δ(P-(l_1-l_2)-k_2) δ(l_1-l_2-k_1) g_μν l_2· k_1/(l_1 ^2-M^2) (l_2^2-M^2) +S_1,
M_Z^2/M^4∫ d^4l_1 d^4l_2 (2 π)^4 δ(P-(l_1-l_2)-k_2) δ(l_1-l_2-k_1) g_μν l_1· k_1 /(l_1 ^2-M^2) (l_2^2-M^2)+S_2 .
The facts expressed in the δ functions are that l_1-l_2=k_1, k_1^2=0, and hence l_1· k_1=l_2· k_1. So the two terms explicitly written above cancel, leaving the surface-related terms S_1, S_2.
S_1+S_2= l_2· k_1/(l_1 ^2-M^2)∂_μl_2ν/(l_2^2-M^2)- l_1· k_1/(l_2 ^2-M^2)∂_μl_1ν/(l_1^2-M^2).
(Here we neglect the integral symbol and the δ functions, and employ the fact ∂ _(l_1)μ=∂ _(l_2)μ=∂ _μ.)
S_1+S_2=-2 l_2· k_1 l_1μ/(l_1 ^2-M^2)^2l_2ν/(l_2^2-M^2)+2 l_1· k_1 l_2μ/(l_2 ^2-M^2)^2l_1ν/(l_1^2-M^2) +surface term.
Here we use ∂ _μ l· k_1=k_1μ=0. The surface term comes from the total derivative
∂_μ [ l_2· k_1/(l_1 ^2-M^2)l_2ν/(l_2^2-M^2)- l_1· k_1/(l_2 ^2-M^2)l_1ν/(l_1^2-M^2)],
and integrates to zero.
Again with l_1· k_1=l_2· k_1, but l_1ν-l_2ν=k_1ν,
T_11Bq- T'_11Bq+T_31Bq-T'_31Bq=S_1+S_2= -(T_11Bq- T'_11Bq+T_31Bq-T'_31Bq)
+ 2 l_1· k_1 l_1μ/(l_1 ^2-M^2)^2k_1ν/(l_2^2-M^2)+2 l_2· k_1 l_2μ/(l_2 ^2-M^2)^2k_1ν/(l_1^2-M^2).
The second line is now logarithmic, so we can employ the Feynman-Schwinger parameterization to calculate it directly. It leads to a factor k_1μ and is therefore zero, and then T_11Bq- T'_11Bq+T_31Bq-T'_31Bq is solved to be zero.
q.e.d.
Here we emphasize again that this 'petty' derivation is given just to show the importance of the surface term and the possibility of eschewing any shift for high-order divergences.
Now we see that the total result for the terms proportional to M_Z^2/M^4 is finite, non-zero, and U(1) gauge invariant.
The M_Z^2/M^2 terms left after all the above cancellations will be shown to be necessary for the cancellations in the following.
From the above, we learn that calculating in the way introduced in this paper, especially not integrating the δ functions before we have to and are allowed to (once a variable appears only in the δ functions, it is a dummy and the integrand is independent of it),
provides the exact definition of the Feynman diagram.
One may suspect that for the most general case of the calculation of Feynman diagrams, the proper way of
setting the independent
integrated variables at the beginning, as done by GWW <cit.>, may not be available. So calculating without integrating the δ functions until one has to is a more proper, or maybe necessary, way of employing the Feynman rules.
As for the surface term, it is consistent with the naïve symmetric integration for convergent integrals. The surface integral formula is also consistent with a D-dimensional regularization calculation (which is always considered as convergent), and shares a similar spirit with 'IBP'.
But it is unclear whether all the above calculations done in D dimensions would give the same result.
§.§
We now investigate the M^-2 terms (T_113 (<ref>), T_313 (<ref>), T_11B1 (<ref>), T_31B1 (<ref>), and Eqs. (<ref>), (<ref>), (<ref>) included).
Besides the M^-2 terms listed in brackets above, we need to investigate the following. From T_1:
T_12 = -1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× g_α ^β g ^ρσ q_2^α q_2 ^γV_βμρ (q_1, -k_1, -q_3) V_σνγ(q_3, -k_2, -q_2)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2),
T_13 = -1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× g_α ^β q_3 ^ρ q_3 ^σ g^αγV_βμρ (q_1, -k_1, -q_3) V_σνγ(q_3, -k_2, -q_2)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2),
T_14 = -1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_1α q_1 ^βg ^ρσg^αγV_βμρ (q_1, -k_1, -q_3) V_σνγ(q_3, -k_2, -q_2)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2).
T_12 and T_14 both give a (q_3^2-M^2) term in the numerator when the Ward identity is applied directly, which can cancel the corresponding factor in
the denominator and combine with T_22+23,
T_22+23 = 1/M^2∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)
× 2q_1^2 g_μν + 2q_2^2 g_μν-2 q_1 μ q_1 ν-2 q_2 μ q_2 ν/(q_1 ^2-M^2)(q_2 ^2-M^2).
T_12=T_121+T_122+T_123+T_124:
T_121 = -1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_2^βV_βμν (q_1, -k_1, -q_3)/(q_1 ^2-M^2)(q_2 ^2-M^2);
T_122 = -1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_2^βV_βμσ (q_1, -k_1, -q_3) (-q_3 ^σ) q_3ν/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2);
T_123 = -1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_2^βV_βμν (q_1, -k_1, -q_3) M^2/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
is an M^0 term;
T_124 = -1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_2^βV_βμν (q_1, -k_1, -q_3) (-M_Z^2)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2).
T_14=T_141+T_142+T_143:
T_141 = -1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× V_μνγ (q_3, -k_2, -q_2)/(q_1 ^2-M^2)(q_2 ^2-M^2)q_1^γ;
T_142 = 1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× q_3μ q_3 ^σ V_σνγ (q_3, -k_2, -q_2) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2) q_1^γ;
T_143 = -1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× M^2 V_μνγ (q_3, -k_2, -q_2) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2) q_1^γ
is an M^0 term.
This discussion also applies to T_3, so in the following we investigate T_121+T_141+T_321+T_341+T_22+23. We use
q_2=q_1-P, so the W.I. can be used again, while the extra term -P^β
V_βμν (q_1, -k_1, -q_3) will combine with the corresponding term from T_141.
We also use q_1=q_2+P, and the extra term there is V_μνγ (q_3, -k_2, -q_2)P^γ.
We find that the extra M_Z^2 terms from the W.I. and from (k_1+k_2)^2 just cancel, so all the rest goes as in the two-photon case <cit.>, i.e.,
T_121+T_141+T_321+T_341+T_22+23=0.
Now the remaining M^-2 terms are all from T_1 and T_3, together with the M_Z^2 terms remaining from the above subsection.
As in the two-photon case, the cancellation of the linear divergences is achieved by summing the corresponding terms from T_1 and T_3 respectively.
Those directly from T_1 are
T_113+T_13+T_122+T_124+T_142,
which must be considered together with the M_Z^2 terms: half of Eq. (<ref>), then T_11B1 (<ref>) and Eq. (<ref>).
It is easy to see that the terms without the M_Z^2 factor are quite similar to the two-photon case, but the extra terms from k_2^2=M_Z^2 are subtle.
They emerge from various equations and cancel in various ways, leading to the U(1) invariant final result, which is a manifestation of the self-consistency of the standard model.
So we separate (<ref>) into two parts: A, those explicitly without an M^2_Z factor at the beginning; B, those with one. Then we calculate A, to see which terms cancel, which give extra M_Z^2 terms to be cancelled
against extra terms from the last subsection, or to be arranged with part B, together with all the extra terms from the last subsection, leading to the final U(1) invariant results.
Here we again apply the W.I.'s and combine them to obtain a simple form of part A:
(T_113+T_13+T_122+T_142 )_A
= 1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
× (2 q_1^2 q_2μ q_2ν + 2 q_2^2 q_1μ q_1ν -4 q_1· q_2 q_1μ q_2ν + q_1· q_2 q_3^2 g_μν-q_1^2 q_2 ^2 g_μν).
It looks quadratic, but it is easy to see that it is in fact at most linear, quite similar to the two-photon case,
since
2 q_1· q_2=-(q_1-q_2)^2+q_1^2+q_2^2, then
2 q_1^2 q_2μ q_2ν + 2 q_2^2 q_1μ q_1ν -4 q_1· q_2 q_1μ q_2ν
equals to
2 q_1^2(q_2μ-q_1μ)q_2ν +2 q_2^2 q_1μ(q_1ν-q_2ν)
+2 (q_1-q_2)^2q_1μq_2 ν=2 q_1^2 (-k_2μ)q_2ν +2 q_2^2 q_1μk_1ν
+2 (k_1+k_2)^2 q_1μq_2 ν,
and
q_1· q_2 q_3^2 g_μν-q_1^2 q_2 ^2 g_μν=-(k_1+k_2)^2/2 q_3^2 g_μν+(q_1^2+q_2^2)/2 q_3^2 g_μν-q_1^2 q_2 ^2 g_μν.
However,
(q_1^2+q_2^2)/2 q_3^2 g_μν-q_1^2 q_2 ^2 g_μν= (q_1^2/2 (q_3+q_2)· k_2 -q_2^2/2 (q_3+q_1)· k_1)g_μν
hence it is also linear (q_3 and q_2 cannot combine since k_2^2=M^2_Z≠ 0).
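As an independent sanity check of the rearrangement above (purely illustrative, not part of the derivation; it uses the Python sympy library), the following short symbolic computation verifies the identity 2 q_1^2 q_2μ q_2ν+2 q_2^2 q_1μ q_1ν-4 q_1· q_2 q_1μ q_2ν=2 q_1^2(q_2-q_1)_μ q_2ν+2 q_2^2 q_1μ(q_1-q_2)_ν+2(q_1-q_2)^2 q_1μ q_2ν with the metric diag(1,-1,-1,-1):

import sympy as sp

# Components of two generic four-vectors q1, q2 (symbols q10..q13, q20..q23).
q1 = sp.symbols('q10 q11 q12 q13')
q2 = sp.symbols('q20 q21 q22 q23')
eta = sp.diag(1, -1, -1, -1)  # Minkowski metric, signature (+,-,-,-)

def dot(a, b):
    # Minkowski scalar product a.b
    return sum(eta[i, i] * a[i] * b[i] for i in range(4))

q1sq, q2sq, q1q2 = dot(q1, q1), dot(q2, q2), dot(q1, q2)
dq = [q1[i] - q2[i] for i in range(4)]  # components of (q1 - q2)
dqsq = dot(dq, dq)

for mu in range(4):
    for nu in range(4):
        lhs = 2*q1sq*q2[mu]*q2[nu] + 2*q2sq*q1[mu]*q1[nu] - 4*q1q2*q1[mu]*q2[nu]
        rhs = (2*q1sq*(q2[mu] - q1[mu])*q2[nu]
               + 2*q2sq*q1[mu]*(q1[nu] - q2[nu])
               + 2*dqsq*q1[mu]*q2[nu])
        assert sp.expand(lhs - rhs) == 0  # identity holds component by component
print("tensor identity verified for all mu, nu")

With q_1-q_2=k_1+k_2 and the conventions k_1μ=k_2ν=0, the right-hand side is exactly the form quoted above.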
Now we write
(T_113+T_13+T_122+T_142)_A
as the summation of two parts:
1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
× 2 q_1^2 (-k_2μ)q_2ν +2 q_2^2 q_1μk_1ν+ (q_1^2/2 (q_3+q_2)· k_2 -q_2^2/2 (q_3+q_1)· k_1)g_μν,
1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
× 2 (k_1+k_2)^2 q_1μq_2 ν-(k_1+k_2)^2/2q_3^2 g_μν,
i.e., the linearly and logarithmically divergent terms respectively.
Then the above linear term, after logarithmic and finite terms are further taken out of it,
is to be combined with the corresponding term from T_3, and is then reduced to two terms, one logarithmically divergent, the other finite.
Some details are:
2 q_1^2 (-k_2μ)q_2ν = -2(q_3^2-M^2)k_2μq_2ν-4q_3· k_1 k_2μq_2ν-2M^2k_2μq_2ν
2 q_2^2 q_1μk_1ν = 2 (q_3^2-M^2) q_1μk_1ν -4 q_3· k_2 q_1μk_1ν +2 M^2 q_1μk_1ν+2 M_Z^2 q_1μk_1ν
q_1^2/2 (q_3+q_2)· k_2 g_μν = (q_3^2-M^2)/2 (q_3+q_2)· k_2 g_μν
+ q_3· k_1(q_3+q_2)· k_2 g_μν
+ M^2/2(q_3+q_2)· k_2 g_μν
-q_2^2/2 (q_3+q_1)· k_1g_μν = -(q_3^2-M^2)/2 (q_3+q_1)· k_1 g_μν
+ q_3· k_2(q_3+q_1)· k_1 g_μν
- M^2/2(q_3+q_1)· k_1 g_μν - M_Z^2/2(q_3+q_1)· k_1 g_μν
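These relations follow simply from the δ-function constraints q_1=q_3+k_1 and q_2=q_3-k_2 together with k_1^2=0 and k_2^2=M_Z^2; for instance, q_1^2=(q_3+k_1)^2=(q_3^2-M^2)+2 q_3· k_1+M^2 reproduces the first line, and q_2^2=(q_3^2-M^2)-2 q_3· k_2+M^2+M_Z^2 the second.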
First of all we note that the M_Z^2 terms (some are not explicitly written above) are all finite terms and are to be calculated later.
So it is the following linear term (with the (q_3^2-M^2) factor cancelled against the common one in the denominator, (q_3+q_2) written as (2q_2+k_2), (q_3+q_1) written as (2q_1-k_1), and q_3 integrated out)
1/M^2∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)/(q_1 ^2-M^2)(q_2 ^2-M^2)
( -2k_2μq_2ν +2 q_1μk_1ν
+ 1/2((2q_2+k_2)· k_2 -(2q_1-k_1) · k_1) g_μν)
to be combined with that from T_3:
1/M^2∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)/(q_1 ^2-M^2)(q_2 ^2-M^2)
( -2k_1νq_2μ +2 q_1νk_2μ
+ 1/2((2q_2+k_1)· k_1 -(2q_1-k_2) · k_2) g_μν),
and summed with Eq. (<ref>) (!), which
then reduces to a logarithmic term. Half of their summation is then:
1/M^2∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)/(q_1 ^2-M^2)(q_2 ^2-M^2)
( 2 k_2μk_1ν - (k_1+ k_2)^2/2 g_μν)
This term can again be separated into a logarithmic term and a finite term,
1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
q_3^2 ( 2 k_2μk_1ν - (k_1 + k_2)^2/2 g_μν)
+1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
(-M^2) ( 2 k_2μk_1ν - (k_1 + k_2)^2/2 g_μν)
Hence effectively
(T_113+T_13+T_122+T_142)_A +Eq. (<ref>)/2 =T1LG_A+T1LG_B+T1F_A+T1F_B, with
T1LG_A = 1/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
× [(- 2 q_3^2 k_1· k_2 + 4 k_1 · q_3 k_2 · q_3) g_μν-4 k_1 · q_3 k_2 μ q_3 ν-4 k_2 · q_3 q_3 μ k_1 ν
+2 q_3^2 k_2μ k_1ν +4 k_1 · k_2 q_3μ q_3 ν]
T1F_A = ∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2) /(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2)
× [2 q_1μ k_1ν-2k_2μq_2ν-2k_2μk_1ν +1/2((q_3+q_2)· k_2-(q_3+q_1) · k_1+(k_1+ k_2)^2 )g_μν],
(an M^0 term to be combined into the final result), and the U(1) invariant finite term
T1F_B = M_Z^2/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× 2 q_1μ k_1ν -2 q_1 · k_1g_μν/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2).
The term T1LG_B collects the logarithmic terms with an M_Z^2 factor;
summing the M_Z^2 terms (T_113+T_13+T_124+T_142)_B, Eq. (<ref>), and T_11B1 (<ref>),
the result is
T1F_C = M_Z^2/M^2∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× 2 k_1· k_2 g_μν-2 k_2μk_1ν/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2).
The contribution of T_3 has in fact effectively been discussed, and it just doubles the above.
We see that all of the above are finite and U(1) invariant (T1F_A is to be combined in the following), and all the terms remaining from the last subsection have been cancelled or combined.
This procedure is subtle, interesting and self-consistent (thanks to the standard model).
From this subsection we obtain the U(1) EM gauge invariant terms proportional to 1/M^2, T1LG_A (with the logarithmic divergence cancelled, hence finite), as in the two-photon case; we mark this as R2. We also obtain the extra finite U(1) EM invariant terms
proportional to M_Z^2/M^2, T1F_B and T1F_C, which we mark as R3 and R4. These are also to be considered in the investigation of gauge invariance with respect to the R_ξ gauge result.
§.§
Now we turn to the M^0 terms.
The third parts of T_12 and T_14, i.e., T_123 and T_143, as well as the corresponding ones from T_3, are M^0 terms and are investigated here
together with the corresponding terms from T_2
(note that T_2 lacks an overall minus sign) and the other remaining M^0 terms from T_1 and T_3:
T_15 = ∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× g_α ^β g ^ρσ g^αγV_βμρ (q_1, -k_1, -q_3) V_σνγ(q_3, -k_2, -q_2)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2),
and T_35 is not explicitly written here.
T_24 = - ∫ d^4q_1 d^4q_2 (2 π)^4 δ(P-q_1+q_2) δ(q_1-q_2-k_1-k_2)
× g_α ^β g^αγ2g_μν g_βγ-g_μβ g_νγ -g_μγ g_νβ/(q_1 ^2-M^2)(q_2 ^2-M^2).
These four terms (half of T_24) summed still give terms of logarithmic form
∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× (-3 q_3^2 g_μν+12 q_3μ q_3ν)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2),
but all the divergences cancel, and the total summation of the terms half of T_24, T_15, T_123, T_143, as well as
T1F_A, gives exactly the analogous form to Eq. (36) of <cit.> (the H →γγ process), i.e.,
∫ d^4q_1 d^4q_2 d^4q_3 (2 π)^4 δ(P-q_1+q_2) δ(q_1-k_1-q_3)δ(q_3-k_2-q_2)
× -6 k_1· k_2 g_μν + 6 k_2μk_1ν + 3M^2 g_μν + (-3 q_3^2 g_μν+12 q_3μ q_3ν)/(q_1 ^2-M^2)(q_3 ^2-M^2)(q_2 ^2-M^2).
But pay attention: since the denominators differ because of k_2^2=M_Z^2≠ 0, this is U(1) invariant only after some of the terms integrate to zero. However, as in the two-photon case, the Dyson subtraction is
not needed, thanks to the surface term formula for the logarithmic divergence.
We mark this term as R5.
In this paper we do not intend to produce the final results. The investigation of SU(2) × U(1) gauge invariance and the combination with the fermion loops are to be done in the following paper. Here we just give the original form obtained above, R1+R2+R3+R4+R5.
§ DISCUSSION AND SPECULATION
In this paper we eliminate the paradoxes and ambiguities via the framework of the original Dyson scheme, with each propagator momentum integration kept and each δ function at each vertex not integrated out:
1, the original loop momenta are definite from the beginning when the amplitude is written down in momentum space with the Feynman rules;
2, any momentum shift for divergences worse than logarithmic is eschewed;
3, no regularization or dimensional extrapolation is used.
Some speculations on new physics beyond the standard model are based on some new symmetry which introduces a correspondence to the known standard model
particles,
and which may simplify the renormalization dramatically via divergence cancellations, especially high-order divergence cancellations.
So to properly write down these divergences and then to properly cancel them becomes a very crucial point.
To proceed with the concrete calculation, the evaluation of the surface terms <cit.> for the loop momentum going to infinity is very important to eliminate any uncertainty. We refer to the physical boundary condition of the scattering S-matrix (static) problem to determine the surface term to be zero <cit.>, which is taken as undetermined in <cit.>.
In this paper we encounter and determine the quadratic (<ref>) and logarithmic (<ref>) ones.
Besides H →γγ, H →γ Z and the photon and photon-Z self-energy diagrams <cit.>,
these also appear elsewhere, e.g., in the axial anomaly. The Dyson scheme and the 'most symmetric loop momentum' (see footnote 3) are both in contradiction with the Bell-Jackiw claim. There, there is no need to refer to the surface term, but we can see that the surface term, determined in the same way as above via the physical boundary condition of the free propagator, gives
the consistency <cit.>.
However, one must accept that in the most general consideration, which is not restricted to the physical boundary condition of the free propagator in Minkowski momentum space (the scattering S-matrix static problem), whether the surface term equals zero depends on the
concrete condition at the infinite-momentum surface, and hence on the space-time
structure, geometry or topology, which is very crucial.
For example, when the divergence of the chiral current is calculated via a concrete QFT defined on a specified space-time manifold, its value can be zero or not. This is in fact taken as an input to generate much interesting physics which has been widely studied, e.g., the relation with the Atiyah-Singer index/Pontryagin class, 't Hooft symmetry breaking, etc.
The lesson we learn here is that, for a concrete physical problem,
the Atiyah-Singer index by no means must equal a nonzero integer, but must be determined by the proper calculation of the QFT defined on the manifold, as well as by the physical conditions, especially in special limits.
A classical original example is the Polyakov solution in Euclidean space-time <cit.>. Here we only want to point out that besides the index q ≠ 0 case, which has generated many interesting speculations so far, the q=0 case also exists and can be considered as a special limit of the q=1 solution.
One trivial case is when the group element g in that paper is independent of the space-time point on the large sphere S_3, i.e., it becomes a global gauge transformation. This is more consistent with confinement than a pure gauge.
Another case is when x_4=- it can be neglected; the index q can again be calculated to be zero. This may indicate a triviality of the high-energy behaviour with the axial current conserved, sharing a similar spirit with the divergence cancellation in the present paper, with the energy larger than the 3-momentum on the infinite-momentum surface.
So it is important to clarify the concrete conditions for the divergence (non)cancellation, i.e., for the surface term (= or ≠ 0).
The concrete manifold of space-time can make a vanishing quantity nonzero, make a finite quantity non-finite, can
make a renormalizable field theory non-renormalizable, or require more operators to close the theory, etc.
Among other things, the anomaly is very important in view of the 't Hooft way of obtaining charge non-conservation. This can
explain the non-conservation of the U(1) number and the CP violation of the early universe, and the restoration of conservation now, based on the difference in the properties of space-time.
In other words, divergent integrals (or their cancellation, or the concrete value of the surface term) in the calculation of QFT in momentum space are a sensitive probe of the topology or of other local structures (holes...) of
space-time.
For the study of cosmology,
the above discussion of the (ultraviolet) divergence cancellation possibly depending on the concrete (local) structure of spacetime implies an interesting speculation. The 'historical versions' of QFT's vary with the evolution of the universe. QFT's can be defined on various mathematical structures of spacetime. In more detail,
different versions of QFT's, defined on different manifolds or more general mathematical structures corresponding to our universe (spacetime) at various special periods, can be self-consistent or self-inconsistent.
But in whatever case,
whether the divergences cancel to give finite (including zero) predictions, or do not and hence cannot give clear predictions, they can always probe the mathematical structure of the spacetime of the universe. The ultraviolet probes the local, the infrared probes the global, or the failure just manifests the special information of the spacetime which leads to the inconsistency between the spacetime and the QFT.
In the more concrete cases addressed above in this paper, the spacetime manifold of the early universe may have singularities, defects, bubble walls or other structures that cause the anomaly, giving the CP violation and U(1) charge non-conservation needed for baryogenesis; these structures of the spacetime manifold have by now evolved/expanded so that spacetime locally becomes Minkowski, with no anomaly and no violations but only conservation of the corresponding currents, which is consistent with experiments and which otherwise has to be obtained as a cancellation in a theoretical framework without an inherent elimination. At the same time these early defects or structures can also play the role of 'seeds' of the curvature of spacetime (the primordial curvature perturbation) via the homogeneous Einstein equations; i.e., this curvature is not caused by 'matter' (the inhomogeneous term in the Einstein equations) but is observed as 'dark matter' when the universe evolves to today.
§ ACKNOWLEDGMENTS
The authors greatly thank Prof. Tai Tsun Wu for stimulating the topic related to the gauge paradox, and for encouragement and many instructive discussions.
This work is supported in part by National Natural Science Foundation of China (grant No. 12275157, 11775130, 11635009).
§ MATHEMATICS FOR H →Γ Z CORRESPONDING TO EQS. (2.5-2.12) OF GWW <CIT.>
(These formulae reduce to the corresponding GWW formulae, which we employed above for calculating the H →γγ process, once M_Z=0 is taken.)
k_1^2=0, k_2^2=M_Z^2; k_1μ=k_2ν=0.
(k_1+k_2)^2=2 k_1 · k_2 +M_Z^2=M_H^2.
V_αβγ (p_1, p_2, p_3)= (p_2-p_3)_αg_βγ +(p_3-p_1)_βg_γα+(p_1-p_2)_γg_αβ;
p_1+p_2+p_3=0 (incoming).
p_1^α V_αβγ (p_1, p_2, p_3)=(p_3^2g_βγ-p_3 βp_3 γ)-(p_2^2g_βγ-p_2 βp_2 γ)
V_αβγ (p_1, p_2, p_3) p_3 ^γ =-(p_1^2 g_αβ-p_1α p_1β) +(p_2^2 g_αβ-p_2α p_2β)
p_1^α V_αμγ (p_1, -k_1, p_3)=p_3^2g_μγ-p_3 μp_3 γ
V_αμγ (p_1, -k_1, p_3) p_3 ^γ =-(p_1^2 g_αμ-p_1α p_1μ)
p_1^α V_ανγ (p_1, -k_2, p_3)=p_3^2g_νγ-p_3 νp_3 γ-M_Z^2 g_νγ
V_ανγ (p_1, -k_2, p_3) p_3 ^γ =-(p_1^2 g_αν-p_1α p_1ν)+M_Z^2 g_αν
p_1^α V_αμγ (p_1, -k_1, p_3)=(p_3^2-M^2)g_μγ-p_3 μp_3 γ+M^2 g_μγ
V_αμγ (p_1, -k_1, p_3) p_3 ^γ =-[(p_1^2-M^2) g_αμ-p_1α p_1μ]-M^2 g_αμ
p_1^α V_ανγ (p_1, -k_2, p_3)=(p_3^2-M^2)g_νγ-p_3 νp_3 γ+(M^2-M_Z^2) g_νγ
V_ανγ (p_1, -k_2, p_3) p_3 ^γ =-[(p_1^2-M^2) g_αν-p_1α p_1ν]-(M^2-M_Z^2) g_αν
p_1^α V_αμγ (p_1, -k_1, p_3) p_3 ^γ=0
p_1^α V_ανγ (p_1, -k_2, p_3) p_3 ^γ=-M_Z^2 p_3ν=M_Z^2p_1ν
wein73
S. Weinberg, Phys. Rev. D 7 (1973) 1068.
Wu:2017rxt
T. T. Wu and S. L. Wu,
Nucl. Phys. B 914 (2017) 421.
doi:10.1016/j.nuclphysb.2016.11.007
Wu:2016nqf
T. T. Wu and S. L. Wu,
Int. J. Mod. Phys. A 31 (2016) no.04n05, 1650028.
doi:10.1142/S0217751X16500287
Gastmans:2015vyh
R. Gastmans, S. L. Wu and T. T. Wu,
Int. J. Mod. Phys. A 30 (2015) no.32, 1550200.
doi:10.1142/S0217751X15502000
Gastmans:2011wh
R. Gastmans, S. L. Wu and T. T. Wu,
arXiv:1108.5872 [hep-ph].
Gastmans:2011ks
R. Gastmans, S. L. Wu and T. T. Wu,
arXiv:1108.5322 [hep-ph].
Duch:2020was
P. Duch, M. Dütsch and J. M. Gracia-Bondía,
Eur. Phys. J. C 81 (2021) no.2, 131
doi:10.1140/epjc/s10052-021-08898-z
[arXiv:2011.12675 [hep-ph]].
Aad:2014fia
G. Aad et al. [ATLAS Collaboration],
Phys. Lett. B 732 (2014) 8
doi:10.1016/j.physletb.2014.03.015
[arXiv:1402.3051 [hep-ex]].
Chatrchyan:2013vaa
S. Chatrchyan et al. [CMS Collaboration],
Phys. Lett. B 726 (2013) 587
doi:10.1016/j.physletb.2013.09.057
[arXiv:1307.5515 [hep-ex]].
Li:2017hnv
S. Y. Li, Z. G. Si and X. F. Zhang,
arXiv:1705.04941 [hep-ph].
Dyson:1949ha
F. J. Dyson,
Phys. Rev. 75 (1949) 1736.
doi:10.1103/PhysRev.75.1736
Bao:2021byx
S. S. Bao, S. Y. Li and Z. G. Si,
[arXiv:2109.14835 [hep-ph]].
Ferreira:2011cv
L. C. Ferreira, A. L. Cherchiglia, B. Hiller, M. Sampaio and M. C. Nemes,
Phys. Rev. D 86 (2012), 025016
doi:10.1103/PhysRevD.86.025016
[arXiv:1110.6186 [hep-th]].
Belavin:1975fg
A. A. Belavin, A. M. Polyakov, A. S. Schwartz and Y. S. Tyupkin,
Phys. Lett. B 59 (1975), 85-87
doi:10.1016/0370-2693(75)90163-X
|
http://arxiv.org/abs/2306.02990v1
|
20230605160133
|
Integrated Sensing, Computation, and Communication for UAV-assisted Federated Edge Learning
|
[
"Yao Tang",
"Guangxu Zhu",
"Wei Xu",
"Man Hon Cheung",
"Tat-Ming Lok",
"Shuguang Cui"
] |
cs.IT
|
[
"cs.IT",
"cs.LG",
"eess.SP",
"math.IT"
] |
Integrated Sensing, Computation, and Communication for UAV-assisted Federated Edge Learning
Yao Tang, Guangxu Zhu, Wei Xu, Man Hon Cheung, Tat-Ming Lok, and Shuguang Cui
Yao Tang and Tat M. Lok are with the Department of Information Engineering, the Chinese University of Hong Kong (CUHK), Hong Kong (e-mail: {ty018, tmlok}@ie.cuhk.edu.hk).
Guangxu Zhu is with Shenzhen Research Institute of Big Data, Shenzhen, China (e-mail: [email protected]).
Wei Xu is with the National Mobile Communications Research Lab, Southeast University, Nanjing 210096, China (e-mail: [email protected]).
Man Hon Cheung is with the Department of Computer Science, City University of Hong Kong, Hong Kong (e-mail: [email protected]).
Shuguang Cui is with the School of Science and Engineering (SSE), the Future Network of Intelligence Institute (FNii), and the Guangdong Provincial Key Laboratory of Future Networks of Intelligence, The Chinese University of Hong Kong (Shenzhen),
Shenzhen, China.
He is also with Peng Cheng Laboratory (e-mail: [email protected]).
July 31, 2023
Federated edge learning (FEEL) enables privacy-preserving model training through periodic communication between edge devices and the server.
Unmanned Aerial Vehicle (UAV)-mounted edge devices are particularly advantageous for FEEL due to their flexibility and mobility in efficient data collection.
In UAV-assisted FEEL, sensing, computation, and communication are coupled and compete for limited onboard resources, and UAV deployment also affects sensing and communication performance. Therefore, the joint design of UAV deployment and resource allocation is crucial to achieving the optimal training performance.
In this paper, we address the problem of joint UAV deployment design and resource allocation for FEEL via a concrete case study of human motion recognition based on wireless sensing.
We first analyze the impact of UAV deployment on the sensing quality and identify a threshold value for the sensing elevation angle that guarantees a satisfactory quality of data samples.
Due to the non-ideal sensing channels, we consider the probabilistic sensing model, where the successful sensing probability of each UAV is determined by its position.
Then, we derive the upper bound of the FEEL training loss as a function of the sensing probability.
Theoretical results suggest that the convergence rate can be improved if UAVs have a uniform successful sensing probability.
Based on this analysis, we formulate a training time minimization problem by jointly optimizing UAV deployment, integrated sensing, computation, and communication (ISCC) resources under a desirable optimality gap constraint.
To solve this challenging mixed-integer non-convex problem, we apply the alternating optimization technique, and propose the bandwidth, batch size, and position optimization (BBPO) scheme to optimize these three decision variables alternately.
Simulation results demonstrate that our BBPO scheme outperforms other baseline schemes regarding convergence rate and testing accuracy.
Federated edge learning, UAV deployment design, sensing-computation-communication resource allocation, integrated sensing and communication.
§ INTRODUCTION
§.§ Motivations
Advances in artificial intelligence (AI) and the Internet of Things (IoT) have shifted wireless networks from 5G's connected things to 6G's connected intelligence, with more stringent requirements for reliability and latency <cit.>.
In the pursuit of demanding requirements, federated edge learning (FEEL) has emerged as a popular distributed framework that trains global machine learning (ML) models at the edge of wireless networks <cit.>, which helps preserve data privacy and avoid long delays caused by data transmission.
Owing to the UAVs' mobility, UAV-mounted edge devices can facilitate data collection in FEEL.
A typical UAV-assisted FEEL iteration includes sensing, computation, and communication <cit.>.
Specifically, starting with a common initial model broadcast from the edge server, the UAV-mounted edge devices collect/sense data from the target and use their local sensing data to compute the model updates. Then, they upload their model updates to the edge server for further aggregation to yield an updated global model.
To fully exploit the benefits of UAV-assisted FEEL, overcoming several challenges associated with UAV deployment and limited onboard resources is crucial.
Firstly, UAV deployment affects the sensing data quality.
Due to practical constraints in sensing range and accuracy, the sensing data quality heavily relies on the sensing channel between the UAV and the target, which is controlled by the UAV position. This in turn affects the accuracy of the model <cit.>.
Secondly, UAV deployment affects the communication efficiency.
The transmission rate of local model upload depends on the communication channel between the UAV and the server. As the channel model described in <cit.>, shorter relative distances and larger elevation angles favour line-of-sight (LOS) communication links. However, this improved communication efficiency leads to a decrease in sensing quality.
Specifically, the shorter the UAV-server distance, the longer the UAV-target distance, resulting in a complex sensing environment, such as dense obstacles, which hinder the ability of UAVs to sense the target successfully.
As a result, the UAV deployment results in a tradeoff between sensing quality and communication efficiency, ultimately impacting the FEEL performance.
Thirdly, the limited onboard energy of UAVs imposes strict constraints on the duration of training.
In order to minimize the training time, it is essential to allocate resources efficiently on both the UAV and server sides.
Therefore, we focus on the UAV deployment design and resource allocation in UAV-assisted FEEL to minimize the training time.
The preliminary studies <cit.> have mainly focused on implementing classical federated learning (FL) architectures using UAVs. Typically, UAVs are deployed as FL clients with stored datasets, or aerial base stations as model aggregators.
In recent research <cit.>, UAV deployment has been integrated into the FL system, enabling UAVs to serve as relays for efficient data collection from other devices, or as aggregators with an adaptive position to enhance the FL performance.
However, these existing studies rarely account for the data sensing process and instead assume that the training data is either already available on the UAVs or can be directly acquired from other devices.
In addition, the sensing, computing, and communication processes are highly coupled in FEEL, competing for resources.
While the most relevant study <cit.> investigated the resource management of integrated sensing, computation, and communication (ISCC), it did not include UAV deployment in their problem formulation.
Therefore, we aim to address an open problem of optimizing the UAV deployment and ISCC resource allocation to enhance the training performance in UAV-assisted FEEL systems.
§.§ Contributions
In this paper, we propose a novel approach for human motion recognition using a UAV-assisted FEEL system.
The system comprises multiple UAVs and one edge server. Each UAV is considered an integrated sensing and communication (ISAC) device due to the size, weight, and power constraints <cit.>.
In each training round, the ISAC UAVs sense the target and get their local datasets through wireless sensing.
However, due to complex sensing environments such as trees, buildings, and other obstacles, UAVs may fail to sense targets successfully. Therefore, we employ a probabilistic sensing model, where the relative positions of UAVs and targets determine the successful sensing probability.
Only the UAVs that successfully sense the target can update their local models and upload them to the server for model aggregation.
To minimize the total training time, we optimize the UAV deployment and ISCC resource allocation, including bandwidth and batch size design, building upon a partial participation FEEL framework.
We aim to address three main challenges: 1) How does UAV deployment affect the sensing quality? 2) How does the probabilistic sensing model affect FEEL convergence? 3) How can we optimize the UAV deployment and ISCC resource allocation to improve the training performance?
To the best of our knowledge, this is the first work to investigate the impact of UAV deployment and ISCC resource allocation on the training performance of a UAV-assisted FEEL system.
We summarize the key results and contributions as follows:
* Impact of UAV deployment on sensing quality:
We provide a theoretical analysis of the sensing process for human motion recognition and verify it experimentally.
Our results demonstrate that the sensing quality reaches a satisfactory level when the sensing elevation angle exceeds a given threshold.
This finding suggests that it is sufficient to sense targets at or above this elevation angle threshold, so that each UAV can generate high-quality data samples.
This addresses the first challenge above.
* FEEL convergence analysis under probabilistic sensing model:
We first analyze the effect of the successful sensing probability on the convergence of FEEL by deriving the upper bound of the training loss.
Our derived theoretical results show that non-uniform successful sensing probabilities amplify the negative effects caused by data heterogeneity.
However, these negative effects can be mitigated if UAVs have uniform successful sensing probabilities.
This addresses the second challenge above.
* Optimized UAV deployment and ISCC resource allocation:
Based on solutions to the two challenges above, we formulate a joint UAV deployment and ISCC resource allocation problem to minimize the total training time under per round latency and training optimality gap constraints.
To solve this challenging mixed-integer non-convex problem, we adopt the alternating optimization technique and split it into three subproblems, which optimize the bandwidth, batch size, and UAV position, respectively.
Our proposed scheme can efficiently compute suboptimal solutions.
This addresses the third challenge above.
* Performance evaluation:
We perform extensive simulations of a specific wireless sensing task (i.e., human motion recognition) on a high-fidelity wireless sensing simulator <cit.>. Our bandwidth, batch size, and position optimization (BBPO) scheme achieves the best convergence rate and testing accuracy performance compared to other baseline schemes.
The rest of this paper is organized as follows. Section <ref> discusses the related works. Section <ref> describes the system model. Section <ref> presents the theoretical analysis of the sensing process. Section <ref> establishes the problem formulation. Section <ref> develops the UAV deployment design and ISCC resource management. Section <ref> provides the simulation results, and Section <ref> concludes this paper.
§ RELATED WORKS
Initial studies on UAV-assisted FEEL focused primarily on implementing classical FL architectures using UAVs <cit.>.
Typically, the UAVs are deployed as FL clients with stored datasets or aerial base stations as model aggregators.
To fully reap the benefits of the UAV's mobility, recent works <cit.> have integrated UAV deployment into the UAV-assisted FEEL system, enabling UAVs to efficiently collect data from other devices, or act as an aggregator with adaptive position to enhance FL performance.
For example, Ng et al. <cit.> proposed using UAVs as wireless relays to facilitate communications between the Internet of Vehicles (IoV) components and the FL aggregator, thus improving the accuracy of the FL. Similarly, Lim et al. <cit.> proposed an FL-based sensing and collaborative learning approach for UAV-enabled IoVs, where UAVs collect data from subregions and train ML models for IoVs.
In <cit.>, the authors proposed an asynchronous advantage actor-critic-based joint device selection, UAV placement, and resource management algorithm to enhance federated convergence rate and accuracy.
Additionally, the work in <cit.> introduced a new online FL scheme to improve the performance of personalized local models by jointly optimizing the CPU frequency and trajectories of the UAVs.
The work in <cit.> proposed a joint algorithm for UAV placement, power control, bandwidth allocation, and computing resources, to minimize the energy consumption of the UAV aggregator and users.
Overall, these studies <cit.> assumed that training data is readily available onboard the UAVs or can be obtained from other devices, regardless of the data sensing process. However, this assumption can significantly impact training performance since sensing, computation, and communication in UAV-assisted FEEL are interdependent and compete for resources.
The most related study <cit.> focused on integrated sensing, computation, and communication (ISCC) resource management to optimize training performance.
However, it did not consider UAV-mounted edge devices or their deployment, which could significantly affect sensing and communication performance.
Therefore, we aim to address an open problem of optimizing ISCC resource allocation and UAV deployment to enhance the training performance in a UAV-assisted FEEL system.
§ SYSTEM MODEL
As shown in Fig. <ref>, we consider a UAV-assisted FEEL system consisting of K UAVs and a server. The set of UAVs is represented as 𝒦={1,2,…, K}.
Each UAV is equipped with a single-antenna ISAC transceiver that can switch between the sensing and communication modes as needed in a time-division manner[A practical implementation of such an ISAC device via a software-defined radio platform has been demonstrated in <cit.>.].
Specifically, in the sensing mode, a dedicated radar waveform frequency-modulated continuous-wave (FMCW) <cit.> is transmitted. Then, sensing data containing the motion information of the human target can be obtained on board the UAVs by processing the received radar echo signals. This mode is used for UAVs to collect training data.
On the other hand, in the communication mode, a constant frequency carrier modulated by communication data is transmitted. This mode is used for information exchange between UAVs and the edge server.
Our UAV-assisted FEEL system aims to train an ML model for specific aerial sensing applications, e.g., target recognition, using the data wirelessly sensed by the UAVs.
§.§ Learning Model
The training process is to minimize a global loss function in a distributed manner. In particular, we define the global loss function as
F(𝐰)=1/K∑_k∈𝒦𝔼_ε∼ P_k[f_k(𝐰;ε)],
where 𝐰 represents the global model, jointly trained on all UAVs and coordinated with the edge server. The function f_k(𝐰;ε) is the local loss function for UAV k, and ε is a random seed with distribution P_k, of which its realization represents a batch of samples. To facilitate the subsequent analysis, we define F_k(𝐰)=𝔼_ε∼ P_k[f_k(𝐰;ε)].
The training process is iterated in multiple communication rounds. The system repeats the following five steps in each round n until the global model converges (see Fig. <ref>).
* Global model broadcasting:
The server broadcasts the current global model 𝐰^(n) to each UAV via the wireless broadcast channel.
* Target sensing:
Each UAV switches to the sensing mode and transmits dedicated FMCW signals for target sensing.
We assume that a UAV can only sense the target successfully if it has an unobstructed line-of-sight (LOS) sensing link to the target.
Due to the unpredictable sensing environment, we utilize the LOS probabilistic model[The formulation and the analysis can be readily applied to other probabilistic sensing models described in <cit.>.] for sensing.
We define the 3-dimensional coordinates of UAV k as u_k=(x_u,k,y_u,k,z_u,k), where z_u,k represents the flying altitude.
The 3-dimensional coordinates of the sensed target are v_k=(x_v,k,y_v,k,z_v,k), where z_v,k is the altitude of the corresponding target.
The successful sensing probability of UAV k is <cit.>
q_s,k( u_k)=1/1+ψexp(-ζ[θ_s,k( u_k)-ψ]),
where ψ and ζ are constant values determined by the type of environment, θ_s,k( u_k) is the sensing elevation angle between UAV k at position u_k and its sensed target.
More specifically, θ_s,k( u_k)=180^∘/π×sin^-1 |z_u,k-z_v,k/d_s,k|, where d_s,k= u_k- v_k is the Euclidean distance between UAV k and the sensed target.
Once UAV k successfully senses the target in round n, it obtains a batch of data samples with size δ_k.
The batch size δ_k can vary adaptively across different UAVs but remain unchanged across all training rounds.
* Local gradient updating:
Each UAV that successfully senses the target updates its local gradient by running one step of the stochastic gradient descent (SGD) from 𝐰^(n), i.e.,
𝐠_k^(n)=1/δ_k∑_ξ∈𝒟_k^(n)∇ f_k(𝐰^(n);ξ).
For UAVs that do not successfully sense the targets, their local gradients will not be updated, i.e., 𝐠_k^(n)=𝐠_k^(n-1).
* Local uploading:
We assume that only the UAVs that successfully sense their targets upload their local gradients to the server via the uplink
wireless channel (see Section <ref> for more details on the sensing model). Therefore, we consider the partial participating FEEL scenario.
* Global model aggregation and updating:
The server aggregates local gradients and updates the global model as
𝐰^(n+1)=𝐰^(n) - η∑_k∈ K1_k^(n)( u_k)𝐠_k^(n)/∑_k∈ K1_k^(n)( u_k),
where η is the learning rate.
The indicator function 1_k^(n)( u_k) is
1_k^(n)( u_k)=
1, with probability q_s,k( u_k), if UAV k senses its target successfully in round n,
0, with probability 1-q_s,k( u_k), otherwise.
A minimal numerical sketch of this partial-participation update is given right after this list.
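To make the round structure concrete, the following Python sketch simulates steps 2) and 5) for a single round under the probabilistic sensing model above: each UAV participates with probability q_s,k( u_k), and the server averages only the gradients of the participating UAVs. All numerical values (ψ, ζ, altitude, coordinates, gradient dimension) are illustrative placeholders, not the simulation settings used later in the paper.

```python
import numpy as np

def sensing_probability(u, v, psi=11.95, zeta=0.14):
    """Logistic LOS sensing model: probability that a UAV at u senses a target at v."""
    d = np.linalg.norm(u - v)
    theta = np.degrees(np.arcsin(abs(u[2] - v[2]) / d))   # sensing elevation angle [deg]
    return 1.0 / (1.0 + psi * np.exp(-zeta * (theta - psi)))

rng = np.random.default_rng(0)
K, dim, H = 8, 16, 100.0                                  # UAVs, toy gradient dimension, altitude [m]
uav = rng.uniform([-300.0, -300.0, H], [300.0, 300.0, H], size=(K, 3))
tgt = rng.uniform([-300.0, -300.0, 0.0], [300.0, 300.0, 0.0], size=(K, 3))

q_s = np.array([sensing_probability(uav[k], tgt[k]) for k in range(K)])
grads = rng.normal(size=(K, dim))                         # stand-ins for the local gradients g_k^(n)

# Steps 2) and 5): Bernoulli participation and partial-participation averaging.
participate = rng.random(K) < q_s
if participate.any():
    aggregated = grads[participate].mean(axis=0)          # average over successfully sensing UAVs only
    # w_next = w - eta * aggregated                       # global update with learning rate eta
```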
§.§ UAV-server Communication Model
Without loss of generality, we define the 3-dimensional coordinates of the server as s=(x_s,y_s,z_s), where z_s is the altitude of the server.
Next,
we analyze the channel model between UAV k and the server, which can be LOS or non-LOS (NLOS).
The LOS probability in each round n is <cit.>
q_c,k( u_k)=1/1+ψexp(-ζ[θ_c,k( u_k)-ψ]),
where θ_c,k( u_k) is the communication elevation angle. More specifically, θ_c,k( u_k)=180^∘/π×sin^-1|z_u,k-z_s/d_c,k|, where d_c,k= u_k- s is the Euclidean distance between UAV k and the server.
For ease of analysis, we consider that the UAVs are flying at a constant altitude, i.e., z_u,k=H, ∀ k∈ K.
Moreover, the NLOS probability is 1-q_c,k( u_k).
The average channel gain between UAV k and the server is <cit.>
h_k( u_k)= (K_0d_c,k)^-α/η_1 q_c,k( u_k)+η_2 (1-q_c,k( u_k)),
where K_0=4π f_c/c, f_c is the carrier frequency, c is the speed of light, and α is the path loss exponent of the link between the UAV and the server. Also, η_1 and η_2 (η_2 > η_1>1) are the excessive path loss coefficients in LOS and NLOS cases, respectively, which models the shadowing effect and reflection effect <cit.>.
We consider that each UAV is assigned a dedicated sub-channel of bandwidth B_k for uploading.
Suppose that UAV k uploads the gradient with a power of p_c,k>0.
The received rate of the server from UAV k is
[ r_k( u_k,B_k)
=B_klog(1+p_c,k h_k( u_k)/(B_k N_0)), ]
where the bandwidth allocation is limited by the total bandwidth B_c such that ∑_k ∈K B_k = B_c, and N_0 is the noise power spectral density.
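As a companion sketch, the average channel gain and the resulting uplink rate defined above can be evaluated as follows; the LOS probability reuses the same logistic form with the communication elevation angle, the rate is taken in bit/s (log base 2), and all constants are illustrative placeholders rather than values from the paper.

```python
import numpy as np

def uplink_rate(u, s, B, p_c=0.1, N0=1e-20, f_c=2.4e9,
                alpha=2.3, eta1=1.6, eta2=23.0, psi=11.95, zeta=0.14):
    """Average UAV-to-server uplink rate r_k(u_k, B_k) in bit/s (placeholder constants)."""
    c = 3.0e8
    d = np.linalg.norm(u - s)
    theta = np.degrees(np.arcsin(abs(u[2] - s[2]) / d))        # communication elevation angle [deg]
    q_los = 1.0 / (1.0 + psi * np.exp(-zeta * (theta - psi)))  # LOS probability
    K0 = 4.0 * np.pi * f_c / c
    h = (K0 * d) ** (-alpha) / (eta1 * q_los + eta2 * (1.0 - q_los))  # average channel gain
    return B * np.log2(1.0 + p_c * h / (B * N0))

# Example: UAV at 100 m altitude, ~200 m horizontal offset from a server at 10 m, 1 MHz sub-channel.
r = uplink_rate(np.array([200.0, 0.0, 100.0]), np.array([0.0, 0.0, 10.0]), B=1e6)
```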
§.§ UAV-target Sensing Model
The dedicated FMCW signal consisting of multiple up-chirps is used for UAVs' sensing <cit.>.
The sensing signal of UAV k at time t is defined as x_k(t).
We use the primitive-based method <cit.> to model the scattering from the whole human body to UAVs, where the human body is modeled as L primitives.
The scattering along a direct/indirect reflection path is approximated using the superposition of the returns from L body primitives.
The signal received by UAV k at time t is[
UAVs are performing human motion recognition in different places and far apart. Therefore, UAVs can sense their targets through the same sensing bandwidth without interfering with each other. ]
[ y_k(t)
=√(p_s,k(t))×A_0/√(4π)∑_l=1^L√(G_k,l(t))/d_k,l^2(t)exp(-j4π f_c/cd_k,l(t))x_k,l(t-2d_k,l(t)/c) + n_k(t), ]
where p_s,k(t) is the sensing transmission power of UAV k, A_0 is the gain of the antenna, G_k,l(t) is the complex amplitude proportional to the radar cross section (RCS) of the l-th primitive, d_k,l(t) is the distance from the l-th primitive to UAV k, and n_k(t) is the signal due to ground clutter and noise.
Note that we only consider the direct reflection path in (<ref>), since UAVs in aerial space are less susceptible to indirect reflection paths.
§.§ Training Time
Based on the learning procedure in Section <ref>, the latency for each UAV k in training round n consists of the following three parts[We ignore the global download time of step 1) in Section <ref> because the server can use the entire frequency band to broadcast the global model to all UAVs. Usually, the server has a more considerable transmit power.]:
* Sensing time: We consider that each UAV takes time T_0 to generate a sample (see Section <ref> for details). The number of samples generated by UAV k is δ_k. Therefore, the sensing time of UAV k in round n is
T_s,k^(n)(δ_k)=T_0·δ_k.
* Local computation time: The indicator function 1_k^(n)( u_k) defined in (<ref>) indicates whether UAV k successfully senses the target. Since UAVs partly participate in FEEL, the local calculation time of UAV k in round n is
T_cp,k^(n)(δ_k, u_k)=1_k^(n)( u_k) ·δ_k ξ/f_cpu,
where ξ is the CPU cycles required to execute one sample in the local gradient computing, and f_cpu is each UAV's CPU frequency (cycles/s).
* Local upload time: Each UAV that successfully senses the target needs to upload its gradient to the server. We consider the data size of the gradient to be a constant, defined as D_0.
The local upload time of UAV k in round n is
T_cm,k^(n)( u_k,B_k)= 1_k^(n)( u_k) ·D_0/r_k( u_k,B_k).
We consider synchronous FL on the server side, i.e., aggregation happens only after the local gradients from all participating UAVs are received. Then, the latency for round n is[For ease of illustration, we denote {δ_k, ∀ k∈ K}, { u_k, ∀ k∈ K}, and {B_k, ∀ k∈ K} as {δ_k}, { u_k}, and {B_k}, respectively.]
T^(n)({δ_k}, { u_k}, {B_k})= max_k∈ K{T_s,k^(n)(δ_k)+T_cp,k^(n)(δ_k, u_k)+T_cm,k^(n)( u_k,B_k)}.
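A minimal sketch of this per-round latency, for one realization of the participation indicators, is given below; the batch sizes, uplink rates, and hardware constants are assumed to be given, and the default values are placeholders only.

```python
import numpy as np

def round_latency(delta, rates, participate, T0=0.5, xi=1e7, f_cpu=1e9, D0=1.6e8):
    """Per-round latency max_k {T_s,k + T_cp,k + T_cm,k} for one realization of the indicators."""
    T_s  = T0 * delta                          # sensing time, proportional to the batch size
    T_cp = participate * delta * xi / f_cpu    # local SGD time (only if the UAV sensed successfully)
    T_cm = participate * D0 / rates            # gradient upload time (D0: model size, placeholder)
    return float(np.max(T_s + T_cp + T_cm))
```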
§ SENSING ANALYSIS
In this section, we analyze the effect of UAV deployment on the quality of sensing samples.
The micro-Doppler signature is a characteristic of human motion.
We aim to verify our hypothesis that there exists a threshold value of the sensing elevation angle that can guarantee a satisfactory quality of the data samples.
A common technique for micro-Doppler analysis is time-frequency representation, such as spectrograms<cit.>. Therefore, the received signal in (<ref>) requires further preprocessing to obtain the micro-Doppler signature.
Specifically, each spectrogram is generated from the received signal over a time period T_0=MT_p, where M is the number of chirps and T_p is the duration of each chirp.
Since each UAV k can control the number of collected samples (i.e., spectrograms) δ_k, as defined in Section <ref>, UAV k needs to sense the target for a continuous duration of δ_kT_0.
The received raw signal in (<ref>) first needs to be sampled with rate f_s.
After that, the signal in the ρ-th sensing duration can be reconstructed as a 2D sensing data matrix 𝐘_k(ρ)∈ℂ^f_sT_p× M <cit.>, i.e.,
[ 𝐘_k(ρ)
=√(p_s,k(ρ))×A_0/√(4π)∑_l=1^L√(G_k,l(ρ))/d_k,l^2(ρ)exp(-j4π f_c/cd_k,l(ρ))𝐗_k,l(ρ) + 𝐍_k(ρ). ]
Sensing power p_s,k(ρ) and distance d_k,l(ρ) will remain constant for each ρ-th duration, where ρ∈ [ 1, 2, …, δ_k ].
The element of row r and column m of matrix 𝐘_k(ρ) is [𝐘_k(ρ)]_r,m=y_k((ρ-1)T_0
+(m-1)T_p+r/f_s), [𝐗_k,l(ρ)]_r,m=x_k,l((ρ-1)T_0+(m-1)T_p+r/f_s-2d_k,l(ρ)/c), and [𝐍_k(ρ)]_r,m=n_k((ρ-1)T_0+(m-1)T_p+r/f_s), where r∈ [1,2, …, f_sT_p] represents the sampled signal index, and m∈ [1, 2, …, M] represents the chirp index.
Next, we apply the short-time Fourier transform (STFT) with a sliding window function o[w] of length W to generate the range-Doppler-time (RDT) cube, in order to obtain the time-frequency features of 𝐘_k(ρ). The RDT cube of 𝐘_k(ρ) after STFT is given by <cit.>
[ 𝐘_k(ρ,f)
=√(p_s,k(ρ))×A_0/√(4π)∑_l=1^L√(G_k,l(ρ))/d_k,l^2(ρ)exp(-j4π f_c/cd_k,l(ρ))𝐗_k,l(ρ,f) + 𝐍_k(ρ,f). ]
The element of 𝐗_k,l(ρ,f) is
[ [𝐗_k,l(ρ,f)]_r,m
=∑_w=0^Wx_k,l((ρ-1)T_0+(m(W-Q)-w-1)T_p+r/f_s-2d_k,l(ρ)/c)exp(-j2π fw/W)o(w), ]
where Q is the number of overlapping points, f∈[0,W-1] and m∈[1,M-Q/W-Q] are the frequency and temporal shift index, respectively.
We non-coherently integrate 𝐗_k,l(ρ,f) over all the range bins, and obtain the integrated STFT as [𝐗_k,l(ρ,f)]_m=∑_r=1^f_sT_p[𝐗_k,l(ρ,f)]_r,m.
Moreover, the element definition of 𝐍_k(ρ,f) is similar to 𝐗_k,l(ρ,f).
Then, we have
[ 𝐘_k(ρ,f)
=√(p_s,k(ρ))×A_0/√(4π)∑_l=1^L√(G_k,l(ρ))/d_k,l^2(ρ)exp(-j4π f_c/cd_k,l(ρ))𝐗_k,l(ρ,f)_Useful information
+ 𝐍_k(ρ,f)_Interference, ]
where the clutter and noise related term 𝐍_k(ρ,f) represents the interference, and the rest represents useful information in the spectrogram.
Impact of UAV position { u_k} on spectrogram quality.
From (<ref>), we can see that the quality of spectrograms improves as we reduce d_k,l(ρ), which is the distance between UAV k and the l-th primitive of the human body.
Since each UAV k flies at a constant altitude, distance d_k,l(ρ) is minimized when the UAV hovers over the target, which leads to the best quality of the spectrograms.
The experimental results also validate this in Fig. <ref> and Fig. <ref>.
For ease of illustration, we show the relationship between the spectrogram and sensing elevation angle θ_s,k( u_k)∈[0^∘, 90^∘]. Note that higher θ_s,k( u_k) leads to smaller d_k,l(ρ).
We apply the wireless sensing simulator in <cit.> to simulate various human motions and generate five human motion spectrograms as shown in Fig. <ref>. We can observe that when θ_s,k( u_k)=30^∘, the quality of the spectrogram becomes the worst.
Moreover, we use the peak-signal-to-noise ratio (PSNR) index[
PSNR is a metric measuring the perceptual difference between two similar images, widely used in image compression and computer vision. It is a complete reference metric that requires two images, i.e., a reference image and a processed image.]
to visualize the quality of the spectrogram <cit.>.
Fig. <ref> shows the average PSNR versus θ_s,k( u_k), and a fitting curve that is very close to actual PSNR.
We can see that PSNR increases with θ_s,k( u_k) and reaches the highest value at θ_s,k( u_k)=90^∘.
Since the spectrograms reach a satisfactory level of quality when PSNR≥ 30 dB <cit.>, we can set a threshold for the sensing elevation angle, e.g., θ_s,k( u_k)≥θ_0.
As a result, each UAV can generate data samples (spectrograms) of satisfactory quality.
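The PSNR index used above can be computed directly from a reference spectrogram and a degraded one; a minimal sketch, assuming both spectrograms are stored as real-valued 2-D arrays of equal shape, is:

```python
import numpy as np

def psnr(reference, test):
    """Peak signal-to-noise ratio (dB) between two spectrograms of equal shape."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0.0:
        return np.inf
    peak = np.max(np.abs(reference))           # peak value of the reference spectrogram
    return 10.0 * np.log10(peak ** 2 / mse)

# A UAV position is then deemed acceptable when the average PSNR of its spectrograms
# is at least 30 dB, i.e. when theta_s,k(u_k) >= theta_0.
```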
§ PROBLEM FORMULATION
In this section, we aim to speed up the training process under a specific optimality gap of the loss function.
In Sections <ref> and <ref>, we analyze the convergence of the UAV-assisted FEEL training process and derive an upper bound on the loss function.
In Section <ref>, we formulate the training time minimization problem through bandwidth allocation, batch size design, and UAV position design, which balances the sensing, computation, and communication during training.
§.§ Assumptions
We consider general smooth convex learning problems with the following commonly adopted assumptions <cit.>.
(Smoothness).
Each local function F_k(𝐰) is L-smooth, i.e., its gradient ∇ F_k is Lipschitz continuous with constant L:
∀𝐰_i and 𝐰_j, F_k(𝐰_i)≤ F_k(𝐰_j)+(𝐰_i-𝐰_j)^T∇ F_k(𝐰_j)+L/2‖𝐰_i-𝐰_j‖^2, ∀ k∈ K.
(Strong convexity).
Each local function F_k(𝐰) is strongly convex in that there exists a constant μ>0 such that
∀𝐰_i and 𝐰_j, F_k(𝐰_i)≥ F_k(𝐰_j)+(𝐰_i-𝐰_j)^T∇ F_k(𝐰_j)+μ/2‖𝐰_i-𝐰_j‖^2, ∀ k∈ K.
(Unbiasedness and bounded variance of local gradients).
The mean and variance of stochastic gradient 𝐠^(n)_k of local loss function F_k(𝐰), ∀ k∈ K, satisfy that
[ 𝔼[𝐠_k^(n)]=∇ F_k(𝐰^(n)),
𝔼[‖𝐠_k^(n)-∇ F_k(𝐰^(n))‖^2]≤σ_k^2/δ_k, ]
where δ_k is the batch size in calculating gradient 𝐠_k^(n).
(Bounded data variance).
[ 𝔼[‖∇ F_k(𝐰^(n))-∇ F(𝐰^(n))‖^2]
≤Λ_k^2, ]
which measures the heterogeneity of local datasets.
§.§ Theoretical results
Under Assumptions <ref>-<ref>, the following theorem establishes the convergence rate.
Consider the UAV-assisted FEEL system with a fixed learning rate η, satisfying
[ 0<η<1/4L. ]
The expected optimality gap of the loss function after n rounds is upper bounded by
[ F(𝐰^(n)) - F(𝐰_*)≤ G·(1-(1-μη(1-4Lη))^n)/(μη(1-4Lη))+(1-μη(1-4Lη))^n(F(𝐰^0)-F(𝐰_*)), ]
with
[ G=η∑_k∈ Kα_k^2 ∑_k∈ K(σ_k^2/δ_k+Λ_k^2)
+ Lη^2∑_k∈ Kβ_k (σ_k^2/δ_k+2Λ_k^2)
+2Lη^2 ∑_k∈ Kγ_k ((q_s,k( u_k)-q_s)^2+q_s^2)Λ_k^2, ]
where 𝐰_* is the optimal model as 𝐰_*=min_𝐰F(𝐰), each α_k, β_k, γ_k, ∀ k∈ K is a function of {q_s,k( u_k), ∀ k∈ K}, as shown in equations (<ref>), (<ref>), and (<ref>) in Appendix <ref>, and q_s=1/K∑_k∈ Kq_s,k( u_k) is the average successful sensing probability of UAVs.
We represent the right hand side of (<ref>) as Φ({δ_k},{ u_k}, n), and we have F(𝐰^(n))-F(𝐰_*)≤Φ({δ_k},{ u_k}, n).
See Appendix <ref>.
It can be seen from (<ref>) that when the UAVs have different successful sensing probabilities q_s,k( u_k), the negative effects caused by data heterogeneity 2Lη^2 ∑_k∈ Kγ_k ((q_s,k( u_k)-q_s)^2+q_s^2)Λ_k^2 will be amplified.
However, these negative effects can be mitigated if UAVs have uniform successful sensing probabilities q_s,k( u_k)=q_s, ∀ k∈ K.
In this case, the training process can still converge to a proper stationary solution.
Next, we derive the following corollary.
Under learning rate constraint (<ref>), when UAVs have the uniform successful sensing probability (i.e., q_s,k( u_k) = q_s, ∀ k ∈K), we can obtain an upper bound of G in (<ref>) as
[ G≤η/K∑_k∈ K(σ_k^2/δ_k+Λ_k^2)
+2Lη^2/K^2 χ q_s∑_k∈ K(σ_k^2/δ_k+2Λ_k^2)
+4Lη^2/K q_s^K-2∑_k∈ KΛ_k^2, ]
where χ=1-(1-q_s,min)^K, and q_s,min=min_ u_kq_s,k( u_k) with constraint θ_s,k( u_k)≥θ_0, ∀ k ∈K.
See Appendix <ref>.
From Theorem <ref> and Corollary <ref>, we have two important insights:
1) From (<ref>) and (<ref>), it can be seen that the convergence rate is affected by the batch size {δ_k} and the successful sensing probability q_s.
Specifically, we observe that a decrease in {δ_k} slows down the training convergence in (<ref>), because fewer data samples are used for each training round.
Moreover, a smaller q_s also deteriorates the training convergence in (<ref>).
Since only UAVs that successfully sense the targets can participate in the training, a smaller q_s means a lower probability that each UAV participates in a given round.
Therefore, the left hand side of (<ref>) takes more training rounds to converge.
2) From (<ref>), the loss function eventually converges as n→∞, and F(𝐰^(n)) - F(𝐰_*) will approach G/(μη(1-4Lη)) instead of diminishing to zero.
This implies that partial participation of UAVs has a negative impact on the training process, and causes the loss function to converge to a biased solution.
Furthermore, the biased solution G/(μη(1-4Lη)) strongly depends on q_s and {δ_k}, since it contains G restricted by (<ref>).
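For later use in the optimization, the optimality-gap bound of Theorem <ref> combined with the uniform-probability bound of Corollary <ref> can be evaluated numerically. The sketch below assumes all constants (L, μ, η, σ_k², Λ_k², and λ=F(𝐰^0)-F(𝐰_*)) are known or estimated, which in practice requires additional profiling.

```python
import numpy as np

def G_upper_bound(delta, q_s, sigma2, Lambda2, eta, L, q_s_min):
    """Upper bound on G for a uniform successful sensing probability (Corollary 1)."""
    K = len(delta)
    chi = 1.0 - (1.0 - q_s_min) ** K
    term1 = eta / K * np.sum(sigma2 / delta + Lambda2)
    term2 = 2.0 * L * eta**2 / (K**2 * chi * q_s) * np.sum(sigma2 / delta + 2.0 * Lambda2)
    term3 = 4.0 * L * eta**2 / K * q_s ** (K - 2) * np.sum(Lambda2)
    return term1 + term2 + term3

def optimality_gap(n, G, mu, eta, L, lam):
    """Right-hand side of the Theorem 1 bound after n rounds (requires eta < 1/(4L))."""
    A = 1.0 - mu * eta * (1.0 - 4.0 * L * eta)      # contraction factor, 0 < A < 1
    return G * (1.0 - A**n) / (1.0 - A) + A**n * lam
```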
§.§ Problem Formulation
As probabilistic sensing is inevitable in delay-constrained wireless sensing systems, our goal is to investigate its impact on the training time of UAV-assisted FEEL.
From (<ref>), we know that the successful sensing probability is determined by UAV position.
Therefore, we formulate a resource allocation problem to minimize the total training time by optimizing the batch size {δ_k}, UAV position { u_k}, and bandwidth allocation {B_k}.
P1: min_{δ_k},{ u_k},{B_k}, N, T_max N· T_max
s.t. Φ({δ_k},{ u_k}, N)≤ϵ,
𝔼[T^(n)({δ_k}, { u_k}, {B_k})]≤ T_max, ∀ n∈ N_ϵ,
q_s,k( u_k)=q_s,k'( u_k'), ∀ k, k'∈ K,
θ_s,k( u_k)≥θ_0, ∀ k∈ K,
δ_k∈ℤ^+, ∀ k∈ K,
∑_k∈ K B_k=B_c.
The objective function N· T_max is the total training time,
where N is the number of training rounds required to guarantee the ϵ-optimality gap, i.e., F(𝐰^(N))-F(𝐰_*)≤Φ({δ_k},{ u_k}, N)≤ϵ in (<ref>), and T_max is the per round latency.
Constraint in (<ref>) indicates that the average training time per round cannot exceed the delay requirement T_max, and N_ϵ={1,…,N} is the set of training rounds.
As suggested by Corollary <ref>, it is crucial to maintain a uniform successful sensing probability across the UAVs, so we enforce the constraints in (<ref>).
The constraint on sensing quality (<ref>) has been discussed in Section <ref>.
We consider that the batch size {δ_k} are integer variables in (<ref>).
Moreover, equation (<ref>) constrains the total bandwidth allocated to all UAVs as B_c in Section <ref>.
The objective function and constraints (<ref>) and (<ref>) in P1 are complicated by the coupling of {δ_k}, { u_k}, and {B_k}.
Furthermore, the batch size δ_k of each UAV k can only take integer values.
Therefore, P1 is a mixed-integer non-convex problem and is thus very challenging to solve optimally.
To deal with this difficulty, we will utilize the alternating optimization technique in Section <ref> to solve P1 with any given N, in which the three decision variables are optimized alternately.
Moreover, we adopt a one-dimension search to find N to achieve the minimum objective value, as summarized in Algorithm <ref> of Section <ref>.
Next, we show how the decision variables {δ_k}, { u_k} and {B_k} collectively affect the objective function N· T_max as follows.
The batch size {δ_k} balances the number of training rounds N and the per round latency T_max.
For UAVs that successfully sense targets, a larger {δ_k} indicates longer sensing and local computation time, since they are proportional to {δ_k} defined in (<ref>) and (<ref>).
Thus, the T_max, including sensing, computation and communication time, will also increase.
However, large batch sizes can speed up the convergence of FEEL and reduce N.
Therefore, the batch size {δ_k} must be properly designed to minimize the total training time.
The UAV position { u_k} also balances the number of training rounds N and the per round latency T_max.
Specifically, when UAV k is close to the target, the sensing elevation angle θ_s,k( u_k) will increase.
Since the successful sensing probability q_s,k( u_k) increases with θ_s,k( u_k), the UAV also have a higher probability of participating in the training. This will accelerate the convergence of FEEL and reduce N.
However, in this case, UAVs may be far away from the server, and the bandwidth allocated to each UAV will be reduced due to more UAVs participating in the training.
These factors will slow the uplink transmission rate and increase T_max.
Therefore, the UAV position { u_k} also needs to be properly designed to minimize the total training time.
The bandwidth allocation {B_k} can minimize the per round latency T_max.
Based on (<ref>) and (<ref>), we have that sensing time T_s,k^(n)(δ_k) and local computation time T_cp,k^(n)(δ_k, u_k) increase with δ_k.
Since each UAV can choose a different number of generated samples δ_k, T_s,k^(n)(δ_k) and T_cp,k^(n)(δ_k, u_k) will differ across UAVs.
We therefore introduce bandwidth allocation so that each UAV k can control its uplink transmission time T_cm,k^(n)( u_k, B_k) by adjusting B_k, in order to minimize T_max.
As shown in Fig. <ref>, UAV 2 spends less time on sensing and local computing, while UAV K spends more time on them. That is, T_s,2^(n)(δ_2) + T_cp,2^(n)(δ_2, u_2) < T_s,K^(n)(δ_K) + T_cp,K^(n)(δ_K, u_K).
In this way, we can allocate less bandwidth to UAV 2 to achieve large T_cm,2^(n)( u_2, B_2) and allocate more bandwidth to UAV K to achieve small T_cm,K^(n)( u_K, B_K), so that the total latency of UAV 2 is equal to that of UAV K.
Thus, adjusting {B_k} can minimize T_max.
§ TRAINING TIME MINIMIZATION
In this section, we adopt the alternating optimization technique to solve P1.
We first divide P1 into three subproblems, and then optimize {δ_k}, { u_k}, and {B_k} in Section <ref>, <ref>, and <ref>, respectively.
Overall, we propose the bandwidth, batch size, and position optimization (BBPO) scheme, which can compute a suboptimal solution of P1 efficiently in Section <ref>.
§.§ Bandwidth Allocation Optimization with Fixed Batch Size and Position
We consider the subproblem of P1 for optimizing the bandwidth allocation {B_k} by assuming that batch size {δ_k}, UAV position { u_k}, and training round N are fixed.
It follows from (<ref>) that the successful sensing probability {q_s,k( u_k)} are given. Given that {δ_k} and { u_k} satisfy the constraints (<ref>), and (<ref>)-(<ref>), P1 reduces to
P2: min_{B_k} T_max
s.t.
T_0·δ_k+1_k^(n)( u_k) ·δ_k ξ/f_cpu+1_k^(n)( u_k) ·D_0/r_k( u_k,B_k)≤ T_max, ∀ k∈ K, ∀ n∈ N_ϵ,
constraint (<ref>),
where (<ref>) is the same as (<ref>).
Specifically, due to (<ref>), constraints (<ref>) can be simplified to
T_0·δ_k+q_s ·δ_k ξ/f_cpu+q_s ·D_0/r_k( u_k,B_k)≤ T_max, ∀ k∈ K,
where q_s,k( u_k)=q_s, ∀ k∈ K.
Since the left hand sides of the constraints in (<ref>) are convex with respect to {B_k}, P2 is a standard convex problem.
Therefore, it can be efficiently solved by standard convex optimization tools such as CVXPY <cit.>.
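Under the stated assumptions (uniform q_s, fixed {δ_k} and { u_k}), P2 can be posed directly in CVXPY after a DCP-friendly rearrangement of (<ref>): moving the upload term to the right-hand side gives r_k( u_k,B_k)≥ q_s D_0/(T_max-T_0δ_k-q_sδ_kξ/f_cpu), and B_k·log(1+a_k/B_k) with a_k=p_c,k h_k( u_k)/N_0 is expressed through the relative-entropy atom. The sketch below uses natural-log rates (scale D_0 by ln 2 if it is specified in bits against a log2 rate), and all numerical constants are illustrative placeholders.

```python
import cvxpy as cp
import numpy as np

K = 8
B_c = 10e6                                    # total uplink bandwidth [Hz] (placeholder)
q_s = 0.8                                     # uniform successful sensing probability (given by positions)
delta = np.full(K, 64.0)                      # batch sizes, fixed in P2
T0, xi, f_cpu, D0 = 0.5, 1e7, 1e9, 1.6e8      # sensing/computation/model-size constants (placeholders)
a = 1e6 * np.linspace(0.5, 2.0, K)            # a_k = p_{c,k} h_k(u_k) / N_0, given the UAV positions

B = cp.Variable(K, pos=True)
T_max = cp.Variable()
rates = [-cp.rel_entr(B[k], B[k] + a[k]) for k in range(K)]   # B_k*ln(1 + a_k/B_k), concave in B_k

cons = [cp.sum(B) == B_c]
for k in range(K):
    slack = T_max - T0 * delta[k] - q_s * delta[k] * xi / f_cpu   # time budget left for uploading
    cons.append(rates[k] >= q_s * D0 * cp.inv_pos(slack))         # equivalent to the latency constraint

prob = cp.Problem(cp.Minimize(T_max), cons)
prob.solve()
```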
§.§ Batch Size Optimization with Fixed Position and Bandwidth Allocation
We consider the subproblem of P1 for optimizing batch size {δ_k} by assuming that { u_k}, {B_k}, and training round N are fixed.
Since the successful sensing probability q_s,k( u_k)=q_s, ∀ k∈ K is determined by { u_k}, parameter q_s is also given.
In this case, given that {u_k} and {B_k} satisfy the constraints (<ref>), (<ref>), and (<ref>), P1 reduces to
P3: min_{δ_k} T_max
s.t. δ_k>0, ∀ k∈ K,
constraints (<ref>), (<ref>),
where we relax the value of {δ_k} from integer in (<ref>) to real number δ_k>0, ∀ k∈ K,
and (<ref>) is simplified as (<ref>).
Next, we characterize the optimal solution of P3 as follows.
At the optimal {δ_k^*} of P3, equality constraint in (<ref>) holds. That is,
T_0·δ_k^*+q_s ·δ_k^* ξ/f_cpu+q_s ·D_0/r_k( u_k,B_k) = T_max, ∀ k∈ K.
Therefore, each δ_k^* can be represented as a function of T_max, i.e.,
δ_k^*(T_max)=T_max-q_sD_0/r_k( u_k,B_k)/T_0+q_s ξ/f_cpu.
According to (<ref>), as Φ({δ_k(T_max)},{ u_k}, N) increases, T_max decreases.
To minimize T_max, we consider Φ({δ_k(T_max)},{ u_k}, N)=ϵ in (<ref>).
Based on (<ref>), (<ref>), and (<ref>), we have
η/K∑_k∈ Kσ_k^2/δ_k^*(T_max)
+2Lη^2/K^2 χ q_s∑_k∈ Kσ_k^2/δ_k^*(T_max)
+J
= (1-A)(ϵ-λ A^N)/1-A^N,
where J=(η/K+4Lη^2/K^2 χ q_s+4Lη^2/K q_s^K-2)∑_k∈ KΛ_k^2, A=1-μη(1-4Lη), and λ=F(𝐰^0)-F(𝐰_*).
It can be transformed into a polynomial equation with respect to T_max and solved efficiently.
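Numerically, T_max can also be recovered from the condition Φ=ϵ by a bracketed scalar root search (rather than the polynomial rearrangement mentioned above), since the left-hand side decreases monotonically in T_max. The sketch below assumes the constants of the convergence bound are known, as in the earlier bound-evaluation sketch, and that the target gap ϵ is achievable for the given N (otherwise the bracket contains no sign change); the resulting batch sizes are then rounded to integers.

```python
import numpy as np
from scipy.optimize import brentq

def solve_Tmax(eps, N, rates, sigma2, Lambda2, q_s, T0, xi, f_cpu, D0,
               eta, L, mu, lam, q_s_min):
    """Find T_max such that the bound Phi({delta_k*(T_max)}, N) equals the target gap eps."""
    K = len(rates)
    A = 1.0 - mu * eta * (1.0 - 4.0 * L * eta)
    chi = 1.0 - (1.0 - q_s_min) ** K
    J = (eta / K + 4.0 * L * eta**2 / (K**2 * chi * q_s)
         + 4.0 * L * eta**2 / K * q_s ** (K - 2)) * np.sum(Lambda2)
    rhs = (1.0 - A) * (eps - lam * A**N) / (1.0 - A**N)

    def residual(T_max):
        delta = (T_max - q_s * D0 / rates) / (T0 + q_s * xi / f_cpu)   # optimal batch sizes (Lemma above)
        return (eta / K + 2.0 * L * eta**2 / (K**2 * chi * q_s)) * np.sum(sigma2 / delta) + J - rhs

    T_lo = np.max(q_s * D0 / rates) * (1.0 + 1e-6)   # smallest T_max giving positive batch sizes
    T_max = brentq(residual, T_lo, 1e6)              # upper bracket is an arbitrary large placeholder
    delta_star = np.floor((T_max - q_s * D0 / rates) / (T0 + q_s * xi / f_cpu))  # integer batch sizes
    return T_max, delta_star
```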
§.§ Position Optimization with Fixed Batch Size and Bandwidth Allocation
We consider the subproblem of P1 for optimizing UAV position { u_k} by assuming that {δ_k}, {B_k}, and training round N are fixed.
In this case, given that {δ_k} and {B_k} satisfy the constraints (<ref>) and (<ref>), P1 reduces to
P4: min_{ u_k} T_max
s.t. constraints (<ref>), (<ref>), (<ref>), (<ref>).
Constraints (<ref>) in P1 can be simplified as (<ref>).
Below, we aim to transform the constraints on { u_k} of P4 into constraints on q_s, as this will simplify our problem optimization. Since (<ref>) indicates q_s,k( u_k)=q_s, ∀ k∈ K, we omit the transformation of (<ref>).
§.§.§ Constraint (<ref>)
Based on (<ref>), (<ref>), and (<ref>), we have
η/K∑_k∈ K(σ_k^2/δ_k+Λ_k^2)
+2Lη^2/K^2 χ q_s∑_k∈ K(σ_k^2/δ_k+2Λ_k^2)
+4Lη^2/K q_s^K-2∑_k∈ KΛ_k^2
≤(1-A)(ϵ-λ A^N)/1-A^N.
Since q_s≥ q_s,min, (<ref>) can be rewritten as
q_s≥2Lη^2/K^2 χ∑_k∈ K(σ_k^2/δ_k+2Λ_k^2)
/[(1-A)(ϵ-λ A^N)/1-A^N-η/K∑_k∈ K(σ_k^2/δ_k+Λ_k^2)
-4Lη^2/K q_s,min^K-2∑_k∈ KΛ_k^2].
§.§.§ Constraint (<ref>)
According to (<ref>), the successful sensing probability increases as the sensing elevation angle θ_s,k( u_k) increases.
Based on (<ref>), we have
q_s≥1/1+ψexp(-ζ[θ_0-ψ]).
§.§.§ Relationship between q_s and r_k( u_k,B_k) in (<ref>)
After the above constraint transformations, we can see that only r_k( u_k,B_k) in P4 contains u_k.
According to the expression of r_k( u_k,B_k) in (<ref>), we have the following proposition.
For any given q_s and B_k, the optimal position u_k^* = (x_u,k^*, y_u,k^*, H) for each UAV k to maximize r_k( u_k,B_k) is derived from
x_u,k^* = x_v,k-H(x_v,k-x_s)/(tanθ_s‖ s- v_k‖) and y_u,k^* = y_v,k-H(y_v,k-y_s)/(tanθ_s‖ s- v_k‖),
where θ_s is the corresponding sensing elevation angle of q_s in (<ref>).
From solid geometry, position u_k^* leads to the minimum UAV-server distance d_c,k, and the maximum communication elevation angle θ_c,k. Due to (<ref>), we have the maximum r_k( u_k,B_k).
We mark the optimal position with a red triangle in Fig. <ref>.
According to u_k^* with given q_s in Proposition <ref>, transmission rate r_k( u_k,B_k) can be written as r̂_k(q_s, B_k), which is a monotonically decreasing function of q_s.
From Fig. <ref>, we have the relationship between θ_c,k and θ_s as
θ_c,k= tan^-1(H/(‖ s- v_k‖-H/tanθ_s))×180^∘/π,
which is derived by H/tanθ_c,k+H/tanθ_s=‖ s- v_k‖.
Moreover, the relationship between d_c,k^2 and θ_s is
d_c,k^2 = (‖ s- v_k‖-H/tanθ_s)^2+H^2.
Based on (<ref>), the sensing elevation angle θ_s can be represented as a function of q_s, i.e.,
θ_s = -1/ζ[ln(1/q_s-1)-lnψ]+ψ.
Substituting the above equations into (<ref>), we have r̂_k(q_s, B_k). Also, it decreases monotonically as q_s increases.
Based on Proposition <ref> and Lemma <ref>, P4 can be reformulated as
P4': min_q_s T_max
s.t. T_0·δ_k+q_s ·δ_k ξ/f_cpu+q_s ·D_0/r̂_k(q_s, B_k)≤ T_max, ∀ k∈ K,
0< q_s ≤ 1,
constraints (<ref>), (<ref>).
Since r̂_k(q_s, B_k) is a monotonically decreasing function of q_s, we can achieve the minimum T_max by minimizing q_s from (<ref>).
We first find the minimum q_s^* that satisfies constraints (<ref>), (<ref>), and (<ref>).
Then, based on (<ref>) we have T_max=max_k∈ K{T_0·δ_k+q_s^* ·δ_k ξ/f_cpu+q_s^* ·D_0/r̂_k(q_s^*, B_k)}.
Finally, the optimal UAV position { u_k^*} can be derived from Proposition <ref>.
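Once the minimum feasible q_s^* is found, the corresponding sensing elevation angle and hovering position of each UAV follow in closed form from (<ref>) and Proposition <ref>; a minimal sketch with placeholder logistic-model constants and illustrative coordinates:

```python
import numpy as np

def uav_position(q_s, v, s, H, psi=11.95, zeta=0.14):
    """Hovering position u_k^* for target v, server s, altitude H and sensing probability q_s."""
    theta_s = psi - (1.0 / zeta) * (np.log(1.0 / q_s - 1.0) - np.log(psi))   # elevation angle [deg]
    shift = H / (np.tan(np.radians(theta_s)) * np.linalg.norm(s - v))        # fraction of the target-server offset
    return np.array([v[0] - shift * (v[0] - s[0]), v[1] - shift * (v[1] - s[1]), H])

u_star = uav_position(q_s=0.85, v=np.array([150.0, 80.0, 0.0]),
                      s=np.array([0.0, 0.0, 10.0]), H=100.0)
```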
§.§ Iterative Bandwidth, Batch Size, and Position Optimization (BBPO)
Based on the solutions of the three subproblems in the above sections, we propose the iterative algorithm for P1, summarized in Algorithm <ref>[The parameters A and λ are defined in (<ref>).
The minimum training round N_min is derived by (<ref>) with constraint λ(1-A)-G_max>0, where G_max is derived by (<ref>) with q_s=1 and δ_k=δ_max, ∀ k∈ K. The maximum training round N_max depends on the maximum tolerable training time and is thus a given parameter.].
We first employ a one-dimension search for N. With each given N, we solve sub-problems P2, P3, and P4 alternately.
Since N is searched linearly and P2-P4 can be efficiently solved, Algorithm <ref> can efficiently solve P1.
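Schematically, the outer search over N and the inner alternating loop could be organized as follows; this is a sketch rather than the paper's Algorithm <ref> verbatim, and solve_P2, solve_P3, solve_P4, and init are user-supplied callables standing in for the three subproblem solvers sketched above and a feasibility initializer (these names are chosen here for illustration only).

```python
def bbpo(solve_P2, solve_P3, solve_P4, init, N_min, N_max, tol=1e-3, max_iters=20):
    """Outer search over N with alternating optimization of bandwidth, batch size, and position."""
    best_time, best_sol = float("inf"), None
    for N in range(N_min, N_max + 1):
        B, delta, u = init(N)                     # feasible starting point for this N
        T_prev = float("inf")
        T_max = T_prev
        for _ in range(max_iters):
            B, T_max = solve_P2(delta, u, N)      # bandwidth step (convex program, e.g. CVXPY)
            delta, T_max = solve_P3(B, u, N)      # batch-size step (scalar root search)
            u, T_max = solve_P4(B, delta, N)      # position step (closed form via q_s^*)
            if abs(T_prev - T_max) < tol:         # stop once the per-round latency stabilizes
                break
            T_prev = T_max
        if N * T_max < best_time:                 # keep the N that minimizes total training time
            best_time, best_sol = N * T_max, (N, B, delta, u)
    return best_time, best_sol
```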
§ SIMULATION RESULTS
In this section, we present the numerical results to evaluate the performance of our proposed BBPO scheme.
We first describe the basic simulation settings in Section <ref>. Then, we discuss the baseline schemes in Section <ref>. We demonstrate and compare the BBPO scheme's performance with the baseline schemes in Section <ref>.
§.§ Simulation Settings
§.§.§ UAV-assisted FEEL system
We consider a UAV-assisted federated learning system with an edge server and K=8 ISAC UAVs.
The UAVs are randomly distributed within a circular area of radius 300 meters and are placed at geometrically separated positions for sensing.
Moreover, we define the minimum sensing elevation angle as θ_0=70^∘, corresponding to PSNR =30 dB in Fig. <ref>. Other simulation parameters are listed in Table <ref>.
§.§.§ Human motion recognition
We apply the wireless sensing simulator in <cit.> to simulate various human motions and generate human motion datasets. In the simulation, we aim to identify five different human motions, i.e., child walking, child pacing, adult walking, adult pacing, and standing <cit.>.
Examples of data samples of each human motion are shown in Fig. <ref>.
§.§.§ Learning model
We apply widely used ResNet-10 (4,900,677 model parameters) with batch normalization as the classifier model. The learning rate is set to η=0.03.
§.§ Baseline Schemes
* Baseline 1: (Det-UAVposition) This scheme considers that the UAV position { u_k} is given. The batch size {δ_k} and bandwidth {B_k} are optimized as discussed in Section <ref> and Section <ref> in our proposed scheme. By comparing with this baseline, we evaluate the validity of the UAV position design in our proposed scheme.
* Baseline 2: (Eq-Bandwidth) This scheme uses the uniform bandwidth B_k = B_c/K for uplink transmission in Section <ref>. The batch size {δ_k} and UAV position { u_k} are optimized as discussed in Section <ref> and Section <ref> in our proposed scheme. By comparing with this baseline, we evaluate the validity of bandwidth allocation in our proposed scheme.
* Baseline 3: (Eq-Batchsize) This scheme considers the uniform batch size δ_k = δ, ∀ k∈ K in Section <ref>. The bandwidth {B_k} and UAV position { u_k} are optimized as discussed in Section <ref> and Section <ref> in our proposed scheme. By comparing with this baseline, we evaluate the validity of batch size design in our proposed scheme.
* Baseline 4: (BBPO-Ideal) The only difference between Baseline 4 and the proposed BBPO scheme is that this scheme assumes that all UAVs always successfully sense the targets and that all UAVs participated in each training round. Specifically, we have the successful sensing probability q_s=1 in (<ref>), (<ref>), and (<ref>). The optimization of batch size {δ_k}, UAV position { u_k}, and bandwidth {B_k} is the same as our proposed scheme. This baseline is an ideal situation for sensing.
§.§ Performance Evaluation
§.§.§ Balance effect of decision variables on T_max
In Fig. <ref>, we plot the per round delay, which is the sum of the sensing, computation, and communication time, of each participating UAV under the BBPO scheme.
We can observe that the UAVs with longer sensing time T_s,k^(n)(δ_k) will also have longer computation time T_cp,k^(n)(δ_k, u_k).
This is because T_s,k^(n)(δ_k) and T_cp,k^(n)(δ_k, u_k) are proportional to batch size δ_k.
Moreover, each UAV k can control the uplink transmission time T_cm,k^(n)( u_k, B_k) by adjusting its B_k, to minimize T_max.
This observation confirms our discussion in Remark <ref> (illustrated in Fig. <ref>) that all participating UAVs achieve the same T_max=46.78 s.
§.§.§ Impact of UAV position { u_k} on training performance
In Fig. <ref> and Fig. <ref>, we evaluate the training performance of the proposed BBPO scheme by comparing with Baseline 1 (Det-UAVposition) under different UAV positions.
We use sensing elevation angle θ_s,k( u_k) to reflect different UAV positions in Baseline 1. For instance, Baseline 1 (θ_s,k( u_k)=35^∘) represents that current UAV position { u_k} corresponds to sensing elevation angle θ_s,k( u_k)=35^∘, ∀ k∈ K.
In Fig. <ref>, we plot the training loss versus the total training time.
We can observe that the training loss decreases with training time under the BBPO scheme and Baseline 1, consistent with the analysis in Section <ref>.
It also shows that the BBPO scheme achieves the fastest convergence rate compared to other schemes, and the convergence rate of Baseline 1 (θ_s,k( u_k)=35^∘) is the slowest.
This is because a smaller θ_s,k( u_k) implies a larger distance between the UAV and the sensing target, resulting in poorer-quality (i.e., lower-PSNR) data samples.
It also implies a smaller successful sensing probability q_s,k( u_k), so fewer UAVs participate in training. According to (<ref>), we then need more training rounds N to achieve the specified optimality gap.
However, this does not mean that a larger θ_s,k( u_k) will lead to a better performance. As UAVs move closer to the target but farther away from the server, the uplink transmission time and per round latency T_max will increase.
Since our BBPO scheme accounts for the trade-off between N and T_max, it achieves a faster convergence rate than the other baseline schemes.
In Fig. <ref>, we plot the testing accuracy under two schemes against the total training time. We observe that the BBPO scheme achieves the highest testing accuracy compared to other baseline schemes. The reason is the same as above.
§.§.§ Impact of bandwidth {B_k} and sensing time {δ_k} on training performance
Fig. <ref> and Fig. <ref> show the training performance under four schemes. Compared with Baseline 2 (Eq-Bandwidth) and Baseline 3 (Eq-Batchsize), the proposed BBPO scheme achieves the best convergence rate and testing accuracy performance.
Due to the per round latency constraint (<ref>), we try to adaptively assign different bandwidths to each UAV to minimize T_max under the BBPO scheme.
However, in Baseline 2, all UAVs use a uniform bandwidth, which increases T_max and worsens the training performance.
In Baseline 3, we consider that the UAVs have the uniform batch size δ_k=64 or δ_k=128, ∀ k∈ K; a larger δ_k reduces the number of training rounds N but increases T_max.
Therefore, the BBPO scheme that jointly optimizes {δ_k} and {B_k} outperforms Baseline 2 and Baseline 3 in both Fig. <ref> and Fig. <ref>.
Specifically, in Fig. <ref> under the same training loss ϵ=0.002, the training time of the BBPO scheme is 20% less than Baseline 2, and 33.33% less than Baseline 3 with δ_k=64, ∀ k∈ K.
In Fig. <ref>, to achieve a testing accuracy of 90%, the training time of the BBPO scheme is 30% less than Baseline 2 and 60% less than Baseline 3 with δ_k=64, ∀ k∈ K.
Moreover, it achieves almost the same performance as Baseline 4.
§ CONCLUSION
In this paper, we investigated the UAV deployment design and resource allocation in UAV-assisted federated edge learning (FEEL) for improving training performance.
Specifically, we studied the impact of UAV deployment on sensing quality for human motion recognition. We identified a threshold for the sensing elevation angle that produces satisfactory quality of data samples.
Then, we derived an upper bound on the UAV-assisted FEEL training loss as a function of the successful sensing probability, and showed that the negative impact of data heterogeneity can be reduced if UAVs have a uniform successful sensing probability.
Based on the convergence analysis, we minimized the total training time by jointly optimizing the UAV deployment and integrated sensing, computation, and communication (ISCC) resources.
Then, we applied the alternating optimization technique and decomposed this challenging mixed-integer non-convex problem into three subproblems.
Moreover, we proposed the BBPO scheme to alternately optimize bandwidth, batch size, and position, which efficiently computes suboptimal solutions.
Simulation results showed that our BBPO scheme outperforms other baselines in terms of convergence rate and testing accuracy.
§ PROOF OF THEOREM 1
Based on Assumption 1, the expected loss function F(𝐰^(n+1)) can be expressed as
[ F(𝐰^(n+1))≤F(𝐰^(n))+⟨∇ F(𝐰^(n)), 𝐰^(n+1)-𝐰^(n)⟩+L/2‖𝐰^(n+1)-𝐰^(n)‖^2. ]
We first provide two key lemmas to derive the convergence rate in Section <ref>. Then, the proofs of these lemmas are in Section <ref> and <ref>.
§.§ Convergence Analysis
With Assumption 3 and Assumption 4, it holds that
[ ⟨∇ F(𝐰^(n)), 𝐰^(n+1)-𝐰^(n)⟩≤-η/2‖∇ F(𝐰^(n))‖^2
+η∑_k∈ Kα_k^2·∑_k∈ K(σ_k^2/δ_k+Λ_k^2), ]
where each α_k, ∀ k∈ K is a function of {q_s,k( u_k)}.
Detailed expression of {α_k} is in (<ref>).
With Assumption 3 and Assumption 4, it holds that
[ ‖𝐰^(n+1)-𝐰^(n)‖^2 ≤2η^2(∑_k∈ Kβ_kσ_k^2/δ_k+2 ∑_k∈ Kβ_kΛ_k^2
+2 ∑_k∈ Kγ_k((q_s,k( u_k)-q_s)^2+q_s^2)Λ_k^2)+4η^2 ‖∇ F(𝐰^(n))‖^2, ]
where each β_k, γ_k, ∀ k∈ K is a function of {q_s,k( u_k)}.
Detailed expressions for {β_k} and {γ_k} are contained in (<ref>) and (<ref>) respectively.
By substituting (<ref>) and (<ref>) into (<ref>), we have
F(𝐰^(n+1))
≤F(𝐰^(n))
-(η/2-2Lη^2)‖∇ F(𝐰^(n))‖^2
+η∑_k∈ Kα_k^2·∑_k∈ K(σ_k^2/δ_k+Λ_k^2)
+ Lη^2(∑_k∈ Kβ_kσ_k^2/δ_k+2 ∑_k∈ Kβ_kΛ_k^2+2 ∑_k∈ Kγ_k((q_s,k( u_k)-q_s)^2+q_s^2)Λ_k^2)
≤F(𝐰^(n))
-μη(1-4Lη)(F(𝐰^(n))-F(𝐰_*))
+η∑_k∈ Kα_k^2 ∑_k∈ K(σ_k^2/δ_k+Λ_k^2)
+ Lη^2(∑_k∈ Kβ_kσ_k^2/δ_k+2 ∑_k∈ Kβ_kΛ_k^2+2 ∑_k∈ Kγ_k((q_s,k-q_s)^2+q_s^2)Λ_k^2),
where (<ref>) is due to ‖∇ F(𝐰^(n))‖^2≥ 2μ(F(𝐰^(n))-F(𝐰_*)) based on Assumption 2.
Subtract F(𝐰_*) in both sides of (<ref>), we have
F(𝐰^(n+1))-F(𝐰_*)
≤(1-μη(1-4Lη))(F(𝐰^(n))-F(𝐰_*))
+η∑_k∈ Kα_k^2 ∑_k∈ K(σ_k^2/δ_k+Λ_k^2)
+ Lη^2∑_k∈ Kβ_k(σ_k^2/δ_k+2Λ_k^2)
+2Lη^2 ∑_k∈ Kγ_k((q_s,k( u_k)-q_s)^2+q_s^2)Λ_k^2_≜ G.
Applying (<ref>) recursively, we have
F(𝐰^(n))-F(𝐰_*)≤ G·(1-(1-μη(1-4Lη))^n)/(μη(1-4Lη))+(1-μη(1-4Lη))^n(F(𝐰^0)-F(𝐰_*)).
To guarantee the convergence of the learning process, we have 1-μη(1-4Lη)<1, which leads to 0<η<1/4L.
Thus, Theorem 1 is proved.
§.§ Proof of Lemma <ref>
We first derive the desired expression for the aggregated gradient in each round n as
𝔼[∑_k∈ K1_k^(n)( u_k)𝐠_k^(n)/∑_k∈ K1_k^(n)( u_k) | ∑_k∈ K1_k^(n)( u_k)≠ 0]
=∑_l=1^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l(1_k_1^(n)( u_k_1)=1 ∀ k_1∈ K_1, 1_k_2^(n)( u_k_2)=0 ∀ k_2∈ K_2 | ∑_k∈ K1_k^(n)( u_k)≠0)
1/l∑_k_1∈ K_1𝐠_k_1^(n)
=∑_l=1^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∏_k_1∈ K_1q_s,k_1∏_k_2∈ K_2(1-q_s,k_2)/1-∏_k∈ K(1-q_s,k)·1/l∑_k_1∈ K_1𝐠_k_1^(n)
≜∑_k∈ Kα_k 𝐠_k^(n),
where K_1 is the set of UAVs that successfully sense the target, and K_2 is the set of UAVs that fail to sense the target, and do not participate in round n.
Moreover, each α_k, ∀ k∈ K is a function of {q_s,k( u_k)}. Letting 𝐠_k^(n)=1, ∀ k∈ K, we have ∑_k∈ Kα_k=1.
For the convenience of expression, we use q_s,k_1 and q_s,k_2 to represent q_s,k_1( u_k_1) and q_s,k_1( u_k_2) respectively.
Next, we prove Lemma <ref> as follows
⟨∇ F(𝐰^(n)), 𝐰^(n+1)-𝐰^(n)⟩
=-η⟨∇ F(𝐰^(n)),∑_k∈ Kα_k 𝐠_k^(n)⟩<ref>a
=η/2‖∇ F(𝐰^(n))-∑_k∈ Kα_k 𝐠_k^(n)‖^2
-‖∇ F(𝐰^(n))‖^2
-‖∑_k∈ Kα_k 𝐠_k^(n)‖^2<ref>b
≤η/2‖∇ F(𝐰^(n))-∑_k∈ Kα_k 𝐠_k^(n)‖^2
-‖∇ F(𝐰^(n))‖^2
=η/2‖∇ F(𝐰^(n))-∑_k∈ Kα_k F_k(𝐰^(n))+∑_k∈ Kα_k F_k(𝐰^(n))-∑_k∈ Kα_k 𝐠_k^(n)‖^2-η/2‖∇ F(𝐰^(n))‖^2
≤η‖∑_k∈ Kα_k (∇ F(𝐰^(n))-F_k(𝐰^(n)))‖^2+‖∑_k∈ Kα_k (F_k(𝐰^(n))-𝐠_k^(n))‖^2-η/2‖∇ F(𝐰^(n))‖^2<ref>c
≤η∑_k∈ Kα_k^2 ∑_k∈ K‖∇ F(𝐰^(n))-F_k(𝐰^(n))‖^2
+∑_k∈ K‖ F_k(𝐰^(n))-𝐠_k^(n)‖^2-η/2‖∇ F(𝐰^(n))‖^2<ref>d
≤η∑_k∈ Kα_k^2·∑_k∈ K(Λ_k^2
+σ_k^2/δ_k)
-η/2‖∇ F(𝐰^(n))‖^2,
<ref>e
where (<ref>) is obtained by (<ref>),
(<ref>) follows from the basic identity ⟨𝐚,𝐛⟩=1/2(‖𝐚‖^2+‖𝐛‖^2-‖𝐚-𝐛‖^2),
(<ref>) is due to ‖𝐚+𝐛‖^2≤2(‖𝐚‖^2+‖𝐛‖^2) and ∇ F(𝐰^(n))=∑_k∈ K(α_k∇ F(𝐰^(n))),
(<ref>) is due to the Cauchy-Schwarz Inequality ‖∑_k∈ K𝐚_k 𝐛_k‖^2≤∑_k∈ K‖𝐚_k‖^2+∑_k∈ K‖𝐛_k‖^2,
and inequality (<ref>) is due to Assumption 3 and Assumption 4.
Therefore, we obtain Lemma <ref>.
§.§ Proof of Lemma <ref>
‖𝐰^(n+1)-𝐰^(n)‖^2
=η^2‖∑_k∈ K1_k^(n)( u_k)(𝐠_k^(n)-∇ F_k(𝐰^(n)))/∑_k∈ K1_k^(n)( u_k)
+∑_k∈ K1_k^(n)( u_k)∇ F_k(𝐰^(n))/∑_k∈ K1_k^(n)( u_k)‖^2
≤2η^2‖∑_k∈ K1_k^(n)( u_k)(𝐠_k^(n)-∇ F_k(𝐰^(n)))/∑_k∈ K1_k^(n)( u_k)‖^2_≜ M_1+2η^2‖∑_k∈ K1_k^(n)( u_k)∇ F_k(𝐰^(n))/∑_k∈ K1_k^(n)( u_k)‖^2_≜ M_2,
<ref>a
where (<ref>) is due to ‖𝐚+𝐛‖^2≤2(‖𝐚‖^2+‖𝐛‖^2).
Next we bound M_1 and M_2 as follows.
§.§.§ Bound of M_1
M_1
=∑_k∈ K1_k^(n)( u_k)‖𝐠_k^(n) - ∇ F_k(𝐰^(n))‖^2/(∑_k∈ K1_k^(n)( u_k))^2
+∑_i∈ K∑_j∈ K, j≠ i1_i^(n)( u_i) 1_j^(n)( u_j)(𝐠_i^(n)-∇ F_i(𝐰^(n)))(𝐠_j^(n)-∇ F_j(𝐰^(n)))/(∑_k∈ K1_k^(n)( u_k))^2_=0.
Since we have 𝔼[𝐠_k^(n)]=∇ F_k(𝐰^(n)), ∀ k∈ K, the last term in (<ref>) is 0.
Thus,
M_1
=∑_l=1^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l(1_k_1^(n)( u_k_1)=1 ∀ k_1∈ K_1,1_k_2^(n)( u_k_2)=0 ∀ k_2∈ K_2 | ∑_k∈ K1_k^(n)( u_k)≠0)
1/l^2 ∑_k_1∈ K_1‖𝐠_k^(n)-∇ F_k(𝐰^(n))‖^2
=∑_l=1^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∏_k_1∈ K_1q_s,k_1∏_k_2∈ K_2(1-q_s,k_2)/1-∏_k∈ K(1-q_s,k)·1/l^2∑_k_1∈ K_1‖𝐠_k^(n)-∇ F_k(𝐰^(n))‖^2
≜∑_k∈ Kβ_k ‖𝐠_k^(n)-∇ F_k(𝐰^(n))‖^2
≤∑_k∈ Kβ_kσ_k^2/δ_k,
where each β_k, ∀ k∈ K is a function of {q_s,k},
and (<ref>) is due to 𝔼[‖𝐠_k^(n)-∇ F_k(𝐰^(n))‖^2]≤σ_k^2/δ_k.
§.§.§ Bound of M_2
M_2
=‖∑_k∈ K1_k^(n)( u_k)(∇ F_k(𝐰^(n))-∇ F(𝐰^(n)))+∑_k∈ K1_k^(n)( u_k)∇ F(𝐰^(n))‖^2/(∑_k∈ K1_k^(n)( u_k))^2
≤2‖∑_k∈ K1_k^(n)( u_k)(∇ F_k(𝐰^(n))-∇ F(𝐰^(n)))‖^2/(∑_k∈ K1_k^(n)( u_k))^2+‖∑_k∈ K1_k^(n)( u_k)∇ F(𝐰^(n))‖^2/(∑_k∈ K1_k^(n)( u_k))^2
=2‖∑_k∈ K1_k^(n)( u_k)(∇ F_k(𝐰^(n))-∇ F(𝐰^(n)))‖^2/(∑_k∈ K1_k^(n)( u_k))^2+2(∑_k∈ K1_k^(n)( u_k))^2 ‖∇ F(𝐰^(n))‖^2/(∑_k∈ K1_k^(n)( u_k))^2
=2∑_k∈ K1_k^(n)( u_k)‖∇ F_k(𝐰^(n))-∇ F(𝐰^(n))‖^2/(∑_k∈ K1_k^(n)( u_k))^2_≜ M_21
+2‖∇ F(𝐰^(n))‖^2
+2∑_i∈ K∑_j∈ K, j≠ i1_i^(n)( u_i) 1_j^(n)( u_j)(∇ F_i(𝐰^(n))-∇ F(𝐰^(n)))(∇ F_j(𝐰^(n))-∇ F(𝐰^(n)))/(∑_k∈ K1_k^(n)( u_k))^2_≜ M_22.
Next, we bound M_21 and M_22 in (<ref>) as follows.
Firstly, similar to (<ref>), we have
M_21 ≜∑_k∈ Kβ_k ‖∇ F_k(𝐰^(n))-∇ F(𝐰^(n))‖^2
≤∑_k∈ Kβ_kΛ_k^2,
where (<ref>) is due to 𝔼[‖∇ F_k(𝐰^(n))-∇ F(𝐰^(n))‖^2] ≤Λ_k^2 in Assumption 4.
Secondly,
M_22 =𝔼[∑_l=2^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l(1_k_1^(n)( u_k_1)=1 ∀ k_1∈ K_1, 1_k_2^(n)( u_k_2)=0 ∀ k_2∈ K_2 | ∑_k∈ K1_k^(n)( u_k)≥ 2)
·1/l^2∑_k_1∈ K_1∑_k'_1∈ K_1, k'_1≠ k_1(∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n)))
(∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n)))]
=𝔼[∑_l=2^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∏_k_1∈ K_1q_s,k_1∏_k_2∈ K_2(1-q_s,k_2)/1-∏_k∈ K(1-q_s,k)-∑_k∈ Kq_s,k∏_k'∈ K, k'≠ k(1-q_s,k')
·1/l^2∑_k_1∈ K_1∑_k'_1∈ K_1, k'_1≠ k_1(∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n)))
(∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n)))]
≤∑_l=2^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∏_k_2∈ K_2(1-q_s,k_2)/1-∏_k∈ K(1-q_s,k)-∑_k∈ Kq_s,k∏_k'∈ K, k'≠ k(1-q_s,k')
·1/l^2∑_k_1∈ K_1∑_k'_1∈ K_1, k'_1≠ k_1 q_s,k_1(∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n)))· q_s,k'_1(∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n)))_≜ M_221
where (<ref>) is due to ∏_k_1∈ K_1q_s,k_1≤ q_s,k_1q_s,k'_1, ∀ k,k'∈ K_1, k'≠ k.
Next, we define the average successful sensing probability as q_s=1/K∑_k∈ Kq_s,k, and we have
M_221
=𝔼[∑_k_1∈ K_1∑_k'_1∈ K_1, k'_1≠ k_1(q_s,k_1-q_s+q_s)(∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n)))
·(q_s,k'_1-q_s+q_s)(∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n)))]
=𝔼[∑_k_1∈ K_1∑_k'_1∈ K_1, k'_1≠ k_1[(q_s,k_1-q_s)(∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n)))
·(q_s,k'_1-q_s)(∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n)))
+(q_s,k_1-q_s)(∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n)))·q_s(∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n)))
+q_s(∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n)))·(q_s,k'_1-q_s)(∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n)))
+q_s(∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n)))·q_s(∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n))) ]]
≤1/2𝔼[∑_k_1∈ K_1∑_k'_1∈ K_1, k'_1≠ k_1[(q_s,k_1-q_s)^2‖∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n))‖^2 +(q_s,k'_1-q_s)^2‖∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n))‖^2
+(q_s,k_1-q_s)^2‖∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n))‖^2 + q_s^2‖∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n))‖^2
+q_s^2‖∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n))‖^2 + (q_s,k'_1-q_s)^2‖∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n))‖^2
+q_s^2‖∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n))‖^2 + q_s^2‖∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n))‖^2 ]] <ref>a
=∑_k_1∈ K_1∑_k'_1∈ K_1, k'_1≠ k_1(
((q_s,k_1-q_s)^2+q_s^2)
‖∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n))‖^2
+ ((q_s,k'_1-q_s)^2+q_s^2)‖∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n))‖^2)
=∑_k_1∈ K_1(
(l-1)((q_s,k_1-q_s)^2+q_s^2)
‖∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n))‖^2
+ ∑_k'_1∈ K_1, k'_1≠ k_1((q_s,k'_1-q_s)^2+q_s^2)‖∇ F_k'_1(𝐰^(n))-∇ F(𝐰^(n))‖^2) <ref>b
=∑_k_1∈ K_1(
(l-2)((q_s,k_1-q_s)^2+q_s^2)
‖∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n))‖^2
+ ∑_k_1∈ K_1((q_s,k_1-q_s)^2+q_s^2)‖∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n))‖^2)
=2(l-1)∑_k_1∈ K_1((q_s,k_1-q_s)^2+q_s^2)
‖∇ F_k_1(𝐰^(n))-∇ F(𝐰^(n))‖^2
≤2(l-1)∑_k_1∈ K_1((q_s,k_1-q_s)^2+q_s^2)Λ_k_1^2
<ref>c
where (<ref>) is due to ⟨𝐚, 𝐛⟩≤1/2 (‖𝐚‖^2+‖𝐛‖^2),
(<ref>) is due to | K_1|=l,
(<ref>) is due to 𝔼[‖∇ F_k(𝐰^(n))-∇ F(𝐰^(n))‖^2]≤Λ_k^2 in Assumption 4.
Substituting (<ref>) into (<ref>), we have
M_22 ≤∑_l=2^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∏_k_2∈ K_2(1-q_s,k_2)/1-∏_k∈ K(1-q_s,k)-∑_k∈ Kq_s,k∏_k'∈ K, k'≠ k(1-q_s,k')·2(l-1)/l^2∑_k_1∈ K_1((q_s,k_1-q_s)^2+q_s^2)Λ_k_1^2]
<∑_l=2^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∏_k_2∈ K_2(1-q_s,k_2)/1-∏_k∈ K(1-q_s,k)-∑_k∈ Kq_s,k∏_k'∈ K, k'≠ k(1-q_s,k')·2/l∑_k_1∈ K_1((q_s,k_1-q_s)^2+q_s^2)Λ_k_1^2]
≜∑_k∈ Kγ_k((q_s,k-q_s)^2+q_s^2)Λ_k^2,
where each γ_k, ∀ k∈ K is a function of {q_s,k}.
By substituting (<ref>) and (<ref>) into (<ref>), we obtain the bound of M_2 as follows.
M_2
<2∑_k∈ Kβ_kΛ_k^2+2∑_k∈ Kγ_k((q_s,k( u_k)-q_s)^2+q_s^2)Λ_k^2+2‖∇ F(𝐰^(n))‖^2
Finally, by substituting (<ref>) and (<ref>) into (<ref>),
we obtain Lemma <ref>.
‖𝐰^(n+1)-𝐰^(n)‖^2
≤2η^2(∑_k∈ Kβ_kσ_k^2/δ_k+2 ∑_k∈ Kβ_kΛ_k^2
+2 ∑_k∈ Kγ_k((q_s,k-q_s)^2+q_s^2)Λ_k^2)
+4η^2 ‖∇ F(𝐰^(n))‖^2.
§ PROOF OF COROLLARY 1
We consider that all UAVs have a uniform successful sensing probability q_s,k( u_k)=q_s, ∀ k∈ K.
We first explore the relationship between α_k, β_k, γ_k, ∀ k∈ K and q_s.
Then, we derive an upper bound on G.
§.§ Expression of {α_k}
Due to q_s,k( u_k)=q_s, ∀ k∈ K, we can simplify (<ref>) to
∑_l=1^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∏_k_1∈ K_1q_s,k_1∏_k_2∈ K_2(1-q_s,k_2)/1-∏_k∈ K(1-q_s,k)·1/l∑_k_1∈ K_1𝐠_k_1^(n)
=∑_l=1^Kq_s^l (1-q_s)^K-l/1-(1-q_s)^K·1/l·∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∑_k_1∈ K_1𝐠_k_1^(n)
=∑_l=1^Kq_s^l (1-q_s)^K-l/1-(1-q_s)^K·1/l· C_K-1^l-1∑_k∈ K𝐠_k^(n)<ref>a
=∑_l=1^K C_K^lq_s^l (1-q_s)^K-l/1-(1-q_s)^K·1/K·∑_k∈ K𝐠_k^(n)<ref>b
=1/K∑_k∈ K𝐠_k^(n)<ref>c
≜∑_k∈ Kα_k 𝐠_k^(n),
where (<ref>) is due to ∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∑_k_1∈ K_1𝐠_k_1^(n)= C_K-1^l-1∑_k∈ K𝐠_k^(n),
(<ref>) is due to K/l C_K-1^l-1=C_K^l,
and (<ref>) is due to ∑_l=1^K C_K^lq_s^l (1-q_s)^K-l/1-(1-q_s)^K=1. Therefore, we have α_k=1/K, ∀ k∈ K.
§.§ Expression of {β_k}
Due to q_s,k( u_k)=q_s, ∀ k∈ K, we can simplify (<ref>) to
∑_l=1^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∏_k_1∈ K_1q_s,k_1∏_k_2∈ K_2(1-q_s,k_2)/1-∏_k∈ K(1-q_s,k)·1/l^2∑_k_1∈ K_1‖𝐠_k^(n)-∇ F_k(𝐰^(n))‖^2
=∑_l=1^Kq_s^l (1-q_s)^K-l/1-(1-q_s)^K·1/l^2·∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∑_k_1∈ K_1‖𝐠_k^(n)-∇ F_k(𝐰^(n))‖^2
=∑_l=1^KC_K^l /l·q_s^l (1-q_s)^K-l/1-(1-q_s)^K·1/K∑_k∈ K‖𝐠_k^(n)-∇ F_k(𝐰^(n))‖^2<ref>a
≤2/K^2 χ q_s∑_k∈ K‖𝐠_k^(n)-∇ F_k(𝐰^(n))‖^2<ref>b
≜∑_k∈ Kβ_k ‖𝐠_k^(n)-∇ F_k(𝐰^(n))‖^2,
where (<ref>) can be obtained based on the same reason as obtaining (<ref>),
(<ref>) is obtained by Lemma 5 in <cit.>, which leads to ∑_l=1^K1/l C_K^l q_s^l (1-q_s)^K-l≤2/Kq_s,
parameter χ=1-(1-q_s,min)^K and q_s,min represents the minimum value of q_s,k( u_k).
Therefore, we have β_k=2/K^2 χ q_s, ∀ k∈ K.
§.§ Expression of {γ_k}
Due to q_s,k( u_k)=q_s, ∀ k∈ K, we can simplify (<ref>) to
∑_l=2^K∑_ K_1∪ K_2= K, | K_1|=l, | K_2|=K-l∏_k_2∈ K_2(1-q_s,k_2)/1-∏_k∈ K(1-q_s,k)-∑_k∈ Kq_s,k∏_k'∈ K, k'≠ k(1-q_s,k')·2/l∑_k_1∈ K_1((q_s,k_1-q_s)^2+q_s^2)Λ_k_1^2
=2∑_l=2^K C_K^l(1-q_s)^K-l/1-(1-q_s)^K-Kq_s(1-q_s)^K-1·1/K∑_k∈ K((q_s,k-q_s)^2+q_s^2)Λ_k_1^2 <ref>a
=2 ·∑_l=2^K C_K^l (1-q_s)^K-l/∑_l=2^K C_K^l (1-q_s)^K-lq_s^l·1/K∑_k∈ K((q_s,k-q_s)^2+q_s^2)Λ_k_1^2 <ref>b
≤ 2 ·∑_l=2^K C_K^l (1-q_s)^K-l/∑_l=2^K C_K^l (1-q_s)^K-lq_s^K·1/K∑_k∈ K((q_s,k-q_s)^2+q_s^2)Λ_k_1^2 <ref>c
=2/Kq_s^K∑_k∈ K((q_s,k-q_s)^2+q_s^2)Λ_k_1^2
≜∑_k∈ Kγ_k((q_s,k-q_s)^2+q_s^2)Λ_k^2,
where (<ref>) can be obtained based on the same reason as obtaining (<ref>),
(<ref>) is due to ∑_l=2^K C_K^l (1-q_s)^K-lq_s^l=1-(1-q_s)^K-Kq_s(1-q_s)^K-1,
and (<ref>) is due to q_s^l>q_s^K, ∀ l≥ 2.
Therefore, we have γ_k=2/K q_s^K, ∀ k∈ K.
Finally, by substituting the expression of {α_k}, {β_k}, {γ_k} into G bound, we obtain Corollary 1.
G≤η/K∑_k∈ K(σ_k^2/δ_k+Λ_k^2)
+2Lη^2/K^2 χ q_s∑_k∈ K(σ_k^2/δ_k+2Λ_k^2)
+4Lη^2/K q_s^K-2∑_k∈ KΛ_k^2.
IEEEtran
|
http://arxiv.org/abs/2306.02354v2
|
20230604131345
|
TIFF: Gyrofluid Turbulence in Full-f and Full-k
|
[
"Alexander Kendl"
] |
physics.plasm-ph
|
[
"physics.plasm-ph",
"physics.comp-ph"
] |
Institut für Ionenphysik und Angewandte Physik, Universität
Innsbruck, Technikerstrasse 25, 6020 Innsbruck, Austria
[email protected]
A model and code (“TIFF”) for isothermal gyrofluid computation of
quasi-two-dimensional interchange and drift wave turbulence in magnetized
plasmas with arbitrary fluctuation amplitudes (“full-f”) and arbitrary
polarization wavelengths (“full-k”) is introduced.
The model reduces to the paradigmatic Hasegawa-Wakatani model in the limits of
small turbulence amplitudes (“delta-f”), cold ions (without finite Larmor
radius effects), and homogeneous magnetic field.
Several solvers are compared for the generalized Poisson problem, that is
intrinsic to the full-f gyrofluid (and gyrokinetic) polarization equation, and
a novel implementation based on a dynamically corrected Fourier method is
proposed.
The code serves as a reference case for further development of
three-dimensional full-f full-k models and solvers, and for fundamental
exploration of large amplitude turbulence in the edge of magnetized plasmas.
TIFF: Gyrofluid Turbulence in Full-f and Full-k
Alexander Kendl
Received / Accepted
===============================================
§ INTRODUCTION: GYROFLUID MODELS AND POLARIZATION
Turbulence in magnetized plasmas, which is generally driven by ubiquitous
gradients of density and temperature, is a subject of considerable interest
and importance in fusion energy research with toroidal magnetic
high-temperature plasma confinement experiments such as tokamaks and
stellarators <cit.>.
Models for instabilities and turbulence in magnetized plasmas are based on
gyrokinetic, gyrofluid or fluid drift descriptions <cit.>.
Gyrofluid theory as a fluid-like model for the low-frequency dynamics of
gyrocenter densities in magnetized plasmas including finite Larmor radius
(FLR) effects has been pioneered in the late 1980s <cit.>.
In the early 1990s collisionless Landau closure has been applied to derive
three-dimensional gyrofluid models <cit.> for computations in slab
<cit.> and toroidal geometry <cit.>.
The approach was further generalized to electromagnetic
models <cit.>.
Energetically consistent 6-moment electromagnetic gyrofluid equations with FLR
effects have subsequently been systematically derived from the corresponding gyrokinetic
theory <cit.>.
Gyrofluid theory has its place in the hierarchy of magnetized plasma models
somewhere in between gyrokinetic and drift-reduced fluid theory
<cit.>.
The quality of a model of course not only depends on the rank within some
straight hierarchy, but also on multiple secondary modelling assumptions
that are introduced in any practical application and numerical simulation.
Gyrofluids have the advantage over fluid theories to consistently model
some kinetic effects such as finite Larmor radii or Landau damping.
They do not depend on (Braginskii type) collisional closures, but collisional
effects can be amended. Compared to gyrokinetic modelling of
a 5-dimensional distribution function, the 3-dimensional gyrofluid moment
models can be computationally much more efficient.
In “full-f” (a.k.a. “total-f”) gyrofluid models
<cit.>, similar to full-f gyrokinetic models
that evolve the total distribution function, no assumptions on the smallness of
fluctuation amplitudes f( x,t) = f_0( x) + f̃ ( x,t)
are made, instead of evolving only a small “delta-f” deviation f̃
from a static background equilibrium f_0 <cit.>.
In fusion plasmas this is particularly relevant for the edge and scrape-off
layer (SOL) regions, where large-amplitude blob or edge localized mode (ELM)
filaments dominate the transport <cit.>.
The development of full-f gyrokinetic codes started a bit more than a decade ago (see, for example,
refs. <cit.>),
and often involves long-term development in large research groups or
collaborations. All these (perpetually evolving) codes employ different levels
of secondary modelling assumptions and approximations, such as for geometry
(full torus vs. flux tube; circular vs. X-point; core vs. edge/SOL), for
polarization (linear vs. nonlinear), or for collision operators,
electromagnetic effects, and (SOL and core) boundary conditions.
Full-f gyrofluid models are directly related to gyrokinetics, as they are usually obtained by taking
appropriate fluid moments of the full-f gyrokinetic equations
<cit.>. Therefore the difference between delta-f and full-f gyrofluid
models involves the same choice of assumptions on the level of the gyrokinetic
Lagrangian action, which in principle requires a consistent decision between
either small perturbations but applicability for arbitrary wave lengths, or
arbitrary perturbation and flow amplitudes but restriction to long wave lengths
<cit.>.
For simulations of full-f gyrofluid models, over the last years two separate
code implementations were developed
alongside, which are both based on the same or similar model sets
<cit.>, but use largely different numerical approaches:
The modular open source code suite FELTOR (“Full-F ELectromagnetic code in TORoidal geometry”)
<cit.> has been primarily developed and maintained by
Wiesenberger and Held et al. <cit.>;
and the code family TOEFL is being primarily developed by Kendl and
includes the 2d code branch “TIFF”, which is reported herein.
The acronym TOEFL denotes “TOkamak Edge FLuid”, and TIFF is “TOEFL In
Full-f and Full-k”. In its first (drift-Alfvén fluid in C/C++)
implementation, TOEFL was largely tantamount to DALF <cit.> but in
another language.
The 3d delta-f gyrofluid version T3P in the TOEFL set had been designed to be
comparable with the GEM3 model and code by Scott <cit.> and Ribeiro
<cit.>, and was applied in
refs. <cit.>, and as 2d delta-f gyrofluid
model reduction in refs. <cit.>.
Full-f low-k 2d and 3d gyrofluid versions of TOEFL in the previous long wavelength
approximation implementation, with otherwise similar numerics as employed here for TIFF, have
already been applied in refs. <cit.>.
These previous delta-f and full-f low-k simulations can serve as test cases
for the implementation of the present full-f full-k model in its respective parameter limits.
The dual FELTOR vs. TOEFL code development strategy allows cross-verification,
but they for example also have optimized applicability for different
geometries. Whereas FELTOR can be applied to full global 3d torus geometry,
TOEFL is presently designed for locally field-aligned 3d flux-tube type
toroidal simulation geometry.
Whatever gyrofluid moment sets (such as thermal or isothermal, 2d or 3d)
are treated in the codes, all gyrocenter densities are evolved and coupled to obtain the
electric potential ϕ( x,t) via the polarization equation, which is
isomorphic to its full-f gyrokinetic equivalent, and is nothing but
the gyrocenter density (N_s( x,t)) formulation of quasi-neutrality,
summing up all species (index s) charge and polarization densities.
Until recently, both full-f gyrofluid code families have treated the usual, consistent
long-wavelength form of the full-f polarization equation <cit.>:
∑_s [ ∇·( m_s N_s( x) / B^2 ∇_⊥ϕ( x) ) + q_s G_1s N_s ( x) ]
= 0.
This can be re-cast into a generalized 2d Poisson equation:
∇·( ε∇_⊥ϕ ) = σ
where ε ( x) ≡∑_s m_s N_s ( x) / B^2, and
σ ( x) ≡ - ∑_s q_s G_1s N_s ( x).
Here m_s is the mass and q_s = Z_s e the charge of plasma species s with
gyrocenter density N_s. The plasma species include electrons and at least one
but often several ion species. In this present work only one main ion species
(index s=i) in addition to the electrons (index s=e) is considered.
The full-f gyrofluid generalization to multiple ion species has been discussed
in refs. <cit.>.
In the case of a pure e-i plasma, the polarization of the electrons can be
neglected because m_e ≪ m_i, so that ε ( x) ≈m_i N_i ( x) / B^2.
The gyrocenter densities N_s in σ are affected by gyro-averaging, which is
denoted by the operator G_1s. This can be expressed as G_1(b_s) =
G_0^1/2(b_s) in wavenumber space with the gyro-screening operator
G_0s≡ I_0(b_s) e^-b_s for b_s =(ρ_s k_⊥)^2 <cit.>.
This form (derived from velocity integration of the gyrokinetic pendant
including Bessel functions) makes use of the modified Bessel function of the
first kind I_0. The Larmor radius ρ_s = √(T_s m_s)/(eB) of the
particle species (s), with temperature T_s and mass m_s, normalizes the
k_x and k_y wavenumber components in the 2d plane locally perpendicular to the
magnetic field B = B e_z. The gyro-averaging operators can be
efficiently approximated <cit.> by their Padé forms G_0(b_s)
≈Γ_0(b_s) ≡ 1/(1+b_s) and G_1(b_s) ≈Γ_1(b_s) ≡ 1/(1+b_s/2).
The full dynamical nonlinearity ∇·( ε(x,y,t) ∇_⊥ϕ ) in the long-wavelength polarization equation is here retained.
Most full-f gyrokinetic implementations so far approximate this term by using either only radial
variations of the static background ε_0(x), or completely linearise it to ∼ε_0
∇_⊥^2 ϕ and so neglect spatial and temporal variations in the polarization.
However, the general delta-f form of the polarization equation
∑_s [ q_s e N_0 /T_s( Γ_0s -1 ) ϕ +
q_s N_0 Γ_1sÑ_s / N_0 ] = 0,
for small perturbations Ñ_s on a reference background density N_0,
differs from this linearised long-wave length polarization by an additional FLR
contribution proportional to the Larmor radius ρ_s, and only agrees in lowest order
Taylor expansion in b = (ρ_s k_⊥)^2 <cit.>:
e^2 Z_s N_0 /T_s( Γ_0s -1 ) ϕ = m_s
Z_s N_0/B^2∇_⊥^2/1-ρ_s^2 ∇_⊥^2ϕ
≠ m_s Z_s N_0/B^2∇_⊥^2 ϕ≡ε_0 ∇_⊥^2 ϕ
In any case consistency throughout the equations has to be ensured, for example in full-f models
by keeping E-cross-B energy terms in the generalised potential <cit.>.
Computation of delta-f gyrofluid turbulence with the exact model in comparison
to the delta-f long-wavelength approximation such as in
eq. (<ref>) show that the differences can be rather pronounced.
This has motivated the development of a consistent arbitrary wavelength
full-f gyrofluid polarization model <cit.>.
A first implementation of this “full-f full-k” model in the code FELTOR and
application to simulation of interchange driven “blob” perturbations in a
magnetized plasma is presented by Held and Wiesenberger in ref. <cit.>. The results therein
clearly show the relevance of arbitrary wave length polarization for
interchange drift modes.
In the present work, an independent code implementation (TIFF) of the “full-f
full-k” model and application to 2d drift instabilities and turbulence is given.
In the respective interchange blob mode limit, the results are cross-verified
with the recent FELTOR code results. Different solvers for the underlying
generalized Poisson problem are compared. An efficient solver based on a
dynamically corrected Fourier method is proposed and tested.
§ FULL-F FULL-K 2D GYROFLUID TURBULENCE MODEL
The arbitrary wave length full-f model derived in ref. <cit.> and first
applied by Held and Wiesenberger in ref. <cit.>, is in the following
implemented in a form which also includes the full-f formulation of the
gyrofluid generalization <cit.> of the quasi-2d (modified)
Hasegawa-Wakatani (HW) drift wave turbulence model <cit.>.
§.§ Gyrocenter density equations
The set of isothermal full-f full-k gyrofluid equations is based on dynamical
evolution equations of the gyrocenter densities for each species s
in the general form of a continuity equation:
∂_t N_s + ∇· ( N_s U_s ) = 0.
The velocities U_s = U_E + U_B + U_∥ include
the gyro-center E-cross-B drifts U_E = (1/B^2) B×∇_⊥ϕ_s, the gradient-B drifts U_B = (T_s/q_s B^2) B×∇_⊥ln (B/B_0), and parallel velocities U_∥.
In contrast to the corresponding fluid continuity equation for particle
densities, the gyrofluid formulation for gyrocenter densities does not contain
polarization drifts, whose effects are covered by the relation of the gyrocenter
densities within the polarization equation for the electric potential ϕ.
The resulting set of equations can be seen as a variation of the
vorticity-streamfunction formulation for a 2D Euler fluid model. The electric
potential here takes the role of a streamfunction for the advecting E-cross-B
velocity.
The gyrofluid potentials ϕ_s = Γ_1sϕ + Ψ_s in the E-cross-B
drift U_E include the gyro-average part, and the consistent full-f full-k form <cit.> of the
polarization contribution through the E-cross-B energy as Ψ_s = (m/2qB^2)
|∇_⊥√(Γ_0)ϕ|^2. For electrons ϕ_e ≈ϕ
can be used because of the small mass ratio and associated small Larmor radius
in comparison to ions. The potential ϕ is retrieved from solution of the polarization equation.
§.§ Polarization equation
The consistent arbitrary wavelength full-f polarisation equation has been
derived in ref. <cit.> and is used here in isothermal
(constant gyroradius) form as in ref. <cit.>:
∑_s q_s N_s - ∇· ( P_1 + P_2 ) = 0,
where the polarization densities are given as
P_1 = - (1/2) ∑_s q_s ∇_⊥Γ_1 ρ_s^2 N_s ,
P_2 = - ∑_s ( √(Γ_0)q_s N_s/Ω_s B√(Γ_0)∇_⊥ϕ).
Here, in the (isothermal) constant gyroradius approximation the Γ_1 and
√(Γ_0) operators are self-adjoint, and may for example be evaluated efficiently
in k space. The general form for ρ_s = ρ_s( x) is given in ref. <cit.>.
For the present implementation in the code TIFF only one ion species
(e.g. Deuterium) is treated, and the electron polarization is neglected.
The full-f full-k polarization equation then can be re-written as:
∇·( √(Γ_0)ε_i √(Γ_0)∇_⊥ϕ ) = σ
where σ ( x) ≡ - ∑_s Z_s Γ_1s N_s ( x),
with Γ_0 (here for ions only) and Γ_1s given in second order
accurate Padé approximation <cit.>, as given above, and
ε_i ( x)= m_i N_i ( x) / B^2.
In the constant gyroradius (isothermal) case the √(Γ_0) operators
commute with the ∇ operators, so that also √(Γ_0)∇·( ε_i ∇_⊥√(Γ_0)ϕ ) = σ holds.
By defining ϕ_G ≡√(Γ_0)ϕ and σ_G ≡√(Γ_0)^-1σ, the arbitrary wave length polarization
eq. (<ref>) can again be re-cast into the usual form of a
generalised 2d Poisson equation as ∇·( ε_i ∇_⊥ϕ_G ) = σ_G. This allows re-using common solvers (see Appendix)
for this type of problem to obtain ϕ_G and thus ϕ from known
ε_i and σ. For variable gyroradii ρ_s ( x) or
multiple polarizable species other forms of numerical solvers may have to be implemented.
The isothermal gyrofluid model consists basically of eqs. (<ref>)
for both electron and ion gyrocenter densities, which are coupled via
eq. (<ref>). The numerical implementation in non-dimensional form
is achieved by the usual drift normalization.
For this purpose the flux divergence contributions in
eqs. (<ref>) are first restated.
§.§ Divergence of perpendicular fluxes
The divergence of the E-cross-B flux part in eq. (<ref>)
provides the advective derivative term U_E ·∇ N_s = (1/B)
[ϕ_s,N_s] as the primary turbulent nonlinearity, and in the case of an
inhomogeneous magnetic field the “curvature” term N_s ∇·
U_E = N_s [ln B, ϕ_s ]. (Side note: this contribution is here referred to as
“curvature” although in a strict sense a quasi-2d model with straight
magnetic field lines B = B(x) e_z only has a gradient-B effect; however, in a toroidal
system both curvature and gradient-B contributions occur in combination, and can be
treated similarly within joint 3d expressions in (gyro-) fluid models if T_∥≈ T_⊥.)
The 2d advective drift operators ( e_z ×∇ϕ_s) ·∇ f ≡ [ϕ_s, f] are here expressed in Poisson bracket notation,
where [a,b]=(∂_x a) (∂_y b) - (∂_x b) (∂_y a).
The 2d gyrofluid potential field thus has the meaning of a stream function for the
turbulent E-cross-B flows.
The divergence of the gradient-B flux gives the diamagnetic curvature term
∇· (N_s U_B) = (N_s T_s / q_s B) [ln B, N_s]. In contrast
to delta-f models, these terms are not linearized and the full gyrocenter
densities are in this form retained as multipliers to the Poisson brackets.
§.§ Parallel closure
In the present quasi-2d model the parallel velocity U_∥ contribution can be
approximated by means of the Hasegawa-Wakatani closure <cit.>.
From the full-f electron parallel momentum equation <cit.> in the
quasi-stationary limit a relation in the form of a generalized Ohm's law is
obtained as e η_∥ J_∥ = T_e ∇_∥ln N_e
- e ∇_∥ϕ. With J_∥≈ - e N_e
U_∥ e, by assumption of a Spitzer resistivity η_∥ =
0.51 m_e ν_e /(n_e e^2), and applying that the electron gyrocenter density
N_e ≈ n_e can be approximated well by the electron particle density
n_e, an expression for the term ∇· (N_e
U_∥ e) ≡ - Λ_ce in eq. (<ref>) can be obtained
<cit.>. The ion velocity contribution in the parallel response can be
neglected because of the high ion inertia compared to electrons.
For this purpose a full-f non-adiabatic coupling parameter
α≡ T_e k_∥^2 / (η_∥ e^2 n_0 ω_0)
= n_e T_e k_∥^2 /(0.51 m_e ν_e n_0 ω_0) can be defined for
a selected parallel wavenumber k_∥ with ω_0=eB/m_i.
The electron collision frequency ν_e is in principle proportional to n_e,
inversely to T_e^3/2, and to the (in general density and temperature
dependent) Coulomb logarithm. The usual approximation of a constant Coulomb
logarithm is applied here, and in the present isothermal model only the
density dependence ν_e (n_e) ∼ n_e ∼ N_e needs to be discussed <cit.>:
in the classical delta-f fluid HW model the collision frequency is assumed
constant, so that α is a free constant parameter, whereas for the
present full-f model the dependence is kept as α = N_e α_0.
The final non-adiabatic “ordinary” HW drive term for electrons in the full-f
form <cit.> is Λ_c = α n_0 ω_0 [ (e ϕ/T_e) - ln (N/⟨ N
⟩)]. The angled brackets denote a zonal average, which in the present
2d geometry amounts to averaging in the y direction.
§.§ Normalization to dimensionless form
The preceding evaluation of perpendicular and parallel fluxes gives the
(still dimensional) density equations (<ref>) alternatively as
∂_t N_s + 1/B [ϕ_s, N_s] + N_s/B [B, ϕ_s] +
N_s T_s/ q_s B^2 [B, N_s] = Λ_cs,
where the nonadiabatic coupling term Λ_cs only is contributing for
electrons (Λ_ci≡ 0).
Time t in the partial time derivative is normalized with respect to L_⊥ / c_0, where c_0 =
√(T_e/m_i) is the thermal speed, and L_⊥ is a “typical” perpendicular
length scale.
For local pressure gradient driven systems this is usually set equal to the background
gradient length, here L_⊥≡ L_n = | ∂_x ln n_0(x) |^-1, so that temporal
normalization relates to the diamagnetic drift frequency ω_∗ = c_0 / L_n.
For global gradient driven systems often the minor torus radius L_⊥≡ a is rather used. In case of model systems
with absent background gradient, such as for interchange driven “blob”
setups, L_⊥≡ρ_0 ≡√(T_e m_i)/(eB) is chosen as the
drift scale, and time normalization is related
to the ion gyration frequency ω_0 = c_0 / ρ_0. These choices can be
set by specifying δ≡ρ_0 / L_⊥ as a free input parameter.
The perpendicular spatial derivatives in x and y are always normalized with
respect to ρ_0, so that ∂̂_t ≡ (L_⊥/c_0)
∂_t, and {·,·}≡ρ_0^2 [·,·].
Densities and the magnetic field are normalized to reference quantities N̂ = N / N_0 and B̂ = B / B_0, respectively, and the electric potential as ϕ̂≡ e ϕ / T_e. Temperature ratios are defined as τ_s ≡ T_s/(T_e Z_s).
Dividing eq. (<ref>) by N_s and multiplying with L_⊥/c_0 gives:
∂̂_t lnN̂_s + 1/B̂δ{ϕ̂_s,
lnN̂_s } = Λ̂_cs + Λ̂_Bs ,
The normalized dissipative coupling term
Λ̂_ce≡ (L_⊥/c_0) (Λ_ce/N̂_e) = α̂[ ϕ̂- ln(N_e/⟨ N_e ⟩) ],
with α̂= δ (L_⊥/L_∥)^2(ω_ce/ν_e)
(k̂_∥^2 / 0.51) as a free parameter, appears only in the
electron equation, and Λ_ci≡ 0. The “modified HW” model for toroidally more concordant
zonal flow treatment requires to use Λ̂_ce = α̂[ (ϕ̂- ⟨ϕ̂⟩ ) - ( ln N_e - ⟨ln N_e ⟩ ) ]
instead <cit.>. In these forms both are directly consistent with their respective common
delta-f limits (ln N_e →Ñ_e / N_0).
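For illustration, the normalized modified coupling term only involves pointwise field values and zonal (y) averages. A minimal sketch (the array layout axis 0 = x, axis 1 = y and the constant α̂ handling are assumptions of this sketch, not the TIFF conventions) is:

import numpy as np

def zonal_average(f):
    # zonal (y) average at fixed x; array layout: axis 0 = x, axis 1 = y
    return f.mean(axis=1, keepdims=True)

def hw_coupling_modified(phi, lnNe, alpha):
    # Lambda_ce = alpha * [ (phi - <phi>) - (ln N_e - <ln N_e>) ]
    return alpha * ((phi - zonal_average(phi)) - (lnNe - zonal_average(lnNe)))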
In the present quasi-2d setup it is assumed that the
magnetic field has locally only a weak dependence in B(x), so that δ^-1{ln B, f }≡ - κ∂̂_y f ≡ - K (f),
with κ≡ - δ^-1(∂̂_x ln B) taken as a constant parameter.
This is also consistent with the assumption of a constant gyroradius,
throughout the computational domain which is supposed to have a radial
extension L_x≪ R much smaller than the major torus radius R. For global
simulations across the whole torus cross section this condition would need to
be relaxed.
The x direction is here chosen to be local radially outwards on the outboard
midplane side of a torus (compare Fig. <ref>), where κ >0, and
B̂(x) ≈ 1 - κ x ≈ 1 is weakly decreasing with
x. This gives Λ̂_Bs≡K (ϕ̂_s) + τ_sK (lnN̂_s).
The curvature strength can be evaluated as κ≈ 2 ρ_0 / R.
The temperature ratio is always τ_e = -1 for electrons, and a free
parameter for ions in the order of τ_i ∼ + 1.
The ion temperature ratio τ_i thus controls the FLR effects, and in
addition here also the ion diamagnetic contribution to gradient-B
(interchange) drive for inhomogeneous magnetic fields (κ≠ 0).
The formulation in terms of lnN̂ as the dynamical variable for the
time evolution of densities ensures that N̂ is always positive definite,
which is a requirement for the solution of the generalized Poisson equation,
and directly reduces to the delta-f density equation as lnN̂≈N̂ - 1 = (N_0 + Ñ )/ N_0 - 1 = Ñ / N_0 for small relative
fluctuation amplitudes Ñ≪ N_0.
§.§ Normalized polarization
In a quasi-neutral magnetized plasma under fluid drift ordering, the divergences of
the above perpendicular and parallel fluxes are balanced by the divergence of
the polarization drift. In gyrofluid and gyrokinetic models this is taken into
account by enforcing quasi-neutrality ∑_s q_s n_s ≡ 0, and replacing
the particle densities n_s herein in terms of their respective gyrocenter and
polarization densities, with the need to determine a consistent electric
potential as a result.
The polarization equation (<ref>) is made non-dimensional by
applying the same normalizations as above, which results in
∇·( √(Γ_0)ε̂_i √(Γ_0)∇_⊥ϕ̂ ) = σ̂,
with ε̂_i = N̂_i / B̂^2
and σ̂= - ∑_s Z_s Γ_1sN̂_s ( x).
The gyration operators are already dimensionless by definition.
§.§ Selection of edge and scrape-off-layer model scenarios
The usual applications for quasi-2d simulations of nonlinear drift dynamics in
magnetized plasmas are, for example, fundamental studies on either (a) gyrofluid turbulence
and zonal flows with FLR effects, or on (b) FLR effects on
interchange dynamics of warm “blob” perturbation propagation. Both cases are
commonly treated in separate studies, where for (a) Λ_Bs≡ 0,
and instead for (b) Λ_c ≡ 0 is set respectively, for better
separability and understanding of basic underlying mechanisms.
“Blob” transport is regarded as most relevant in the scrape-off-layer (SOL)
of fusion plasmas. Studies with single or few seeded blobs are important to
reveal fundamental mechanisms, but in tokamaks blobs are presumed to be generated rather
“randomly” around the separatrix, so that turbulent drift wave vortices and
zonal flow effects from the closed field line (CFL) edge region will play a
large role for consistent studies of “blobby” (intermittent) SOL
transport. Several 2d fluid codes (for example as in
refs. <cit.>) take into account both drift wave
and interchange effects and different background conditions by assigning two
regions, that are defined by a “separatrix” location x_s within the same 2d
computational domain.
An additional effect that should be taken into account in the SOL region is
sheath coupling of the open field lines with limiter or divertor material
walls. This effect intrinsically includes parallel kinetics along the magnetic
field direction. Gyrokinetic, gyrofluid and fluid models for sheath coupling
conditions may be approximated under assumption of Bohm conditions. Any
perpendicular 2d fluid-type approximation will likely miss relevant physics here, but
can in principle be included also in the present 2d full-f gyrofluid
model. For test purposes presently a (partly inconsistent) sheath instability
“toy model” is optionally included in the TIFF code with additional coupling
terms added to the right hand side of eq. (<ref>), which needs
further improvement before any application.
For example, a 2d delta-f form of the gyrofluid sheath coupling terms was
introduced by Ribeiro <cit.>, as
N_e Λ̂_Se→γ_D [(1+Λ_D) Ñ_e- ϕ̂]
and Λ̂_Si→γ_D Ñ_e, with sheath coupling
parameter γ_D as defined in ref. <cit.>, and Λ_D =
log(√(m_i/2 π m_e)). This model add-on will in principle allow the
examination of simplified edge-SOL coupled turbulence and flow dynamics including
turbulent generation of SOL blobs, once a consistent full-f full-k sheath
coupling term is derived and implemented in the future. For this reason, the
present paper only discusses simulations without sheath coupling.
All coupling terms Λ̂ on the right hand side of eq. (<ref>)
are then selectively only applied in the respective regions of interest, for
example formally by multiplication with Heaviside type step functions
λ(x_s) for a given relative separatrix position 0 ≤ (x_s/L_x) ≤ 1.
Application cases and systematic physics studies will be presented
elsewhere. Here the focus is on introduction of the code TIFF and its
presently underlying model as a reference for later applications and
further developments, such as toroidal geometry and inclusion of thermal and
electromagnetic dynamics in a (field-aligned) 3d extension.
A main aspect here is also on introduction and testing of an efficient dynamically
corrected Fourier solver for the generalized Poisson problem in the polarization.
§ NUMERICAL SOLUTION ALGORITHM
The normalized equations solved in the TIFF code are:
∂̂_t lnN̂_e + {ϕ̂, lnN̂_e } = λ_B Λ̂_Be + λ_S Λ̂_Se + λ_c
Λ̂_c
∂̂_t lnN̂_i + {ϕ̂_i, lnN̂_i } = λ_B Λ̂_Bi + λ_S Λ̂_S i
∇·( ε̂_i ∇_⊥ϕ̂_G ) = σ̂_G
The general procedure for solution of this set of equations is as follows:
(#1) Specify N̂_e( x) and N̂_i( x) on an equidistant grid in a 2d
rectangular (x,y) domain, either as initial condition, or subsequently
updated in each time step by eqs. (<ref>-<ref>).
(#2) Compute N̂_Gi≡Γ_1iN̂_i. The
constant gyroradius assumption allows to evaluate all gyro-averaging
operations efficiently in k space, here N̂_Gi( k) = N̂_i(
k)/(1+τ_i k̂^2/2).
In the TIFF code presently the 2d discrete Fourier transform from the FFTW3
library <cit.> is used for transformations between k and x space representations.
(#3) Apply boundary conditions (see next section) on N̂_e, N̂_i and N̂_Gi.
(#4) Prepare the input functions to eq. (<ref>) as σ̂_G
= √(Γ_0)^-1σ̂= √(1+τ_ik̂^2) σ̂ with
σ̂= N̂_e - Z_i N̂_Gi, and ε̂_i = N̂_i/B̂^2.
(#5) Obtain ϕ̂_G from eq. (<ref>) with one
of the solvers discussed in the Appendix.
(#6) Compute the electric potential from ϕ̂= √(Γ_0)^-1ϕ̂_G also via k space.
(#7) Compute the gyrofluid ion potential ϕ̂_i = Γ_1iϕ̂+ |∇̂_⊥ϕ̂_G |^2 / (2 B̂^2).
(#9) Update N̂_e and N̂_i in time through
eqs. (<ref>-<ref>) and return to step (#1).
The time step update (#9) first requires evaluation of the advective Poisson brackets and
all coupling terms Λ̂.
The brackets {·, ·} are here presently solved with the energy
and enstrophy conserving (but not shock capturing) fourth order Arakawa scheme
<cit.>.
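For reference, a minimal sketch of the classic second-order Arakawa bracket on a periodic grid is given below; the fourth order variant used in TIFF combines such stencils on the compact and extended grids, and the bounded x direction is treated as periodic here only for brevity (axis 0 = x, axis 1 = y):

# Second-order Arakawa (1966) bracket [zeta, psi] = d_x zeta d_y psi - d_y zeta d_x psi,
# energy- and enstrophy-conserving average of the J++, J+x, Jx+ stencils.
import numpy as np

def arakawa_bracket(zeta, psi, hx, hy):
    def s(f, dx, dy):  # field shifted by (dx, dy) grid points (periodic wrap)
        return np.roll(np.roll(f, -dx, axis=0), -dy, axis=1)
    zxp, zxm, zyp, zym = s(zeta, 1, 0), s(zeta, -1, 0), s(zeta, 0, 1), s(zeta, 0, -1)
    zpp, zpm, zmp, zmm = s(zeta, 1, 1), s(zeta, 1, -1), s(zeta, -1, 1), s(zeta, -1, -1)
    pxp, pxm, pyp, pym = s(psi, 1, 0), s(psi, -1, 0), s(psi, 0, 1), s(psi, 0, -1)
    ppp, ppm, pmp, pmm = s(psi, 1, 1), s(psi, 1, -1), s(psi, -1, 1), s(psi, -1, -1)
    jpp = (zxp - zxm) * (pyp - pym) - (zyp - zym) * (pxp - pxm)
    jpx = zxp * (ppp - ppm) - zxm * (pmp - pmm) - zyp * (ppp - pmp) + zym * (ppm - pmm)
    jxp = zpp * (pyp - pxp) - zmm * (pxm - pym) - zmp * (pyp - pxm) + zpm * (pxp - pym)
    return (jpp + jpx + jxp) / (12.0 * hx * hy)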
In Λ̂_Bs the curvature operators K (f) = κ∂̂_y f are evaluated by (fourth order) centered finite differencing
over the (periodic) y direction.
Evaluation of Λ̂_Ss and Λ̂_ce are
straightforward. Λ̂_ce includes calculation of zonal averages
⟨ f ⟩ (x) = (1/n_y) ∑_j f_i,j, which on a 2d rectangular
local grid simply requires summation over all n_y grid points j of the y direction.
Time step updating of f ≡lnN̂_s in the form ∂_t f = F here uses
a third order accurate three-step Adams-Bashforth type method with Karniadakis
weights <cit.>:
f^(t+1) = c_0 f^(t) - c_1 f^(t-1) + c_2 f^(t-2)
+ c_F Δ t [ 3 F^(t) - 3 F^(t-1) +
F^(t-2) + Λ̂_ν^(t)]
where c_0 = 18/11, c_1 = 9/11, c_2 = 2/11 and c_F = 6/11.
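A sketch of this update step, with the already evaluated right hand sides F^(t), F^(t-1), F^(t-2) and the hyperviscosity term passed in as arrays (histories ordered newest first), is:

def karniadakis_step(f_hist, F_hist, dt, hyperdiff=0.0):
    # f_hist = [f^(t), f^(t-1), f^(t-2)], F_hist ordered the same way;
    # hyperdiff stands for the evaluated stabilization term Lambda_nu^(t).
    c0, c1, c2, cF = 18.0 / 11.0, 9.0 / 11.0, 2.0 / 11.0, 6.0 / 11.0
    return (c0 * f_hist[0] - c1 * f_hist[1] + c2 * f_hist[2]
            + cF * dt * (3.0 * F_hist[0] - 3.0 * F_hist[1] + F_hist[2] + hyperdiff))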
The (normalized) time step size Δ t has to be small enough for CFL stability.
For further numerical stabilization of the otherwise explicit scheme an
artificial sub-grid type viscosity term Λ_ν is added.
For this a hyperviscosity Λ̂_ν = - ν_4 ∇^4 lnN̂_s is here applied. The coefficient 0 ≤ν_4 ≪ 1 is
heuristically chosen to prevent grid instability at smallest scales, as a sink
for the nonlinear direct vorticity cascade, and to (slightly) damp out Gibbs type
noise that can appear at under-resolved strong gradients due to the not shock
capturing nature of the Arakawa discretization. The value of ν_4(h) needs to
be adapted when spatial grid resolution h is changed to achieve optimum
results. If desired, for example for comparability to other code implementations, an
ordinary “physical” viscous diffusion term Λ̂_μ = + μ (∇^2
N̂_s) / N̂_s could be added to the right hand side of
eqs. (<ref>-<ref>). For usual application of the
gyrofluid model to hot fusion edge plasmas the actual viscosity μ in general
would be too small to be resolved efficiently by direct numerical simulation.
For solution of the generalized Poisson problem as in step (#5) of the
outlined algorithm, presently three methods are implemented in TIFF as
described in more detail in the Appendix:
An iterative preconditioned conjugate gradient (PCG) solver, an iterative red-black
successive over-relaxation (SOR) solver, and a novel dynamically corrected
Fourier (DCF) solver. All solvers make use of the results for ϕ^(t-1)
and ϕ^(t-2) of the previous time steps, either for extrapolated
initialisation of the iterative schemes, or for the (therefore denoted
“dynamical”) correction of the approximate Fourier method.
The DCF method leads to predictable run times of the generalized Poisson
problem, that only depend on the grid size but not on other simulation
parameters, whereas the number of iterations and run time can vary
significantly for the PCG and SOR solvers for any specified accuracy, in
particular when densities (and thus ε̂_i( x)) are strongly inhomogeneous.
§ INITIAL AND BOUNDARY CONDITIONS
Initially the gyrocenter density N̂_i(x,y) and N̂_e(x,y) fields
have to be specified. For full-f initial background profiles a radially exponential
decline is set, each with N̂_0(x) = N̂_L exp(-x / d) for d = L_p / ln(n_L/N_p).
Here L_p = x_s denotes either the width of the pedestal region if a
separatrix at x_s<L_x is applied, or the width L_x of the whole x domain
if no SOL region is treated. The initial lnN̂_s profiles (and in the delta-f limit
the Ñ profiles) are thus linear.
On top of this initial background, either single perturbations, such as a
Gaussian density blob, or a pseudo-random “bath” of modes, is added with a
given amplitude.
The initial ion gyrocenter density is either set equal to the electron
density, or a “vorticity free” initialisation with
N̂_i = Γ_1^-1N̂_e is used so that σ̂=0.
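A schematic initialization along these lines might read as follows (parameter names are illustrative; the decay length is chosen here, as an assumption of this sketch, such that the background profile connects the two boundary densities):

# Sketch of the initialization: exponential background profile plus a Gaussian
# blob perturbation; axis 0 = x, axis 1 = y, box sizes in units of rho_0.
import numpy as np

def init_densities(nx, ny, Lx, Ly, N_L=1.5, N_R=0.5, blob_amp=0.5, blob_width=5.0):
    x = np.linspace(0.0, Lx, nx, endpoint=False)[:, None]
    y = np.linspace(0.0, Ly, ny, endpoint=False)[None, :]
    d = Lx / np.log(N_L / N_R)                     # decay length from the boundary values
    N0 = N_L * np.exp(-x / d) * np.ones((1, ny))   # background profile
    blob = blob_amp * np.exp(-((x - 0.5 * Lx)**2 + (y - 0.5 * Ly)**2) / blob_width**2)
    Ne = N0 + blob
    Ni = Ne.copy()   # alternatively a "vorticity free" init Ni = Gamma_1^{-1} Ne
    return Ne, Ni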
Boundary conditions are applied in several instances: on density profiles in order to maintain
a background gradient if required, for the intrisic boundary value problem of
the Poisson equation, for solution of the gyro operators in k space, and on finite
difference operators in x space.
The “poloidal” y direction is assumed periodic. The box length L_y in units of
ρ_0 has to be chosen much larger than typical perpendicular correlation lengths for
turbulence simulations, or much larger than blob or vortex scales for simulation
of such structures, in order to minimize self-interaction effects across the
periodic y boundary.
§.§ Density profile boundary conditions
Density profiles could be maintained by specifying source terms Λ̂_Q(x) around the inner (x=0) radial boundary and sink terms around
the outer (x=L_x) boundary.
The presently considered application scenarios are for example a full-f
gyrofluid generalization of HW turbulence simulations, or seeded blob
simulations on a (usually) constant background density. For comparability to
common delta-f implementations of these scenarios it is adequate to maintain an
average density profile by prescribing fixed boundary densities N̂(x=0)
≡N̂_L and N̂(x=L_x) ≡N̂_R, while the full densities
may still self-consistently evolve in between. Seeded blob simulations
could for example use N̂_L = N̂_R = 1.
“Classical” fluid or delta-f gyrofluid HW turbulence simulations usually decouple a constant
background gradient in the advective derivative as δ^-1{ϕ,Ñ +N_0(x) } = δ^-1{ϕ,Ñ} + g ∂̂_y ϕ with
a gradient parameter g = δ^-1 (ρ_s/L_n). For the “diamagnetic
drift frequency” normalization of time this amounts to g=1, but g could
also be kept as a free parameter <cit.>.
The restriction in delta-f fluid or gyrofluid simulations to small
fluctuations that are decoupled from the background profile enables the use of
periodic boundary conditions in x on Ñ and ϕ. This is not feasible
any more for either delta-f or full-f simulations with global profile evolution.
If δ is specified as an input parameter, then this needs to be chosen
consistently with the required density boundary values for profile driven
simulations as δ = (N̂_L - N̂_R)/L_x. When for example the
radial domain size is L_x = 64 in units of ρ_0, then setting N̂_L =
1 + 0.5 and N̂_R = 1 - 0.5 specifies δ =1/64 ≈ 0.015.
The typical experimental tokamak edge steep density gradient pedestal regions,
which are the usual scenario of interest for HW model simulations, have widths
in the order of around 50 to 100 ρ_0 and drift scales δ∼ O (10^-2).
§.§ Parametric transition to delta-f limit within full-f equations
Specification of the density boundary in this way allows for a consistent treatment
and verification of the delta-f limit of small perturbation amplitudes on
large relative background densities within the full-f code. For gradient
driven turbulence this can be achieved by reducing all of the background
variation, drift scale, and initial perturbation amplitudes by the same small
factor ϵ. For example, setting N̂_L = 1 + ϵ· 0.5
and N̂_R = 1 - ϵ· 0.5 for the same box size of L_x = 64
gives δ≈ϵ· 0.015. When initial (for example blob) perturbation
amplitudes ϵ·Δ n are in the full-f model chosen smaller by the same
factor (for example ϵ = 1/100) as in a corresponding delta-f code
(with initial amplitude Δ n), then this corresponds to the respective
delta-f setup and enables a direct comparison (and code cross-verification) of
this limit when ϵ→ 0.
§.§ Radial boundary density condition
It is desirable to reduce density fluctuations and maintain zero vorticity
and/or flows on narrow inner and outer radial boundary layers. The boundary
layer region of “width” L_β≪ L_x is here defined by a function β(x̂) =
1 - a_L exp[-x̂^2/L_β^2] - a_R exp[-(1-x̂)^2/L_β^2]
with x̂ = x / L_x. The parameters a_L and a_R are set to 1 for
boundary gradient driven cases (as described above), or can be respectively
set to 0 if source and/or sink terms Λ̂_Q(x), or a free outflow
condition on the outer boundary, are activated.
The vorticity around the radial boundaries can be approximately set to zero,
when equivalently the right hand side σ̂= N̂_e( x) - Z_i
Γ_1iN̂_i ( x) of the polarization equation, corresponding
to the first order polarization density with FLR effects, is set to zero.
For this purpose it is most convenient to first define the boundary values of the gyroaveraged ion
gyrocenter density N̂_Gi and then compute the consistent electron and
ion gyrocenter densities in each time step (#3 in the algorithm), by re-setting
N̂_Gi(x,y) = Γ_1iN̂_i (x,y)→ [ N̂_Gi(x,y)-
N̂_0(x) ] ·β(x̂) + N̂_0(x). This leaves the bulk
density unchanged and gives a smooth radial transition to the initial
profile values N̂_0(x) only in narrow regions of width L_β.
The corrected ion gyrocenter density is then computed by N̂_i =
Γ_1i^-1N̂_Gi in k space and re-set in the boundary
region. The electron density is re-set to N̂_e→ [ N̂_e - N̂_Gi ] ·β(x̂) + N̂_Gi, which ensures zero
σ̂ in the boundary region. A further correction could be
applied to alternatively ensure zero E-cross-B vorticity Ω̂=
∇_⊥^2 ϕ̂ at the boundaries, but tests have shown no
significant changes or advantages, as the E-cross-B vorticity is reduced
already jointly with the generalized vorticity at the boundaries in this
approach. (In the delta-f limit both cases are identical.)
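Schematically, the boundary blending described above can be summarized as in the following sketch, where gamma1_inv stands for the inverse Padé gyro-average (e.g. evaluated in k space) and all names are illustrative:

import numpy as np

def boundary_blend(nx, L_beta_frac=0.05, a_L=1.0, a_R=1.0):
    # beta(x) = 1 - a_L exp[-x^2/L^2] - a_R exp[-(1-x)^2/L^2], x normalized to [0,1]
    xh = np.linspace(0.0, 1.0, nx)[:, None]
    return (1.0 - a_L * np.exp(-xh**2 / L_beta_frac**2)
                - a_R * np.exp(-(1.0 - xh)**2 / L_beta_frac**2))

def apply_radial_boundaries(Ne, NGi, N0_profile, gamma1_inv, beta):
    NGi_new = (NGi - N0_profile) * beta + N0_profile   # damp towards initial profile
    Ni_new = gamma1_inv(NGi_new)                       # consistent ion gyrocenter density
    Ne_new = (Ne - NGi_new) * beta + NGi_new           # enforces sigma -> 0 in the layers
    return Ne_new, Ni_new, NGi_new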
This condition σ̂|_b.c.≡ 0 avoids strong vorticity
gradients at the x boundaries (and by this possibly related numerical
instabilities), and in addition ensures well-defined solution of the
polarization equation (see Appendix).
Existence of a unique solution to the generalized Poisson problem
· P = ·εϕ = σ
requires fulfillment of a compatibility boundary condition. Taking the domain
integral over the (x,y) area S, one has ∫_S d x · P = ∫_S d x σ, and therefore ∫_δ S dl P· n_S= ∫_S d x σ. This is ensured by the above
vorticity-free and flow-free (n_S ·ϕ = ∂_x ϕ
= u_y ≡ 0) conditions on the boundary δ S in x (and periodicity in y).
The condition correponds to global conservation of the polarization charge density.
§.§ Mirror padding for Fourier transforms
The algorithm involves four evaluations of gyro-operators (in steps
#2,3,4,6), which is in the present isothermal model achieved with high
accuracy in k space by applying (FFTW3 library) Fourier transforms.
Actually the evaluation of N̂_i = Γ_1i^-1N̂_Gi = [ 1 -
(1/2) τ_i ∇^2] N̂_Gi in Padé approximation for the vorticity
free boundary condition (step #3) could alternatively be achieved by
(considerably faster but less accurate) finite differencing in x
space. Although more costly, this evaluation is here for consistency also done in k space.
Standard Fourier transformation requires periodic boundary conditions, else
discontinuities would introduce Gibbs noise artefacts.
The radial “physical” domain 0 ≤x̂≤ 1 however usually includes
density profiles with N̂_s (x̂ = 0) ≠N̂_s(x̂ = 1).
For this reason, ghost domains in an extended x̂ direction are introduced for
the (x,y) field arrays before solution by Fourier transforms to k space,
and also before solution of the polarization equation.
Doubling the (initially “quarter-wave” physical) domain to 0 ≤x̂≤ 2, by filling the region 1 < x̂≤ 2 with the symmetrically mirrored values f (1 + x̂) ≡ f(1 - x̂)
for 0 ≤x̂≤ 1, ensures (“half wave”) x̂ periodicity of field functions.
Full-wave input functions to the gyro-averaging Fourier transforms are achieved by a
further anti-symmetric domain doubling with boundary offset correction: the
four-fold extended array includes f(4 - x̂) = 2 f(0) - f (x̂) for 2
< x̂≤ 4. This ensures full-wave radial representation of the
densities in the gyro-averaging Fourier transforms, but for a four-fold
computational cost.
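A compact sketch of this two-stage mirror padding, glossing over the exact node- versus cell-centered mirror conventions, is:

# Extend a field that is periodic in y but non-periodic in x to a four-fold x
# domain that is smooth and periodic for the FFT-based gyro operators
# (axis 0 = x, axis 1 = y).
import numpy as np

def mirror_pad_x(f):
    half = np.concatenate([f, f[::-1, :]], axis=0)    # symmetric "half-wave" doubling
    anti = 2.0 * f[0:1, :] - half[::-1, :]            # anti-symmetric copy, offset 2*f(0)
    return np.concatenate([half, anti], axis=0)       # four-fold extended array

def unpad_x(f_ext, nx):
    return f_ext[:nx, :]                              # recover the physical quarter domain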
It is as usual favourable to use power-of-two numbers of grid points in the
x̂ and ŷ directions, so that the FFTW3 library will make use of
Fast Fourier Transform (FFT) algorithms. Any other grid size can be chosen, but
FFTW3 then automatically uses somewhat (depending on the grid size) slower
Discrete Fourier Transform (DFT) methods.
§ PARALLELIZATION AND REPRODUCIBILITY
The present development and production run platforms for the TIFF code are
multi-core shared memory office workstations, so that multi-core and/or multi-thread
parallelization is simply achieved by OpenMP (OMP) parallelization of grid array
sized double loops, and use of the respective OMP library of FFTW3 for the
Fourier transforms.
The frequent transistions between single and parallel regions, required by the
above algorithm, prevent good scaling. Efficient speed-up is usually achieved
(depending on the input parameters and on the hardware) when between 8 and 32
parallel threads are used.
Bitwise reproducibility of subsequent executions on the same system can be
achieved when the “ESTIMATE” flag for the transform planner routine of the
FFTW3 library is used, which is desirable for verification and testing
purposes. The “MEASURE” flag is faster in execution (which would be
desirable for long production runs with for example millions of time steps)
but not regularly bitwise reproducible <cit.>, because it might choose
different algorithms depending on the system background load, which can be quickly noticeable in the
turbulent phase of simulations due to the highly nonlinear nature of the
present set of equations. This may not be relevant if only statistical
diagnostic quantities of a saturated fully turbulent state are of interest.
The TIFF code presently uses only the FFTW and OMP libraries and can be compiled
for example with the GNU gcc compiler, which are all available as standard in
most usual Linux distributions. This ensures usability on most consumer PCs or
workstations.
§ DIAGNOSTIC OUTPUT
Diagnostic outputs are produced only every n_D-th time step. For n_D
large (in the order of hundreds or thousands of time steps) this
allows real-time diagnostics with little post-processing, which greatly
reduces the necessary output storage space (in comparison to writing all arrays
for each computational time step).
The 2d dynamical field arrays N̂_e(i,j), N̂_i(i,j), ϕ̂(i,j), ω̂(i,j), σ̂(i,j), and Ñ_e(i,j) = N̂_e(i,j) - N̂_0(i) are regularly written out (e.g. to a RAM disc), and can be viewed
(already during run time) with any 2d visualization software (for which
presently a simple gnuplot script is used). In addition, 1d cross sections of
several arrays f(i,j_0) and f(i_0,j), and averaged radial profiles
⟨ f ⟩_j (i) are written.
Spectra ⟨ f(k_x) ⟩_j and ⟨ f(k_y) ⟩_i are computed
at each diagnostic output step for several quantities, such as kinetic energy,
enstrophy, and density and potential power spectra.
Energetic and transport quantities are recorded as time traces of (x,y) domain averages.
The (normalized) E-cross-B advective electron particle transport is obtained as
Q_n(t) = (1/S) ∫ d x N̂_e(x,y) ∂_y ϕ̂(x,y), where S= L_x L_y,
and the y derivative is provided by simple centered finite differencing
(as also in the following diagnostics).
The full-f global thermal free energy is given by (compare ref. <cit.>):
E_T = E_Te + τ_i E_Ti with
E_Ts = (1/S) ∫ d x [ N̂_s ( lnN̂_s - lnN̂_0 ) - (N̂_s - N̂_0)].
The kinetic energy is E_K = (1/2S) ∫ d x N̂_i (∇̂_⊥ϕ̂_G)^2.
The total energy is an ideal conserved quantity, and
can be used as a diagnostic for saturation of a turbulent state.
Further diagnostics can be included, for example output and computation of
difference norms for solver testing against a constructed solution, or
center-of-mass calculation of an interchange driven “blob”.
§ DELTA-F LIMIT
The TIFF code also implements the corresponding 2d isothermal delta-f set of
gyrofluid equations concurrent to the full-f model. Where possible both use
the same procedures, or enter forks for specific treatment. This allows
cross-verification of the full-f model in its small-amplitude limit with the
original delta-f model within the same code and for equal methods and (initial
and boundary) conditions.
The equivalently normalized delta-f set of equations is:
∂̂_t N̂_e + {ϕ̂, N̂_e }
= λ_c Λ̂_c + λ_B Λ̂_Be + λ_S
Λ̂_Se
∂̂_t N̂_i + {ϕ̂_i, N̂_i }
= λ_B Λ̂_Bi + λ_S Λ̂_S i
1/τ_i( Γ_0i -1 ) ϕ̂= σ̂
The (modified) delta-f HW term is Λ̂_ce = α̂[ (ϕ̂- ⟨ϕ̂⟩ ) - ( N̂_e - ⟨N̂_e ⟩ ) ].
Sheath coupling is described by Λ̂_Se = γ_D [(1+Λ_D) Ñ_e- ϕ̂]
and Λ̂_Si = γ_D Ñ_e.
The curvature terms are Λ̂_Bs = K (ϕ̂_s) + τ_sK (N̂_s).
Hyperviscosity equivalently now acts on N̂_s.
In σ̂= N̂_e ( x) - Z_i Γ_1iN̂_i (
x) the gyrocenter densities N̂_s = Ñ_s / N_0 here denote the
fluctuating components only. Equivalently to the full-f version,
the radial density profile is also included into the initialization of the
densities and evolved accordingly, and is not decoupled by a gradient
parameter, so that the same (not periodic) boundary conditions on the x
domain apply as for the full-f case described above. This leads to additional
boundary damping in contrast to doubly periodic (physical) domains, so that
the results can only be qualitatively compared with usual fluid HW code results.
The solution algorithm is basically the same as for the full-f model described
above. Steps # 4-6 are reduced to evaluating the delta-f (full-k) polarization
eq. (<ref>) in k space with the Padé approximation of Γ_0i through ϕ̂_k = - [ (1 + τ_i k̂^2)/k̂^2 ] σ̂_k. The
long-wavelength (low-k) form of this delta-f polarization is optionally obtained by
setting the term τ_i k̂^2 ≡ 0 herein. The gyrofluid ion
potential (step # 7) is simply ϕ̂_i = Γ_1iϕ̂.
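A minimal periodic-box sketch of this inversion, with the k=0 mode fixed to give a zero-mean potential, is:

import numpy as np

def solve_deltaf_polarization(sigma, tau_i, hx, hy):
    # phi_k = -(1 + tau_i k^2)/k^2 * sigma_k on a periodic grid (axis 0 = x, axis 1 = y)
    nx, ny = sigma.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=hx)[:, None]
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=hy)[None, :]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                      # placeholder to avoid division by zero
    phi_k = -(1.0 + tau_i * k2) / k2 * np.fft.fft2(sigma)
    phi_k[0, 0] = 0.0                   # k = 0 mode: zero-mean potential
    return np.real(np.fft.ifft2(phi_k))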
For diagnostics, the delta-f total thermal free energy is evaluated as
E_T = (1/S) ∫ d x [ ( N̂_e -N̂_0 )^2 + τ_i (N̂_i -
N̂_0)^2] and kinetic energy as E_K = (1/2S) ∫ d x (∇̂_⊥ϕ̂_i)^2, while the transport and other output quantities
remain the same as for the full-f model.
§ TEST OF GENERALIZED POISSON SOLVERS
In step # 5 of the full-f TIFF algorithm, the polarization equation in the
form of a generalized 2d Poisson problem ∇·( ε∇ϕ ) = σ
has to be solved for the unknown ϕ. Equivalent problems arise in other
physical scenarios, such as for spatially variable dielectric with
permittivity ε ( x) and a given (negative) space charge
distribution σ ( x). Iterative
solution algorithms are routinely applied on such problems, but in most
applications the inhomogeneity in ε is usually small, whereas it
can vary strongly in our case of a turbulent edge plasma. Iterative
solvers may converge slowly in such situations, if at all.
The present TIFF code implementation includes a choice between an iterative preconditioned
conjugate gradient (PCG) solver, an iterative red-black successive
over-relaxation (SOR) solver, and a novel “dynamically corrected” Fourier
(DCF) solver. The solution algorithms are described in detail in the
appendix. Other solvers are of course possible, perhaps faster, and may be further implemented
and tested in future. The full-f gyrofluid FELTOR code for example also includes a
discontinuous Galerkin method or a multi-grid scheme, in addition to a
(different) conjugate gradient scheme <cit.>.
Here the PCG, SOR and DCF solvers in TIFF are compared by method of a constructed
solution as a component test, and within the full code setup by
cross-verification with recent “blob” dynamics results from the FELTOR code.
The main aspect here lies on (cross) verification and determination of
accuracy of the new DCF scheme, which is suggested as a stable and
(predictably) efficient solution method for application on the
generalized Poisson problem of the full-f polarization equation, that is
applied in each (small) time step of a dynamically evolving turbulence code.
The dynamical context is important, because the underlying “Teague method”,
introduced in a different context for approximate solution of a generalized
Poisson-type problem, is in itself not very accurate. Here however an efficient dynamical
(or recursive) correction based on the solution from the previous time step is newly applied
to the “Teague method”, which is shown to improve its accuracy by two
orders of magnitude, and thereby can achieve similar accuracy as iterative
solvers with a “reasonable” (affordable) number of iterations in a turbulence code.
§.§ Teague's original method
In optics, the “transport of intensity equation” (TIE) is an approximate relation for the
intensity and the wave phase of a coherent beam in an optical field
<cit.>. The underlying mathematical form of the TIE is
basically equivalent to the 2d generalized Poisson problem: ∇·( I(
x) ∇ψ( x) ) = -k ∂_z I( x), where I is the
known (measured) planar intensity distribution at a distance z, k the wave number, and
ψ the sought phase of the wave. Various solution methods have been
used on the TIE in the field of applied optics <cit.>, but a now widely used
approximate Fourier method has been suggested by Teague in 1983
<cit.> by introducing what is now commonly known as Teague's auxiliary function.
In the following this original “Teague's method” is described in terms of
the notation introduced in the previous sections in context of the gyrofluid
polarization equation (and not in the TIE notations used in optics). Starting
with
∇· [ ε ( x) ∇ϕ ( x) ] = σ ( x),
an auxiliary 2d scalar function p( x) is introduced that is supposed to satisfy
ε ( x) ∇ϕ ( x) ≡∇ p ( x).
This reduces the generalized Poisson problem to an ordinary 2d Poisson equation
∇^2 p ( x) = σ ( x)
which can, for example, be efficiently inverted and solved by Fourier transformation into k space
(and back-transformation of the solution) as p( k) = - (1/k^2) σ
( k), or by any other fast Poisson solver.
The defining auxiliary relation eq. (<ref>) can be re-written as
∇ϕ ( x) = [ ∇ p ( x) ] / ε ( x),
on which the divergence operator is applied on both sides to obtain:
∇^2 ϕ ( x) = ∇·( 1/ε ( x) ∇ p ( x) ).
The right hand side contains the (by now) known quantities ε (
x) and p ( x), and can be evaluated by standard fourth order 2d
centered finite differencing in x and y. Eq. (<ref>) can
therefore again be solved (e.g. in k space) to obtain ϕ ( x). When Fourier transforms are used, the
method involves four evaluations (two forward and two backward) of the 2d
transform F, which can be formally expressed (compare
ref. <cit.> for the TIE version) as:
ϕ ( x) = F^-1 [ - 1/ k^2 F ∇·( 1/ε∇ F^-1 ( - 1/ k^2 Fσ) ) ].
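A minimal doubly periodic sketch of this procedure, using simple second-order centered differences for the middle divergence term (whereas TIFF uses fourth-order stencils and mirror padding in x), is:

# Uncorrected Teague method: two periodic Poisson inversions in k space around
# a finite-difference evaluation of div[(1/eps) grad p]; axis 0 = x, axis 1 = y.
import numpy as np

def _inv_laplace_periodic(rhs, hx, hy):
    nx, ny = rhs.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=hx)[:, None]
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=hy)[None, :]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0
    out_k = -np.fft.fft2(rhs) / k2
    out_k[0, 0] = 0.0                      # zero-mean solution
    return np.real(np.fft.ifft2(out_k))

def _ddx(f, h, axis):
    return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2.0 * h)

def teague_solve(eps, sigma, hx, hy):
    p = _inv_laplace_periodic(sigma, hx, hy)               # solve lap p = sigma
    gx, gy = _ddx(p, hx, 0) / eps, _ddx(p, hy, 1) / eps    # (1/eps) grad p
    rhs = _ddx(gx, hx, 0) + _ddx(gy, hy, 1)                # div[(1/eps) grad p]
    return _inv_laplace_periodic(rhs, hx, hy)              # solve lap phi = rhs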
Apparently the accuracy of Teague's method is considered mostly sufficient for solution
of the TIE in optics, as it is widely applied. However, it has of course been noted that
the introduction of the auxiliary function p in eq. (<ref>) is in
general mathematically incomplete, and this approximation may introduce an
unspecified error into the solution <cit.>. The definition of eq. (<ref>)
would actually hold exactly for a truly conservative nature of the vector
field P ( x) ≡ε ( x) ∇ϕ ( x), but this
condition of irrotationality is in general not ensured, neither in the TIE of
optics, nor in the polarization of electrostatics.
Rather, the complete Helmholtz decomposition of the vector field P( x) is:
P = ε∇ϕ≡∇ p + ∇× H
with a scalar potential p( x) and a vector potential H( x).
This general form provides the same solution path for p( x) as above, since still ∇· P = ∇^2 p = σ holds. The next step, generalizing the result
of eq. (<ref>), now gives:
∇^2 ϕ ( x) = ∇·( 1/ε ( x) ∇ p ( x) ) + ∇·( 1/ε ( x) ∇× H ( x) )
= ∇·( 1/ε ( x) ∇ p ( x) ) + {1/ε, η}
with ∇· [ (1/ε) ∇× H ] = (∇× H) ·∇ (1/ε) ={ (1/ε), η}, where the 2d
Poisson bracket notation (defined above) is used, and η is the z
component of (unknown) H(x,y) = η (x,y) e_z.
To find a constraint on η, one can apply the curl operator on both sides,
instead of the divergence operator (∇·∇ϕ = ∇· ...) corresponding to eq. (<ref>). This gives the
condition ∇×∇ϕ = ∇× (1/ε)
∇ p + ∇× [ (1/ε) ∇× H] = 0,
resulting in:
∇·( 1/ε∇η) = {1/ε, p }.
This determining relation for the unknown η however again has the form of a
generalized Poisson equation, which sends us back to start...
Another constraint can be generated by taking the curl on P, which
gives the relation × ( εϕ ) = ×× H. This can be rephrased as:
^2 η = {ϕ, ε}.
This first shows that a sufficient condition for the accuracy of Teague's
method would be that {ϕ, ε} = ϕ×ε≡ 0, which holds if the isocontours of ϕ
and ε were aligned everywhere <cit.>. The relation can
also be used to derive constraints for the error norm <cit.>.
If applied to gyrofluid simulations, the error is surely not negligible as
{ϕ, ε}∼{ϕ, N_i } for constant magnetic field, and thus
directly related to the advective ion nonlinearity.
§.§ Iterative and dynamical corrections
Iterative methods have been suggested to correct the error by the original
Teague approximation. In ref. <cit.> a Picard-type iteration is
applied, that uses an initial solution ϕ_0 by Teague's method
(without need to refer to the vector potential) to compute a corresponding source
term σ_0 ≡∇·( ε∇ϕ_0 ), then use
Δσ = σ_0 - σ in another turn of application of Teague's
method to obtain a correction Δϕ, and repeat until Δϕ is
smaller than a specified error.
The set of equations (<ref>) and (<ref>) allows
a further possibility for an iterative approach. A first approximation for
ϕ^(0) is obtained by setting η^(0) = 0 in eq. (<ref>),
which corresponds to the original Teague approximation; then compute an
approximate η^(1) from exact solution of eq. (<ref>) using
this approximate ϕ^(0), and with that obtain an updated ϕ^(1)
from solution of eq. (<ref>); and iterate further until a desired error
bound is reached.
In the following, this recursive correction (here denoted as “RCF method”)
based on eqs. (<ref>) and (<ref>) will be tested by
means of a constructed solution. However, any such iterative methods involve
multiple evaluations of Fourier transforms (or other fast conventional Poisson
solvers) per iteration step, and can become expensive when the iterative
evaluation has to be carried out in every of very many time steps in a
dynamical turbulence simulation.
It will be shown that already after one or two iterations a high accuracy is
reached. This motivates another possible non-iterative correction to Teague's
method in the context of a dynamical simulation with time evolution of
ϕ(t) in the presence of small time steps.
In simulations of fully developed turbulence, and in particular when an
explicit finite difference scheme is used like in the present code, the time
step Δ t, as it appears in eq. (<ref>), is small, and so
accordingly is the difference between successive solutions ϕ^(t) and ϕ^(t-1).
The idea is to therefore evaluate
η_o ( x) ≡∇^-2{ϕ^old, ε}
with an “old” solution from previous time steps in a dynamical simulation,
and with this approximation compute
ϕ ( x) = ∇^-2[ ∇·( 1/ε∇ ( ∇^-2σ ) ) + {1/ε, η_o
}].
In contrast to iterative schemes applied within each time step, this reduces
the additional expense for the correction to only one (for example FFT based)
inversion of the conventional Poisson problem, and (comparatively cheap)
computation of another Poisson bracket.
In the very first time step one can simply use η_o = 0 and thus obtain an
uncorrected approximate solution for ϕ by the conventional Teague method.
It is feasible to simply use ϕ^old≡ϕ^(t-1) of the previous time
step. In the present code an extrapolation is applied as:
ϕ^old≡ϕ^(t-1) + a · (ϕ^(t-1) - ϕ^(t-2) )
with a free estimator factor a ∈ (0, 1). Because this extension of
Teague's method uses previous results of a time evolving simulation, and
evaluates the multiple occuring inversions of the Poisson problem with Fourier
solvers in k space, it is in the following abbreviated as the “DCF”
(dynamically corrected Fourier) method. The accuracy however will here depend on
the size of the (small) dynamical time step.
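A sketch of one DCF solve, again on a doubly periodic domain with second-order centered differences for brevity, is given below; phi_old stands for the extrapolated potential from the previous time steps, and passing None reduces the call to the uncorrected Teague solution of the very first step:

import numpy as np

def _inv_laplace(rhs, hx, hy):
    nx, ny = rhs.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=hx)[:, None]
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=hy)[None, :]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0
    out_k = -np.fft.fft2(rhs) / k2
    out_k[0, 0] = 0.0
    return np.real(np.fft.ifft2(out_k))

def _ddx(f, h, axis):
    return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2.0 * h)

def _bracket(a, b, hx, hy):   # {a, b} = da/dx db/dy - da/dy db/dx
    return _ddx(a, hx, 0) * _ddx(b, hy, 1) - _ddx(a, hy, 1) * _ddx(b, hx, 0)

def dcf_solve(eps, sigma, hx, hy, phi_old=None):
    # eta_o = lap^-1 {phi_old, eps};  phi = lap^-1 [ div((1/eps) grad p) + {1/eps, eta_o} ]
    p = _inv_laplace(sigma, hx, hy)
    gx, gy = _ddx(p, hx, 0) / eps, _ddx(p, hy, 1) / eps
    rhs = _ddx(gx, hx, 0) + _ddx(gy, hy, 1)
    if phi_old is not None:
        eta_o = _inv_laplace(_bracket(phi_old, eps, hx, hy), hx, hy)
        rhs = rhs + _bracket(1.0 / eps, eta_o, hx, hy)
    return _inv_laplace(rhs, hx, hy)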
§.§ Unit test of generalized Poisson solvers
The specific implementation of the PCG, SOR and DCF solvers as used in the
TIFF code is in detail described in the Appendix. The code can be run with a
unit testing option, by setting a flag in the input parameter file, that
initialises analytical constructed functions ε_c and σ_c and
only calls the specified Poisson solver once, so that the numerical solutions
for ϕ can be directly compared with the analytical function ϕ_c.
The main purpose of this test is to determine the applicability and accuracy of the novel dynamically
corrected Fourier (DCF) solver in comparison to the established SOR and PCG methods.
As constructed solutions, here ϕ_c ≡ϕ_c0sin ( x k_x ) sin ( y k_y ) and
ε_c ≡ 1 - g x + a sin (x k_n) sin (y k_n) are exemplarily specified,
and the corresponding σ_c = ∇·( ε_c ∇ϕ_c ) =
ε_c (∂_x^2 ϕ_c + ∂_y^2 ϕ_c ) + (∂_x
ε_c )(∂_x ϕ_c ) + (∂_y ε_c )(∂_y ϕ_c )
is also given analytically. In the following test, ϕ_c0 = 1, a=0.2, g
= 0 or 1, k_x = 2 (2 π/L_X), k_y = 3 (2 π / L_y) and k_n = 4 (2
π /L_x) are set.
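The construction of these test fields, together with the L_2 error norm used below, can be sketched as follows; writing the linear background in terms of x/L_x is an assumption of this sketch about the normalization of the constructed ε_c:

import numpy as np

def constructed_fields(nx, ny, Lx, Ly, g=0.0, a=0.2):
    x = np.linspace(0.0, Lx, nx, endpoint=False)[:, None]
    y = np.linspace(0.0, Ly, ny, endpoint=False)[None, :]
    kx, ky, kn = 2 * (2 * np.pi / Lx), 3 * (2 * np.pi / Ly), 4 * (2 * np.pi / Lx)
    phi = np.sin(kx * x) * np.sin(ky * y)
    eps = 1.0 - g * x / Lx + a * np.sin(kn * x) * np.sin(kn * y)
    dphix = kx * np.cos(kx * x) * np.sin(ky * y)
    dphiy = ky * np.sin(kx * x) * np.cos(ky * y)
    depsx = -g / Lx + a * kn * np.cos(kn * x) * np.sin(kn * y)
    depsy = a * kn * np.sin(kn * x) * np.cos(kn * y)
    sigma = -eps * (kx**2 + ky**2) * phi + depsx * dphix + depsy * dphiy
    return phi, eps, sigma

def l2_error(phi_num, phi_exact):
    return np.sqrt(np.mean((phi_num - phi_exact) ** 2))

Any of the generalized Poisson solvers can then be benchmarked by feeding it (ε_c, σ_c) and comparing its output against ϕ_c with the L_2 norm.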
The constructed and numerical solutions are shown in Fig. <ref>
(left) as a function of x at y_0 = (L_y/2 -5). The differences Δϕ(x,y) between constructed and numerical solutions can be visualised and
compared by their global L_2 norms as a function of resolution or iterations.
In Fig. <ref> (right) the global error err = ||Δϕ||_2
is generally large for g=1 when a background gradient in ε_c
(x) (solid lines) is present and discontinuities in the derivatives occur at
the x boundaries. For comparison, also results for the above constructed
solution but with g=0 (constant background, periodic ε_c (x),
dashed lines) is shown, with reduced more “ideal” errors. In practice, the
g=0 case would for example correspond to a “seeded blob” simulation
scenario, whereas the g=1 case would be relevant to gradient driven drift wave turbulence scenarios.
The error err is shown as a function of the number of iterations n_iter
for different solvers, on a square equidistant grid with N = N_x = N_y =
256. For reference the two horizontal bold lines denote the error for g=1 of the
recursively corrected Fourier (RCF) method after one recursion (grey line) and
after two recursions (brown line). The dynamically corrected Fourier solver
(DCF) uses recursion by means of the value ϕ_0 from the last dynamical
time step and therefore can not be quantified with a unit test, but one might
expect an error of the DCF solver in between the RCF(1) and RCF(2)
solvers. For higher recursion numbers (≥ 4) the error of the RCF solver
approaches the error of the high n_iter limits of the PCG (blue, solid
line) and PSD (green, solid line) solvers, as all of those schemes employ
Fourier methods and fourth order finite differencing. (The pre-conditioned
steepest descent (PSD) solver is described in the Appendix in context of the
PCG solver.)
The error (for g=1) of the PCG scheme is after n_iter = 5 iterations
comparable to the RCF(1) scheme. The computing times of RCF and PCG are here for
this PCG iteration number also similar.
The following table depicts the L_2 error and order for the g=0 test case
above, given with the RCF(4) scheme with four recursive iterations as a
function of grid resolution h ∼ N_0/N. The order is calculated as O =
log( ||Δϕ||_2^(h) / ||Δϕ||_2^(h/2) ) / log( 2/1 ). A
higher number of iterations does not give further improvement in this case.
The table shows the expected scaling with fourth order accuracy, which
corresponds to the order of the finite difference schemes used in the evaluation of the
right hand side terms in eq. (<ref>).
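The quoted order values can be reproduced from the measured errors at successively doubled resolution with a few lines of post-processing; the following Python snippet is only a sketch, with the error values assumed to come from the unit test runs.

import numpy as np

def convergence_order(errors):
    # O = log( err_h / err_{h/2} ) / log(2) between successive grid refinements
    e = np.asarray(errors, dtype=float)
    return np.log(e[:-1] / e[1:]) / np.log(2.0)

# example usage with hypothetical names: convergence_order([err_N, err_2N, err_4N])
# is expected to return values close to 4 for the RCF(4) scheme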
The second order accurate SOR scheme (orange lines in Fig. <ref>)
has much slower convergence and does for
this resolution never reach the accuracy of the (fourth order) RCF and PCG
schemes. Usually several hundred SOR iterations are required for acceptable
results in the unit test, where the initial guess of the iteration is ϕ (n_iter=0) ≡ 0. The convergence (for a given error tolerance)
however improves drastically within a dynamical simulation, when the initial
ϕ(n_iter=0) of the iteration is projected from the result ϕ(t-1) of the
previous time step.
In Fig. <ref> the error is shown as a function of the resolution
in terms of the number of grid points N_x = N_y, with otherwise same
parameters as above. The case of periodic density (g=0) is shown in the left
figure: for a large number of iterations (n_iter=50) the PCG scheme (bold blue
line) follows the expected fourth order dependence on resolution, whereas the
SOR scheme (bold orange line, n_iter=1500) shows the second order
dependence err ∼ N_x^-2 (i.e., a slope of -2 in the log-log plot). The
“ideal” RCF scheme (bold black line) here denotes a correction
term calculated once from the constructed solution ϕ_c instead of
recursively, which also shows a fourth order (N_x^-4) slope, as expected.
The gradually thicker black lines (from top to bottom) show, topmost, the error of the
original “Teague method” without correction, which is for usual resolutions
always in the range of 2 · 10^-3 for g=0 (left figure). The RCF(1)
scheme uses one recursion of the solver in the unit test problem, and already
gives an improvement of the error by around two orders of magnitude. The
RCF(2) scheme with two recursions follows the “ideal” dependence until
around N_x = 256 and for higher resolution saturates at an error of around
2· 10^-7.
The gradually thicker blue lines (from top to bottom) show the error by the PCG scheme after 4,
5 and 50 iterations. The n_iter=5 case for PCG has for high resolution an
error in between the results from the RCF(1) and RCF(2) schemes.
The right Fig. <ref> shows the corresponding results for g=1
in the presence of a background gradient, which in the present implementation
of the solvers in TIFF overall reduces the achievable accuracy.
The general behaviour of the different solvers is similar to the g=0
results, but no “ideal” scaling of the error (-4 power law) with resolution is obtained any more.
The RCF(1) and PCG(4) schemes also show similar errors for medium to large resolution.
The dynamically corrected Fourier (DCF) scheme needs three evaluations of the
standard Poisson problem, which is here presently achieved by a fast Fourier
solver. In a dynamical simulation the error correction then is calculated from
the previous time step solution. The PCG scheme needs one evaluation of a
standard Poisson inversion (e.g. by Fourier solver) per iteration step, so
that PCG (n=3-4) has approximately the same computational cost as a DCF
evaluation (which in addition needs another call of a Poisson bracket evaluation).
The necessary number of iterations in PCG or SOR to reach a given accuracy can
strongly depend on the complexity of the problem, basically determined by the
degree of variability of ε (x,y,t). The computational expense of
the DCF method only depends on the size of the grid. The DCF scheme
therefore presents itself as a viable alternative method (for moderate accuracy) with
predictable run times.
§.§ Cross-verification of full-f full-k blob simulation
The unit testing of the PCG, DCF and SOR solvers above has been applied on the
“pure” generalized Poisson equation ∇·ε∇ϕ = σ. The implementation in the TIFF code for solution of the full-f
full-k gyrofluid polarization eq. (<ref>) requires for τ_i ≠
0 two additional applications of (here Fourier based) solvers for the
√(Γ_0) operator, and further (also Fourier based) evaluations of
the gyro-averaging operator Γ_1 N_i in σ, and of Γ_1ϕ in the ion gyrocenter density advection.
As a further test, an overall cross-verification of the code is intended by
running a seeded “blob” simulation for parameters that correspond to a
recent study with the full-f full-k gyrofluid isothermal version of the
FELTOR code, described by Held and Wiesenberger in ref. <cit.> and
shown in Figures 4-7 therein.
The simulation plasma input parameters for this case are: ion to electron
temperature ratio τ_i = 4, magnetic curvature κ = 1.5 · 10^-4,
normalization scale δ = 1, non-adiabaticity α̂= 0, and
absence of a sheath (Λ̂_Ss = 0). The square domain is L_x = L_y =
200 ρ_s with resolution N_X = N_y = 1024.
The initial perturbation is a Gaussian “blob” in electron density of width
w = 5 ρ_s and peak amplitude ΔN̂ = 1 centered at x_0 = 25
and y_0 = 100, on a constant background with N̂_0 = 1.
The initial perturbed ion gyrocenter density is either set
equal to the electron density (which introduces an initial blob spin), or for
a “vorticity free” initialization to N̂_i = Γ^-1 N_e.
The x boundary density values are pinned in a small zone
L_β = 2 ρ_s.
The hyperviscosity is set to ν_4 = 10^-5, and in accordance to
ref. <cit.> an additional physical viscosity ν_2 = 3 · 10^-5
is applied.
The time step is set to Δt̂ = 0.1 and run for I_max = 23040
steps with diagnostic outputs at every I_out = 320 steps.
The iterative error bound for the PCG and SOR solvers was set to err =
10^-3 with 500 iterations maximum.
The ExB vorticity Ω̂(x,y)= ∇^2 ϕ̂ of the evolved
blob at the final time t̂ = 2304 in units of ρ_s/c_s is compared in
Fig. <ref> for simulations with the different PCG, DCF
and SOR solvers used for the polarization. For this plot the zero vorticity
initial conditions is chosen. The blob gradually acquires gyro-induced
FLR spinning and the propagation has characteristic up-down asymmetry.
The three solvers show visually very similar results regarding the
fine structure and global quantities such as the center-of-mass position of
the blob. For comparison the corresponding result obtained with the original
Teague method (without correction) is shown in frame (d). Shape and position
are similar but with clear differences in details. These plots can be compared
with the result of FELTOR simulations of Figure 4 (bottom middle frame) in
ref. <cit.>. Note that the FELTOR blob has initial position x_0^FELTOR = 50,
whereas here it is at x_0^TIFF = 25. A blob front position of x_F^TIFF≈
125 in our results thus corresponds to a position x_F^FELTOR≈ 150 in the
referenced results and figure.
The overall structure and position is similar, but fine details (e.g. the tilt of
the blob head, and structures in the secondary trailing vortices) are clearly
different between the TIFF and FELTOR codes, which employ different solvers
for the Poisson problem and the gyro operators.
Fig. <ref> shows basically the same set-up but with
nonzero vorticity initialisation achieved by setting N̂_i = N̂_e. The initial spin largely compensates the later FLR spin build up, so
that propagation has much straighter radial (“to the right”) direction with
less up-down asymmetry. Again the PCG, DCF and SOR solvers show very close
agreement. Here the result obtained with the original (uncorrected) Teague
method is clearly off and shows a pronounced downward drift. These plots can
again be compared with the corresponding Figure 4 (bottom right) in
ref. <cit.>. The overall structure is again similar, and details in the
vortex trail again differ. The most notable difference is in the blob front
position, where x_F^TIFF≈ 175 in our present results would be
expected to correspond in FELTOR to 200, which is already the position of
the right boundary.
The actual FELTOR blob front position in ref. <cit.> at the same time
appears to be at x_F^FELTOR≈ 190 only. A probable explanation for
this difference between the FELTOR and TIFF code results could be in the
respective handling of the (in this particular case very close) boundary conditions.
The resolutions also differ between the code results, because in
ref. <cit.> FELTOR uses a discontinuous Galerkin method with 300 grid
cells and 5 polynomial coefficients, which would correspond to 1500 grid
points for the TIFF solvers.
Overall the agreement between the codes and between the different solvers used
in TIFF can be regarded as satisfactory. In particular, the new DCF method has
been shown to achieve strong agreement with the other solvers, and in
particular with the also fourth order PCG scheme.
The results shown in Fig. <ref> have been obtained on a
Threadripper PRO 5975WX Linux workstation with 256 GB RAM, running on 32
threads of the CPU.
The run time was 216 min with the SOR solver, also 216 min for the PCG solver,
67 min for the DCF solver, and 60 min for the uncorrected Teague solver.
The times for the zero vorticity runs (Fig. <ref>) were
212 min for the SOR solver, 116 min for the PCG solver, and 68 min for the
DCF solver, and 59 min for the uncorrected Teague solver.
The new DCF solver clearly is most efficient and independent of the complexity
of the density fields. The additional cost for the DCF correction compared to
the original Teague method is around 10-15 %.
Shorter run times could of course be achieved for the iterative solvers by
reducing accuracy through an increased error tolerance setting.
The resolution with 1024^2 grid points could be reduced for
practical application scenarios, which would also (for reasons of CFL
stability) allow respectively larger time steps: half of the grid points per
dimension in 2d thus means roughly 1/8 of the run time (a factor 1/4 from the reduced number of grid points and a further factor 1/2 from the correspondingly doubled time step).
For specific application to study blob dynamics on a homogeneous
background (such as in the verification example above) the code could also be
sped up by using only one, physical domain with periodic boundary conditions,
instead of the more general quarter-wave extended four-fold mirror
domain. This would reduce computation times by another factor of around 4.
§ DRIFT WAVE TURBULENCE SIMULATIONS IN FULL-F AND FULL-K
The paradigmatic Hasegawa-Wakatani (HW) quasi-2d drift wave turbulence fluid
model is in the following extended, as described above, to full-f full-k
isothermal gyrofluid simulations.
The intention here is again on cross-verification between the polarization
solvers. The 2d polarization and gyro operator solvers can be directly
implemented in future 3d field-aligned gyrofluid turbulence codes, as the
perpendicular (to the magnetic field) 2d drift plane can be solved in the same
manner as in a 2d code like here. A 3d (isothermal electrostatic) resistive
drift wave turbulence code would basically replace the approximate HW coupling
term by direct solution of the parallel electron and ion velocities through
an additional set of nonlinear advective dynamical equations.
The morphology of 2d drift wave turbulence in the full-f gyrofluid OHW model
is illustrated in Fig. <ref> as a snapshot in the saturated
turbulent phase, for parameters described below. The left frame shows the
electric potential ϕ(x,y), which acts as a streamfunction for the
advecting turbulent E-cross-B velocity. The right frame shows the vorticity
Ω(x,y) = ∇^2 ϕ with the characteristic thin vorticity sheaths
induced by FLR spin-up <cit.>.
Fig. <ref> shows a radial cut at y=L_y/2 of the electron density
n_e(x), ion gyrocenter density n_i(x), electric potential ϕ(x) and
vorticity Ω(x) at a snapshot in time during the saturated turbulent phase.
Radial boundary conditions are set to zero vorticity and zero zonal flow. The
densities are pinned to the initial background profiles values at the radial
boundaries.
§.§ Comparison between SOR, PCG and DCF solvers
The different implemented TIFF solvers are compared for an OHW turbulence case with
α̂= 0.2, τ_i=1, κ = 0, δ = 0.015,
N̂_L = 1.75, N̂_R = 0.25, L_x=L_y = 96, n_x = n_y = 256,
ν_4 = 0.01, Δ t = 0.0025, and β_x=2. Initialisation is done
with a random bath of amplitude Δ N = 0.05.
Fig. <ref> shows in the left frame the turbulent transport
Q_n(t) for the SOR (thick grey), PCG (medium black) and DCF (thin orange)
solvers, and in the right the corresponding time traces of thermal energy
E_T(t) and kinetic energy E_K(t). All time traces agree closely between
the solvers, both in the initial quasi-linear phase and statistically in the
turbulent phase.
The averages in the time window between 100 ≤ t ≤ 1000 are:
⟨ Q_n ⟩ = 2.50 ± 0.29 (SOR), 2.46 ± 0.26 (PCG), 2.55 ± 0.31 (DCF);
⟨ E_K ⟩ = 2.63 ± 0.21 (SOR), 2.50 ± 0.15 (PCG), 2.61 ± 0.18 (DCF);
⟨ E_T ⟩ = 91.3 ± 6.2 (SOR), 89.6 ± 5.0 (PCG), 89.4 ± 6.1 (DCF).
The (second order accurate) SOR value of the kinetic energy is around 5 % larger compared to
the (fourth order accurate) PCG or DCF results, while the fluctuation
standard deviation of energies for all solvers is also in the order of
5 %. The average values of transport and thermal energy agree very well
within the standard deviation for all three solvers.
The computation times were 201 min (SOR), 134 min (PCG) and 81 min (DCF), here
achieved on 16 threads of a dual Xeon Haswell E5-2687W-v3 workstation.
The (second order) SOR scheme clearly loses in terms of both performance and
accuracy compared to the (fourth order) PCG and DCF schemes, but still can
have its use for testing purposes as a reference generalized Poisson solver
without need for invoking an FFT library.
The new DCF scheme appears to be both sufficiently accurate and efficient
to be considered for further use.
It should be noted that the 2d FFT evaluation is a memory bound application
and profits from high available bandwidth. The relative performance between
the solvers can therefore differ between hardware systems.
§.§ Mass and energy conservation test with DCF scheme
Total mass and total energy should always be ideally conserved by the numerical
scheme employed for the dynamical turbulence simulation.
The change in time of total energy E(t) = E_K + E_T as the sum of the global thermal free
energy and the global kinetic energy (see section <ref>)
can be computed as the nonlinear growth rate R_E = (1/2E)(Δ E /
Δ t), and similarly the relative rate of change of particle number (or
“mass”) is obtained as R_M = (1/M)(Δ M / Δ t) for the total
gyrocenter density as M(t) = (1/2) [ (N̂_e(t) - N̂_0) + (N̂_i(t) - N̂_0)].
These relative energy and mass change rates as a function of time are shown in
Fig. <ref> for the same turbulence simulation parameters as
above, obtained with the DCF scheme. A linear regression for the values in
1000 ≤ t ≤ 5000 gives a relative tendency
⟨ R_E ⟩ (t) ∼ 4 · 10^-6 - 2 · 10^-10 t for the
energy, and ⟨ R_M ⟩ (t) ∼ 10^-8 - 4 ·
10^-10 t for the particle number. Both constitute very small loss rates
and can be regarded as sufficiently good conservation property of the
numerical scheme. The standard deviations of the nonlinear growth rate
fluctuations are s(R_E) = 4 · 10^-3 and s(R_M) = 2 · 10^-4.
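These rates can be obtained from the stored diagnostic time traces with a short post-processing sketch like the following; the array names t, E_K, E_T and M are illustrative and correspond to the quantities defined above.

import numpy as np

def conservation_rates(t, E, M):
    # relative rates R_E = (1/2E) dE/dt and R_M = (1/M) dM/dt from time traces
    dt = np.diff(t)
    R_E = np.diff(E) / dt / (2.0 * E[:-1])
    R_M = np.diff(M) / dt / M[:-1]
    return R_E, R_M

# R_E, R_M = conservation_rates(t, E_K + E_T, M)
# sel = (t[:-1] >= 1000) & (t[:-1] <= 5000)
# slope, offset = np.polyfit(t[:-1][sel], R_E[sel], 1)   # linear trend in the window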
§.§ Full-f model in small amplitude limit vs. delta-f model
The full-f full-k model should agree in the limit of small amplitudes with the
delta-f model, as discussed in sections <ref> and <ref>.
To test this we apply largely the same parameters as above to OHW simulations with the
DCF solver: α̂= 0.2, τ_i =1, κ=0, ν_4 = 0.01, L_x = L_y =
96, n_x=n_y=256, and Δ t = 0.0025. The inner boundary density is N̂_L = 1
+ ϵ· 0.75 and outer boundary density N̂_R = 1 - ϵ· 0.75, with drift scale δ = ρ_s / L_n = ϵ·
0.015. A localized “blob”-like Gaussian perturbation with width w = 8 and
amplitude ΔN̂ = ϵ· 0.1 is initialised.
This parameter set is once run with the delta-f model of section
<ref> for ϵ = 1, and once with the complete full-f model but
for ϵ = 10^-3 in the small amplitude limit.
In Fig. <ref> the time traces of Q_n(t), E_T(t) and E_K(t)
are compared for both cases. They show excellent agreement both directly in the
quasi-linear growth phase, and statistically in the nonlinear turbulent phase
after around t>150.
Averages in the time window between 200 ≤ t ≤ 1000 are:
⟨ Q_n ⟩ = 1.95 ± 0.21 (FF-ϵ) vs. 1.97 ± 0.20 (δf);
⟨ E_K ⟩ = 1.96 ± 0.15 (FF-ϵ) vs. 1.93 ± 0.11 (δf);
⟨ E_T ⟩ = 67.6 ± 4.7 (FF-ϵ) vs. 69.6 ± 3.6 (δf).
This demonstrates the correct transition behaviour of the full-f
full-k polarization equation and the dynamical equations towards the delta-f limit.
The computation time here was about twice as long (81 min) for the full-f
simulation as for the delta-f case (41 min) on the Haswell. The computational expense is
in the evaluation of the Fourier solvers (in the polarization and for the gyro
operators), for which there are eight calls in the full-f DCF case, and four in the delta-f case.
§ CONCLUSIONS AND OUTLOOK
An isothermal quasi-2d gyrofluid model and a code (TIFF) for
arbitrary amplitude (full-f) and arbitrary wavelength (full-k) drift
instabilities and turbulence in magnetized plasmas has been introduced.
A major aspect was the testing of the newly suggested “dynamically
corrected Fourier” (DCF) solver for the generalized Poisson equation, which
is based on the original approximate Teague method. The generalized
(a.k.a. “variable”) Poisson problem appears in the solution of the gyrofluid
(similar to the gyrokinetic) polarization equation, which couples the evolving
gyrocenter plasma densities with the electric potential.
The present approach retains the complete FLR effects and spatial and
dynamical variations of the polarization density. The new DCF solver was shown to
be a viable, sufficiently accurate and efficient method for dynamical
turbulence simulations coupled to the generalized Poisson problem,
which motivates its further use in future extensions of the code.
In its recursive “RCF” form (with for example four iterations) as a high
accuracy extension of Teague's method, the scheme could also find applications
in the field of optics as a stand-alone solver for the
transport of intensity equation (TIE).
The main purpose of the present work is as reference and test case for the
TIFF model and code, whereas detailed physics studies with the present code,
such as for example on FLR effects on zonal flows, or on properties of
turbulently generated “blobby” (intermittent) transport, will be discussed elsewhere.
The extension of the 2d TIFF code to a magnetic field-aligned flux-tube like
3d geometry can directly make use of the here tested polarization solvers, as
these always only act in the locally perpendicular 2d drift plane. Future
developments of such a corresponding 3d full-f full-k gyrofluid edge turbulence code will
include the addition of temperature evolution equations (and through this
applicability on temperature gradient driven modes and use of Landau damping
mechanisms), and generalization to electromagnetic drift-Alfvén dynamics.
§ ACKNOWLEDGMENT
The author thanks Markus Held (UiT / UIBK) and Matthias
Wiesenberger (DTU) for valuable discussions and collaboration.
§ FUNDING
This work was supported by the Austrian Science Fund (FWF) project P33369.
§ DATA AVAILABILITY
The code TIFF is openly available at: https://git.uibk.ac.at/c7441036/tiff
§ APPENDIX A: PCG SCHEME
Conjugate gradient methods for the solution of linear systems are
introduced in numerical textbooks such as by LeVeque <cit.>. Here a
pre-conditioned conjugate gradient (PCG) scheme, with a pre-conditioner and
algorithm suggested by Fisicaro et al. <cit.>, is applied.
A generalized Poisson operator A≡∇·ε∇ is defined so that a solution of the system Aϕ = σ is sought.
The residual of an approximate solution ϕ^(n), say obtained in step
n during an iteration, is r^(n) = σ - Aϕ^(n).
Within the context of a dynamical simulation with small time steps Δ t,
an initial guess can be obtained by extrapolating the converged solutions from
previous time steps towards
ϕ^(0) (t) = ϕ (t-1) + a · [ ϕ (t-1) - ϕ (t-2) ],
with a free estimation factor a ∈ (0, 1), and ϕ^(0) (t=0) = 0 in
the first time step. From this the initial residual r^(0)= σ -
Aϕ^(0) for the iteration is obtained.
The PCG scheme basically searches for the minimum of a (quadratic) residual function
f(ϕ) = (1/2) (ϕ, Aϕ) - (ϕ, σ), which is given
from a 2d Hessian Taylor expansion, by evaluating its (conjugate) gradients
-∇ f(ϕ) = r = σ - Aϕ.
Here (A,B) = A^T B = ∑_i,j A_i,j B_i,j denotes the inner product of
two matrices A and B.
A pre-conditioning matrix q^(0) is evaluated once (in each dynamical
time step) and remains constant throughout the PCG iteration.
The iteration algorithm (cf. sec. 5.3.5 in ref. <cit.>, and table 2 in
ref. <cit.>) proceeds as:
(0) Pre-process q^(0)≡√(ε) ∇^2 √(ε).
(1) v^(n)≡ P^-1 (r^(n)), with a precondition operator
P specified below.
(2) β^(n) = (v^(n), r^(n)) / (v^(n-1),
r^(n-1)), for n ≠ 0.
(3) p^(n) = v^(n) + β^(n) p^(n-1).
(4) w^(n) = A p^(n) = A v^(n) + β^(n)
A p^(n-1) = r^(n) - q^(0) v^(n) + β^(n) w^(n-1).
(5) α^(n) = (v^(n), r^(n)) / (p^(n), w^(n)).
(6) ϕ^(n+1) = ϕ^(n) + α^(n) p^(n).
(7) r^(n+1) = r^(n) - α^(n) w^(n).
(8) Return to step (1) until the residual
||r^(n+1)|| is below a specified limit.
The algorithm allows to efficiently re-use several already calculated terms
and products.
A classical conjugate gradient (CG) solver without preconditioning of
the residual in step (1) would set P = 1 and v^(n) = r^(n) in the algorithm.
The idea is that a preconditioned residual v = P^-1 r =
P^-1σ - P^-1 Aϕ is minimized in the same way
as the original r.
Application of a suitable preconditioner can speed up convergence
significantly, but needs to be chosen carefully. Here the preconditioner
suggested in ref. <cit.> for the generalized Poisson problem is
applied as P (v) ≡√(ε) ∇^2 ( √(ε) v ), and accordingly q^(0)≡√(ε) ∇^2 √(ε) above.
The evaluation of A v = ∇·ε∇ v =
√(ε) ∇^2 ( v √(ε) ) - v √(ε) ∇^2 √(ε) = P (v) - v q^(0) = r - v q^(0) in step
(4) is greatly simplified by this preconditioner <cit.>.
For the inversion in step (1) here v^(n) =
P^-1 (r^(n)) = (1/√(ε)) ∇^-2 ( r^(n) /√(ε) )
needs to be evaluated including the application of a standard fast Poisson solver,
which is here presently achieved by an FFT solver in k space.
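A compact NumPy sketch of the resulting iteration on a doubly periodic grid is given below. It is a simplified illustration rather than the TIFF implementation: second order finite differences are used for A (the code uses fourth order stencils and its own boundary handling), and the product A p^(n) in step (4) is evaluated directly instead of via the re-use of P(v^(n)) = r^(n); all function names are assumptions.

import numpy as np

def div_eps_grad(eps, phi, h):
    # A phi = div(eps grad phi), second order with face-interpolated coefficients
    Wxp = 0.5*(eps + np.roll(eps, -1, 0)); Wxm = 0.5*(eps + np.roll(eps, 1, 0))
    Wyp = 0.5*(eps + np.roll(eps, -1, 1)); Wym = 0.5*(eps + np.roll(eps, 1, 1))
    return (Wxp*(np.roll(phi, -1, 0) - phi) - Wxm*(phi - np.roll(phi, 1, 0)) +
            Wyp*(np.roll(phi, -1, 1) - phi) - Wym*(phi - np.roll(phi, 1, 1))) / h**2

def inv_lap_fft(f, k2):
    # standard Poisson inversion (inverse Laplacian) in k space, zero-mean solution
    fh = np.fft.fft2(f)
    fh[0, 0] = 0.0
    return np.real(np.fft.ifft2(-fh / k2))

def pcg_solve(eps, sigma, h, tol=1e-3, n_max=500, phi0=None):
    # PCG for div(eps grad phi) = sigma, preconditioned by
    # P^-1(r) = (1/sqrt(eps)) inv_lap( r / sqrt(eps) )
    nx, ny = sigma.shape
    kx = 2.0*np.pi*np.fft.fftfreq(nx, d=h)
    ky = 2.0*np.pi*np.fft.fftfreq(ny, d=h)
    k2 = kx[:, None]**2 + ky[None, :]**2
    k2[0, 0] = 1.0
    sq = np.sqrt(eps)
    phi = np.zeros_like(sigma) if phi0 is None else phi0.copy()
    r = sigma - div_eps_grad(eps, phi, h)
    p, vr_old = None, 1.0
    for n in range(n_max):
        v = inv_lap_fft(r / sq, k2) / sq               # step (1)
        vr = np.sum(v * r)
        p = v if n == 0 else v + (vr / vr_old) * p     # steps (2), (3)
        w = div_eps_grad(eps, p, h)                    # step (4), evaluated directly here
        alpha = vr / np.sum(p * w)                     # step (5)
        phi = phi + alpha * p                          # step (6)
        r = r - alpha * w                              # step (7)
        vr_old = vr
        if np.linalg.norm(r) < tol:                    # step (8)
            break
    return phi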
The additional expense of calling a standard Poisson (∇^2 u = f)
solver at each iteration step is paid off by the in general rapid convergence
of this PCG scheme. The default choice for the standard Poisson solver in the
TIFF code is by FFT, but optionally also a (generally slower) SOR scheme is available.
The order of this PCG solver is determined by the order of evaluation of the
Laplacians (in q^(0), or in a non-FFT standard Poisson solver), which is
here achieved in fourth order accuracy.
The algorithm above could also be converted into a (preconditioned) steepest
descent scheme by setting β^(n) = 0, which serves didactical rather than
practical purposes because of its generally slower convergence.
§ APPENDIX B: SOR SCHEME
Successive over-relaxation (SOR) is an established method for
iterative solution of elliptic equations, such as the generalized Poisson
equation ∇·ε∇ϕ = σ of electrostatics,
through an appropriate finite difference discretization. Here a basic
algorithm and second order discretization as outlined in the textbooks by
LeVeque <cit.> and Humphries <cit.> is followed.
In one dimension, the inner term ε∂_x ϕ≈ W Δϕ / Δ x is first discretized with interpolated coefficients W =
(ε_i + ε_i-1)/2, and Δϕ = ϕ_i -
ϕ_i-1. In the following an equidistant rectangular grid with
Δ x = Δ y ≡ h is assumed. Applying likewise the outer
derivative, one gets W_i+1 (ϕ_i+1 - ϕ_i) - W_i-1 (ϕ_i -
ϕ_i-1) = h^2 σ, with coefficients W_i+1 = (ε_i+1 +
ε_i)/2 and W_i-1 = (ε_i + ε_i-1)/2,
which can be re-arranged to define ϕ_i at grid node x_i.
In the same manner, the 2d relation for ϕ_i,j in 2nd order 5pt-stencil is obtained as
ϕ_i,j = 1/W_0( W_i+1,jϕ_i+1,j + W_i-1,jϕ_i-1,j + W_i,j+1ϕ_i,j+1 + W_i,j-1ϕ_i,j-1 - h^2 σ_i,j)
with W_0 = W_i+1,j + W_i-1,j + W_i,j+1 + W_i,j-1, and
W_i+1,j = (ε_i+1,j + ε_i,j)/2,
W_i-1,j = (ε_i-1,j + ε_i,j)/2,
W_i,j+1 = (ε_i,j+1 + ε_i,j)/2,
W_i,j-1 = (ε_i,j-1 + ε_i,j)/2.
Starting with an initial guess ϕ^0( x), the relation (<ref>)
can be iterated, with grid values of ϕ_i,j^(n) applied on the r.h.s. in
order to compute the updated ϕ_i,j^(n+1) on the left, until the
residual error norm
|| R^(n+1) || with R^(n+1) = ϕ^(n+1) - ϕ^(n) is smaller than a specified limit.
The SOR method <cit.> improves convergence by applying a correction factor ω as
ϕ^(n+1) = ϕ^(n) + ω R^(n).
The grid array is swept in odd (“red”) - even (“black”) order, as in
eq. (<ref>) the values of ϕ_i,j^(n) at grid points with even i+j depend only on
grid values of ϕ^(n-1) at points with odd i+j, and vice versa.
The over-relaxation parameter ω is determined by Chebyshev acceleration
according to ω_odd^0 = 1 and ω_even^0 = 1/(1-r^2/2) as
initial values, and further ω = 1/(1-r^2 ω/4), updated in each half-sweep.
Here the spectral radius is calculated as r = [ cos(π / n_x) + cos(π /
n_y)]/2 for a homogeneous equidistant grid with grid point numbers n_x and n_y.
Each “red” and “black” sweep through the 2d checkerboard-like grid is loop
parallelized with OpenMP, respectively.
Convergence can be significantly accelerated by an initial guess for ϕ^0(
x) based on the solution from previous time step(s) within a dynamical
simulation for small time steps Δ t, by extrapolating (in similar
spirit as for the over-relaxation factor above)
ϕ^0≡ϕ(t-1) + a · (ϕ(t-1) - ϕ(t-2) ).
with a free prediction factor a ∈ (0, 1). The first time step uses
ϕ^0 = 0, in the second step a=0, and in later times a factor
between 0.5 and 1.0 has been found to be most efficient.
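A minimal NumPy sketch of such a red-black SOR iteration with Chebyshev acceleration is given below; it assumes periodic boundaries for brevity (the TIFF code applies its own boundary conditions and pinned boundary values), and the function name is illustrative.

import numpy as np

def sor_solve(eps, sigma, h, tol=1e-3, n_max=2000, phi0=None):
    # red-black SOR with Chebyshev acceleration for div(eps grad phi) = sigma
    nx, ny = sigma.shape
    phi = np.zeros_like(sigma) if phi0 is None else phi0.copy()
    # face-interpolated coefficients W and their sum W0
    Wxp = 0.5*(eps + np.roll(eps, -1, 0)); Wxm = 0.5*(eps + np.roll(eps, 1, 0))
    Wyp = 0.5*(eps + np.roll(eps, -1, 1)); Wym = 0.5*(eps + np.roll(eps, 1, 1))
    W0 = Wxp + Wxm + Wyp + Wym
    ii, jj = np.indices((nx, ny))
    red, black = (ii + jj) % 2 == 0, (ii + jj) % 2 == 1
    rsp = 0.5*(np.cos(np.pi/nx) + np.cos(np.pi/ny))    # spectral radius estimate
    omega = 1.0                                        # first ("red") half sweep
    for n in range(n_max):
        change = 0.0
        for color in (red, black):
            gs = (Wxp*np.roll(phi, -1, 0) + Wxm*np.roll(phi, 1, 0) +
                  Wyp*np.roll(phi, -1, 1) + Wym*np.roll(phi, 1, 1) - h*h*sigma) / W0
            dphi = np.where(color, omega*(gs - phi), 0.0)
            phi = phi + dphi
            change += np.sum(dphi**2)
            if n == 0 and color is red:
                omega = 1.0/(1.0 - 0.5*rsp**2)         # first "black" half sweep
            else:
                omega = 1.0/(1.0 - 0.25*rsp**2*omega)  # Chebyshev update per half sweep
        if np.sqrt(change) < tol:
            break
    return phi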
The rate of convergence depends on the degree of (non)uniformity of
ε( x). In classical electrostatics the permittivity is
usually only weakly varying within one medium, but may have discontinuities
between neighbouring media, and is often set in complicated geometries, so
that mostly rather finite element schemes instead of finite
difference schemes are employed. In the present application on the gyrofluid
polarization equation in a small local section of an edge plasma, the grid can
be simply chosen as rectangular, but the polarization density can be strongly inhomogeneous for
large amplitude turbulent fluctuations.
The present SOR implementation is only second order accurate, but straightforward
to implement and can serve as a reference for the fourth order PCG and DCF
schemes.
For consistency, all dynamical simulations shown in this publication that have
been obtained with the second order SOR scheme also use the second order
versions of the Arakawa scheme (for the advecting Poisson brackets) and of the
curvature operator. For all other generalized Poisson solvers (DCF, PCG) the
respective consistent fourth order versions are used.
For evaluation of the standard Poisson problem (∇^2 u = f) also a
fourth order accurate SOR scheme is available in the TIFF code, which uses
discretization by a Collatz Mehrstellenverfahren (9pt stencil for the Laplacian
on u, and a 5pt stencil correction for f).
§ APPENDIX C: DCF SCHEME
The here introduced “dynamically corrected Fourier” (DCF) approach for the
generalized Poisson problem adds a correction term, computed from the result
of the previous time step within a dynamical simulation, to “Teague's method”
(compare main text above).
Teague's approximate method <cit.> had originally been devised for
solution of the TIE (transport of intensity equation) in optics <cit.>,
and can be efficiently evaluated in Fourier space <cit.>.
It is here innovatively applied to the solution of the electric potential ϕ
( x) from the gyrofluid (or gyrokinetic) polarization equation, and
within a dynamical context. For small time step sizes Δ t this allows
to re-use solutions of ϕ^old from past times, instead of a (more
expensive) iterative error correction. An extrapolated prediction with a free
estimation parameter a ∈ (0, 1) based
on two previous solutions is used for the corrector, as
ϕ^old≡ϕ^(t-1) + a · (ϕ^(t-1) - ϕ^(t-2) ).
The generalized Poisson equation ∇·ε∇ϕ = σ
is formulated by means of a polarization density vector field P≡ε∇ϕ≡∇ p + ∇× H in terms of a scalar
potential p and a vector potential H = η e_z.
Input quantities are the known 2d fields σ ( x) and ε ( x).
By application of the divergence on ∇· P = σ
the scalar field p is first obtained from:
p = ∇^-2σ.
The inversion of the Laplacian in the standard Poisson problem is here
obtained in k space by Fourier transforms, using the FFTW3 library, as
p = - F^-1 k^-2 Fσ.
Further, an approximation for the vector potential component η is obtained (see main text above) from ∇× P as:
η_o ( x) ≡∇^-2{ϕ^old, ε}.
This estimated quantity is used in the extended, dynamically corrected relation:
ϕ ( x) = ∇^-2[ ∇·( 1/ε∇ p ) + {1/ε, η_o}].
Like above, the inverted Laplacians in eqs. (<ref>) and
(<ref>) can be efficiently solved in Fourier space. The term
∇·( (1/ε) ∇ p ) is evaluated by standard
(fourth order) centered finite differences, and the Poisson bracket can be
computed either also by simple fourth order centered differences, or by
re-using the Arakawa scheme (introduced for the advective terms in the
dynamical gyrocenter density equations).
For a solver unit test (with a constructed solution) the set of equations
(<ref>) and (<ref>) can also be applied recursively
(instead of within a dynamical context) by re-setting ϕ→ϕ^old directly after each iteration step, starting with η_o = 0.
In the main text above this is referred to as “RCF” for recursively corrected
Fourier scheme.
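A minimal NumPy sketch of one such DCF (or, with recursion, RCF) evaluation on a doubly periodic grid may look as follows; it uses second order centered differences and is an illustration of the scheme, not the actual TIFF routines.

import numpy as np

def ddx(f, h): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2.0*h)
def ddy(f, h): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2.0*h)

def bracket(a, b, h):
    # Poisson bracket {a,b} = (da/dx)(db/dy) - (da/dy)(db/dx); centered differences
    # here, while the TIFF code can also re-use the Arakawa scheme for this term
    return ddx(a, h)*ddy(b, h) - ddy(a, h)*ddx(b, h)

def inv_lap(f, k2):
    # inverse Laplacian by FFT, zero-mean solution
    fh = np.fft.fft2(f)
    fh[0, 0] = 0.0
    return np.real(np.fft.ifft2(-fh / k2))

def dcf_solve(eps, sigma, phi_old, h):
    # one DCF solve of div(eps grad phi) = sigma; phi_old = np.zeros_like(sigma)
    # recovers the uncorrected Teague method, re-inserting the result as phi_old
    # gives the recursive RCF variant
    nx, ny = sigma.shape
    kx = 2.0*np.pi*np.fft.fftfreq(nx, d=h)
    ky = 2.0*np.pi*np.fft.fftfreq(ny, d=h)
    k2 = kx[:, None]**2 + ky[None, :]**2
    k2[0, 0] = 1.0
    p = inv_lap(sigma, k2)                             # scalar potential part
    eta_o = inv_lap(bracket(phi_old, eps, h), k2)      # estimated vector potential part
    ieps = 1.0 / eps
    div_term = ddx(ieps*ddx(p, h), h) + ddy(ieps*ddy(p, h), h)
    return inv_lap(div_term + bracket(ieps, eta_o, h), k2)

# dynamical predictor for the corrector: phi_old = phi_tm1 + a*(phi_tm1 - phi_tm2), a in (0, 1)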
The computational bottleneck in this DCF solver for the generalized Poisson
equation is provided by the three forward plus backward 2d
Fourier transforms (in the three ∇^-2 = - F^-1 k^-2
F operations).
To put this in context, note that in addition, Fourier solvers are here presently
also used for evaluation of all gyro-operators:
application of any (PCG, SOR or DCF) solver on the full-k polarization
equation requires two further, here also Fourier based, evaluations of the
√(Γ_0)^-1 operator (in TIFF code algorithm steps #4 and 5),
and in general another in computing N̂_Gi for use in σ in step
#2, one more for the ion potential ϕ_i, and a further one in the
boundary conditions (step #4).
In total eight forward plus backward transforms are employed per time step.
The execution time per Fourier solver application depends only on the size of
the grid arrays (and the chosen transform method or library), which by
extrapolation allows predictable run time estimates.
For cold ion cases (τ_i = 0) without FLR effects the four gyro-operations herein
can be bypassed, and computation time is significantly reduced.
For comparison, the delta-f polarization equation requires only one call of a
Fourier solver routine (regardless of cold or warm ions) for a given σ,
so the execution time for a full-f full-k polarization solve is more than five
times longer compared to a delta-f solution. (When iterative solvers such as
PCG or SOR are used, the execution time could be tweaked by compromising on accuracy.)
§ REFERENCES
Tynan09
G.R. Tynan, A. Fujisawa, and G. McKee.
Plasma Physics and Controlled Fusion 51, 113001 (2009).
[Scott(2021 Vol. 1)]Scott21a
B. Scott,
Turbulence and instabilities in magnetized plasmas: fluid drift
turbulence. Volume 1. IOP Publishing, Bristol (2021).
[Scott(Vol. 2)]Scott21b
B. Scott,
Turbulence and instabilities in magnetized plasmas: gyrokinetic theory
and gyrofluid turbulence. Volume 2. IOP Publishing, Bristol (2021).
Knorr88
G. Knorr, F.R. Hansen, J.P. Lynov, H.L. Pécseli, and J.J. Rasmussen.
Physica Scripta 38, 829 (1988)
Hammett90
G. W. Hammett, W. Dorland, and F. W. Perkins.
Physics of Fluids B 4, 2051 (1992).
Dorland93
W. Dorland and G. Hammett,
Phys. Fluids B 5, 812 (1993).
Beer96
M.A. Beer and G.W. Hammett,
Phys. Plasmas 3, 4046 (1996).
Snyder01
P.B. Snyder and G.W. Hammett,
Physics of Plasmas 8, 3199 (2001).
Scott00
B.D. Scott,
Physics of Plasmas 7, 1845 (2000).
Scott05
B.D. Scott,
Physics of Plasmas 12, 102307 (2005).
Kendl10
A. Kendl, B.D. Scott, and T.T. Ribeiro,
Physics of Plasmas 17, 072302 (2010).
Scott06
B.D. Scott,
Contributions to Plasma Physics 46, 714 (2006).
Scott07
B.D. Scott,
Physics of Plasmas 14, 102318 (2007).
Scott10
B.D. Scott,
Physics of Plasmas 17, 102306 (2010).
Scott10CPP
B.D. Scott, A. Kendl and T. Ribeiro,
Contributions to Plasma Physics 50, 228 (2010).
Strintzi04
D. Strintzi and B. Scott,
Phys. Plasmas 11, 5452 (2004).
Strintzi05
D. Strintzi and B. Scott,
Phys. Plasmas 12, 072301 (2005).
Madsen13
J. Madsen,
Phys. Plasmas 20, 072301 (2013).
Lee83
W.W. Lee,
Physics of Fluids 26, 556 (1983).
Hahm88
T.S. Hahm,
Physics of Fluids 31, 2670 (1988).
Sugama00
H. Sugama,
Physics of Plasmas 7, 466 (2000).
Brizard07
A.J. Brizard and T.S. Hahm,
Reviews of Modern Physics 79, 421 (2007).
Scott10GK
B. Scott and J. Smirnov,
Physics of Plasmas 17, 112302 (2010).
Krommes12
J A. Krommes,
Annual Review of Fluid Mechanics 44, 175 (2012).
Scott16
B. Scott,
Contributions to Plasma Physics 56, 534 (2016).
Endler95
M. Endler, H. Niedermeyer, L. Giannone, E. Kolzhauer, A. Rudyj, G. Theimer and N. Tsois,
Nuclear Fusion 35, 1307 (1995).
Zweben07
S.J. Zweben, J.A. Boedo, O. Grulke, C. Hidalgo, B. LaBombard, R.J. Maqueda, P. Scarin and J.L. Terry,
Plasma Physics and Controlled Fusion 49, S1-S23 (2007).
Heikkinen08
J.A. Heikkinen, S.J. Janhunen, T.P. Kiviniemi, F. Ogando,
Journal of Computational Physics 27, 5582 (2008).
Grandgirard07
V. Grandgirard, Y. Sarazin, P. Angelino, A. Bottino, N. crouseilles,
G. Darmet, G. Dif-Pradalier, X. Garbet, Ph. Gendrih, S. Jolliet, G. Latu,
E. Sonnendrücker and L. Villard,
Plasma Physics and Controlled Fusion 49, B173 (2007).
Ku09
S. Ku, C. S. Chang and P. H. Diamond,
Nuclear Fusion 49, 115021 (2009).
Dorf16
M.A. Dorf, M.R. Dorr, J.A. Hittinger, R.H. Cohen and T.D. Rognlien,
Physics of Plasmas 23, 056102 (2016).
Grandgirard16
V. Grandgirard, J. Abiteboul, J. Bigot, T. Cartier-Michaud, N. Crouseilles,
G. Dif-Pradalier, Ch. Ehrlacher, D. Esteve, X. Garbet, Ph. Gendrih, G. Latu,
M. Mehrenberger, C. Norscini, Ch. Passeron, F. Rozar, Y. Sarazin,
E. Sonnendrücker, A. Strugarek, D. Zarzoso,
Computer Physics Communications 207, 35 (2016).
Idomura08
Y. Idomura, M. Ida, T. Kano, N. Aiba and S. Tokuda,
Computer Physics Communications 179, 391 (2008).
Idomura14
Y. Idomura,
Physics of Plasmas 21, 022517 (2014).
Pan18
Q. Pan, D. Told, E. L. Shi, G. W. Hammett and F. Jenko,
Physics of Plasmas 25, 062303 (2018).
Shi19
E.L. Shi, G.W. Hammett, T. Stoltzfus-Dueck and A. Hakim,
Physics of Plasmas 26, 012307 (2019).
michels21
D. Michels, A. Stegmeir, P. Ulbl, D. Jarema and F. Jenko,
Computer Physics Communications 264, 107986 (2021).
Held17Thesis
M. Held, Full-F gyro-fluid modelling of the tokamak edge and scrape-off
layer, PhD Thesis, Universität Innsbruck (2017),
urn:nbn:at:at-ubi:1-6853, http://diglib.uibk.ac.at/ulbtirolhs/download/pdf/1530595.
FeltorV6
M. Wiesenberger and M. Held,
Feltor V.6.
feltor-dev.github.io, Zenodo, https://doi.org/10.5281/zenodo.6201099.
Wiesenberger19
M. Wiesenberger, L. Einkemmer, M. Held, A. Gutierrez-Milla, X. Sáez, R. Iakymchuk,
Computer Physics Communications 238, 145-156 (2019)
Wiesenberger14
M. Wiesenberger, J. Madsen, A. Kendl,
Phys. Plasmas 21, 092301 (2014)
Held16
M. Held, M. Wiesenberger, J. Madsen, A. Kendl,
Nuclear Fusion 56 126005 (2016).
Held18
M. Held, M. Wiesenberger, R. Kube, A. Kendl
Nuclear Fusion 58 104001 (2018).
Held19
M. Held, M. Wiesenberger, A. Kendl,
Nuclear Fusion 59, 026015 (2019).
Scott02
B. Scott,
New Journal of Physics 4, 52.1 (2002).
Scott03
B. Scott,
Plasma Phys. Control. Fusion 45, A385 (2003).
Ribeiro08
T.T. Ribeiro, B. Scott,
Plasma Phys. Control. Fusion 50, 055007 (2008).
Kendl14
A. Kendl,
International Journal of Mass Spectrometry 365/366, 106 (2014).
Meyer16
O.H.H. Meyer, A. Kendl,
Plasma Phys. Control. Fusion 58, 115008 (2016).
Meyer17
O.H.H. Meyer, A. Kendl,
Plasma Phys. Control. Fusion 59, 065001 (2017).
Meyer17b
O.H.H. Meyer, A. Kendl,
Nuclear Fusion 57, 126066 (2017).
Kendl12
A. Kendl,
Physics of Plasmas 19, 112301 (2012).
Kendl18
A. Kendl,
Plasma Phys. Control. Fusion 60, 025017 (2018).
Kendl15
A. Kendl,
Plasma Phys. Control. Fusion 57, 045012 (2015).
Kendl17
A. Kendl, G. Danler, M. Wiesenberger, M. Held,
Physical Review Letters 118, 235001 (2017).
Kendl18b
A. Kendl,
Physics of Plasmas 25, 102111 (2018).
Reiter23
E. Reiter, M. Wiesenberger, M. Held, G.W. Zarate-Segura, A. Kendl,
Journal of Plasma Physics 89, 905890110 (2023).
Held20
M. Held, M. Wiesenberger, A. Kendl,
Nuclear Fusion 60, 066014 (2020).
Held23
M. Held, M. Wiesenberger,
Nuclear Fusion 63, 026008 (2023).
Hasegawa83
A. Hasegawa, M. Wakatani,
Physical Review Letters 50, 682 (1983).
Numata07
R. Numata, R. Ball and R.L. Dewar,
Physics of Plasmas 14, 102312 (2007).
Halpern16
F.D. Halpern, P. Ricci, S. Jolliet, J. Loizu, J. Morales, A. Mosetto, F. Musil, F. Riva, T. M.
Tran, C. Wersal,
Journal ofComputational Physics 315 388 (2016).
Dudson17
B.D. Dudson, J. Leddy,
Plasma Phys. Control. Fusion 59, 054010 (2017).
Madsen16
J. Madsen, V. Naulin, A.H. Nielsen, J.J. Rasmussen,
Physics of Plasmas 23, 032306 (2016).
fftw3
M. Frigo, and S.G. Johnson,
Proceedings of the IEEE 93 (2), 216-231 (2005).
Arakawa66
A. Arakawa,
J. Comput. Phys. 1, 119 (1966).
Naulin03
V. Naulin, A. Nielsen,
SIAM J. Sci Comput. 25, 104 (2003).
Karniadakis91
G.E. Karniadakis, M. Israeli, S.A. Orszag,
J. Comput. Phys. 97, 414 (1991).
teague83
M.R. Teague,
Journal of the Optical Society of America 73, 1434 (1983).
zuo20
C. Zuo, J. Li, J. Sun, Y. Fan, J. Zhang, L. Lu, R. Zhang, B. Wang, L. Huang,
Q. Chen,
Optics and Lasers in Engineering 135, 106187 (2020).
paganin98
D. Paganin, K.A. Nugent,
Physical Review Letters 80, 2586 (1998).
schmalz11
J.A. Schmalz, T.E. Gureyev, D.M. Paganin, K.M. Pavlov,
Physical Review E 84, 023808 (2011).
ferrari14
J.A. Ferrari, G.A. Ayubi, J.L. Flores, C.D. Perciante,
Optics Communication 318, 133 (2014).
zuo14
C. Zuo, Q. Chen, L. Huang, A. Asundi,
Optics Express 22, 17172 (2014).
leveque
R.J. LeVeque,
Finite difference methods for ordinary and partial differential equations.
SIAM, Philadelphia, 2007.
humphries
S. Humphries,
Field solution on computers. CRC Press, 1997.
Electronic version (2010): http://www.fieldp.com/femethods.html
fisicaro16
G. Fisicaro, L. Genovese, O. Andreussi, N. Marzari, S. Goedecker,
The Journal of Chemical Physics 144, 014103 (2016).
|
http://arxiv.org/abs/2306.12326v1
|
20230621150955
|
What is the nature of the HESS J1731-347 compact object?
|
[
"Violetta Sagun",
"Edoardo Giangrandi",
"Tim Dietrich",
"Oleksii Ivanytskyi",
"Rodrigo Negreiros",
"Constança Providência"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"gr-qc",
"nucl-th"
] |
0000-0001-5854-1617]Violetta Sagun
CFisUC, Department of Physics, University of Coimbra, Rua Larga P-3004-516, Coimbra, Portugal
0000-0001-9545-466X]Edoardo Giangrandi
CFisUC, Department of Physics, University of Coimbra, Rua Larga P-3004-516, Coimbra, Portugal
Institut für Physik und Astronomie, Universität Potsdam, Haus 28,
Karl-Liebknecht-Str. 24/25, Potsdam, Germany
0000-0003-2374-307X]Tim Dietrich
Institut für Physik und Astronomie, Universität Potsdam, Haus 28,
Karl-Liebknecht-Str. 24/25, Potsdam, Germany
Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, Potsdam 14476, Germany
0000-0002-4947-8721]Oleksii Ivanytskyi
Institute of Theoretical Physics, University of Wroclaw, 50-204, Wroclaw, Poland
0000-0000-0000-0000]Rodrigo Negreiros
Instituto de Física, Universidade Federal Fluminense–UFF, Niterói-RJ, 24210-346, Brasil
0000-0001-6464-8023]Constança Providência
CFisUC, Department of Physics, University of Coimbra, Rua Larga P-3004-516, Coimbra, Portugal
Once further confirmed in future analyses, the radius and mass measurement of HESS J1731-347, with M=0.77^+0.20_-0.17 M_⊙ and R=10.4^+0.86_-0.78 km, will make it one of the lightest and smallest compact objects ever detected. This raises many questions about its nature and opens the window for different theories to explain such a measurement. In this article, we use the information from <cit.> on the mass, radius, and surface temperature together with multi-messenger observations of neutron stars to investigate the possibilities that HESS J1731-347 is one of the lightest observed neutron stars (NSs), a strange star (SS), a hybrid star (HS) with an early deconfinement phase transition, or a dark matter (DM) admixed neutron star. The nucleonic and quark matter are modeled with realistic equations of state (EoSs), including a self-consistent calculation of the pairing gaps in quark matter. By performing a joint analysis of the thermal evolution and the mass-radius constraint, we find that, within a 1σ confidence level, HESS J1731-347 is consistent with the neutron star scenario with a soft EoS, with a strange or hybrid star with an early deconfinement phase transition and strong quark pairing, and with a neutron star admixed with dark matter.
§ INTRODUCTION
One open question of modern physics is whether strongly interacting matter at high densities undergoes a phase transition to deconfined quarks and gluons. To tackle this problem, many fields of physics have been working together, including gravitational wave (GW) and multi-messenger astrophysics, nuclear physics, and high-energy physics. While the latter probes finite temperature regimes, astrophysical observations of compact stars (CSs) mainly give access to the regime of vanishing temperature and high baryon density, which cannot be probed with terrestrial experiments. In fact, recent detections of binary neutron star (NS) and NS-black hole mergers <cit.> have opened a way to constrain zero-temperature NS matter properties during the inspiral phase, whereas the next generation of GW telescopes is planned to reach enough sensitivity to probe the finite temperature equation of state (EoS) during the merger and post-merger phases <cit.>.
Considering the existing observational data of the heaviest known NSs <cit.>, the EoS of strongly interacting matter at densities above twice the normal saturation density (2n_0) is required to be stiff leading to typical NS radius measurements of ∼ 11-14 km at M ≳ 1.4 M_⊙. Therefore, despite being in agreement with theoretical calculations for the minimum mass of NSs, e.g., M=0.88–1.28 M_⊙ <cit.>, the recently announced measurement of HESS J1731-347 with M=0.77^+0.20_-0.17 M_⊙ and R=10.4^+0.86_-0.78 km <cit.> challenges our understanding of the EoS at densities 1-2 n_0. In fact, simulations of supernova explosions predict the gravitational mass of the lightest possible NS to be M=1.17 M_⊙, corresponding to a baryonic mass of M=1.25–1.31M_⊙ <cit.>.
The HESS J1731-347 measurement is also interesting as the first simultaneous measurement of the mass, radius, and surface temperature of a CS and opens the possibility to study its thermal evolution. However, following <cit.>, we want to point out caveats regarding the mass and radius measurement of HESS J1731-347. The estimated M=0.77^+0.20_-0.17 M_⊙ relies on the assumptions that the object has a uniform-temperature carbon atmosphere and that the star is located at a distance of 2.5 kpc. Hence, further studies will be needed to understand the validity of the obtained results.
It was proposed that HESS J1731-347 could be a candidate for a strange star (SS), a scenario compatible with all existing mass-radius measurements of CSs <cit.>. However, unpaired quark matter undergoes fast cooling, leading to difficulties in reproducing the surface temperature presented by <cit.>. In general, the cooling of a CS is governed by its internal composition, which determines the processes that operate inside the star <cit.>. The fastest cooling direct Urca (DU) process corresponds to the β- and inverse β-decay of the neutron and the d-quark in nuclear and quark matter, respectively. The nucleonic DU process has a threshold controlled by the number density of electrons according to momentum conservation. As soon as the DU process is on, it leads to intense neutrino emission and a consequent rapid drop in the surface temperature. On the other hand, the quark DU threshold condition is always satisfied. In fact, such fast cooling of a SS contradicts the estimated red-shifted surface temperature of the HESS J1731-347 compact object, T_s^∞ =2.05^+0.09_-0.06 MK <cit.>, which is itself rather high for an age of ∼ 27 kyr <cit.>. This difficulty could be overcome within the SS or hybrid star (HS) with an early quark deconfinement <cit.> scenarios, where strong quark pairing leads to color superconductivity and strongly suppresses cooling of the quark core <cit.>, while the baryonic matter (BM) provides moderate cooling. The possible suppression of the nucleonic cooling is related to the superfluidity of neutrons and superconductivity of protons via Cooper pair breaking and formation (PBF) <cit.>.
The questions of at which density the deconfinement phase transition occurs, and what the signals of quark matter formation are, remain open. The elliptic flow in heavy-ion collisions and a combined analysis of multi-messenger constraints <cit.> suggest that strongly interacting matter softens at high density, which might correspond to a phase transition to quark-gluon plasma.
In this work, we examine the nature of HESS J1731-347 by considering how the simultaneously measured mass, radius, and surface temperature agree with the present theoretical understanding of the properties of strongly interacting matter and color superconductivity at high densities.
We also discuss an alternative origin of HESS J1731-347 as a dark matter (DM) admixed NS, a scenario that has gained a lot of attention recently <cit.>. In fact, DM could be accumulated in the core of a NS, leading to a decrease of the total gravitational mass, radius, and tidal deformability, which would be perceived as an effect similar to a softening of the EoS <cit.>. At the same time, this scenario of asymmetric non-interacting DM agrees with cosmological and astrophysical observations, e.g. the Bullet Cluster <cit.>, and provides a description of HESS J1731-347.
The letter is organized as follows. In Sections <ref>, <ref> and <ref>, we present the NS, SS, and HS scenarios, respectively. A detailed description of the thermal evolution of CSs is presented in Appendix <ref>. The possibility that HESS J1731-347 is a DM-admixed NS within a 1σ confidence interval (CI) is studied in Section <ref>, with a detailed explanation in Appendix <ref>. Section <ref> summarizes the results.
§ SOURCE CLASSES
§.§ Neutron star
Modeling the internal structure of HESS J1731-347 requires consistency with the properties of the nuclear matter ground state, chiral effective field theory (cEFT) <cit.>, existing constraints on the mass-radius relation <cit.> and tidal deformability of NSs <cit.>.
The scenario of purely hadronic matter described with an EoS, which respects the above requirements, suggests a minimal assumption about the HESS J1731-347 nature.
At the densities expected inside a NS of sub-solar mass such a nuclear EoS can be strongly constrained by the microscopic Brueckner-Hartree-Fock calculations based on realistic nuclear potentials fitted to the nuclear scattering data <cit.>.
However, due to the exploratory reasons in this letter, we prefer to consider the possibilities of soft and stiff nuclear EoSs instead of relying on the results of the microscopic calculations.
For this purpose, we utilize the set B of the IST <cit.> and BigApple <cit.> EoSs, respectively.
As is seen from Fig. <ref>, while the stiff hadronic EoS is completely outside the 2σ confidence interval, the soft one is able to fit the HESS J1731-347 constraint within the 1σ confidence level.
The soft hadronic EoS allows the NS matter to reach densities supporting fast cooling processes unless nucleons are paired (see Appendix <ref>).
At the same time, the recently reported data on the thermal evolution of HESS J1731-347 suggest slow cooling <cit.>.
In order to reproduce these data we assume ^1S_0 and ^3P_2 neutron superfluidity and ^1S_0 proton superconductivity described by the SFB model <cit.>, a phenomenological gap obtained from the fit of Cas A <cit.>, and the CCDK model <cit.>, respectively (cf. Appendix <ref>).
As is seen from the upper panel of Fig. <ref>, the scenario of soft hadronic matter with paired nucleons and light-elements envelope (the green band with solid curves) is consistent with the observational data on the thermal evolution of HESS J1731-347.
§.§ Quark star
The small mass and radius of the HESS J1731-347 object imply a large gravitational binding energy, which can be provided by the scenario of a strange quark star, as recently considered by <cit.>.
The simplest description of the strange quark matter corresponds to the MIT bag model-like EoS, which relates the stellar matter pressure p and energy density ε via the relation p=ε/3-4B/3 <cit.>.
The orange solid curve in Fig. <ref> passes through the point M=0.77 M_⊙, R=10.4 km and is obtained for the central value of the bag constant range B^1/4=134^+12_-11 MeV.
This range of B is obtained by fitting the HESS J1731-347 mass-radius constraint within the 1σ CI with the present EoS.
§.§ Hybrid star
A positive bag constant phenomenologically models quark confinement at small densities, where quark matter with negative pressure is dynamically unstable
against conversion to hadronic matter with p>0.
This assumes the existence of a hadron envelope enclosing the quark core of a NS and motivates us to consider HESS J1731-347 as a hybrid quark-hadron object.
For this we utilize the hybrid EoS developed by <cit.>.
Its quark part is based on a chirally symmetric relativistic density functional (RDF) approach for two-flavor color superconducting (2SC) quark matter.
Within this RDF approach, quark confinement is phenomenologically modeled by the fast growth of the quark quasiparticle self-energy in the confining region, where the quark EoS is matched to the DD2npY - T hadronic one <cit.> by means of the Maxwell construction.
While most of the RDF approach parameters are fitted to vacuum phenomenology of QCD, values of the dimensionless couplings controlling strength of the vector repulsion between quarks η_V=0.265 and diquark pairing η_D=0.555 were chosen by <cit.> in order to provide the best agreement with the observational constraints on the NS mass-radius diagram shown in Fig. <ref> and on a 1.4 M_⊙ NS tidal deformability extracted from the GW170817 GW signal <cit.>.
As is seen from Fig. <ref>, this parameterization of a hybrid quark-hadron EoS agrees with the HESS J1731-347 mass-radius constraint within the 1σ CI.
Furthermore, strong quark pairing suppresses cooling of the 2SC quark matter making the HS scenario also consistent with the data on the thermal evolution of HESS J1731-347 (see lower panel of Fig. <ref>).
§.§ Dark matter admixed neutron star
We consider DM particles as a relativistic Fermi gas of non-interacting particles with spin 1/2 and mass m_χ accumulated inside NSs with a relative fraction f_χ.
For more details about the DM EoS and two-fluid approach, see Appendix <ref>. The dash-dotted curves in Fig. <ref> show the effect of DM particles with m_χ=2.8 GeV and f_χ=4.75% accumulated in the core of NSs. To exclude the uncertainties of the underlying BM EoS from consideration the scan is performed for the soft IST EoS (blue curve) <cit.> and stiffer BigApple EoS (green curve) <cit.>. The values of the DM particle's mass and relative fraction were chosen to provide an agreement with the 2σ constraints of HESS J1731-347 for both BM EoSs.
As can be seen, the presence of a dense DM core leads to a strong reduction of the total gravitational mass and radius of
stars, which resembles a softening of the BM EoS. This degeneracy between the effect of DM and possible change of the strongly interacting matter properties at high density was studied in <cit.>. In fact, the scan over mass and fraction of DM in Fig. <ref> shows that the same effect could be seen by increasing the DM particle's mass for a fixed fraction. The color map represents the total maximum gravitational mass of DM-admixed NSs for the BigApple EoS. The overlap between the two scans yields the allowed region of mass and fraction of DM (above the red curve in Fig. <ref>) that reproduces M and R measurement of HESS J1731-347 (2σ CL), free from the BM EoS uncertainties.
Moreover, the DM-admixed NS scenario leaves the thermal evolution unaffected, as DM, for the considered candidate, does not take part in the cooling. Effectively, the thermal evolution of a NS admixed with asymmetric DM interacting only through gravity with BM is equivalent to the cooling of a pure NS of a smaller mass. Thus, this scenario is consistent with the HESS J1731-347 data.
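For reference, the non-interacting fermionic DM component assumed here reduces to the textbook free Fermi gas relations; a minimal Python sketch (natural units ħ = c = 1, spin degeneracy 2; illustrative only, not the code used for the analysis) reads:

import numpy as np

def free_fermi_gas(n, m):
    # energy density and pressure of a free gas of spin-1/2 fermions of mass m
    # at number density n (natural units hbar = c = 1, degeneracy g = 2)
    pF = (3.0 * np.pi**2 * n)**(1.0/3.0)
    x = pF / m
    E = np.sqrt(1.0 + x**2)
    ash = np.log(x + E)                      # arcsinh(x)
    eps = m**4/(8.0*np.pi**2)  * (x*E*(1.0 + 2.0*x**2) - ash)
    p   = m**4/(24.0*np.pi**2) * (x*E*(2.0*x**2 - 3.0) + 3.0*ash)
    return eps, p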
§ CONCLUSIONS
We analyzed four different scenarios for the possible internal composition of the central compact object within the supernova remnant HESS J1731-347 in light of the recent measurement presented by <cit.>. We find that a soft nucleonic EoS is able to simultaneously reproduce the mass and radius measurements within the 1σ confidence interval and gives a good description of the surface temperature. This scenario also includes the possibility of a soft hadronic EoS at 1-2n_0, while at n≥ 2n_0 the EoS could attain an extra stiffening due to, e.g., density dependent repulsion between the constituents. Such behavior is also consistent with the proton flow constraint <cit.>.
From the analysis of various models of pairing gaps, we conclude that a combination of n and p singlet pairing with n triplet pairing described within the SFB, CCDK models, and phenomenological gap obtained from the fit of Cas A, respectively, with light-elements envelope provide the best fit of the data.
HESS J1731-347 could also be explained as a HS with an early deconfinement phase transition that occurs below twice the nuclear saturation density. Thus, the HS would contain a large quark-gluon plasma core, which could potentially lead to rapid cooling related to an operating quark DU process. However, a self-consistent derivation of the quark pairing gap and the RDF model formulation make us conclude that quarks exist in the 2SC phase, which suppresses rapid cooling and provides agreement with the surface temperature measurement.
Moreover, the SS scenario can also reproduce the mass, radius, and surface temperature very well, as, similar to the HS scenario, paired quarks suppress neutrino emission, while, in comparison to the HS case, the photon emission from the surface will be even lower due to the star's smaller radius. However, the three above-mentioned scenarios are in contradiction with the recent supernova simulations that predict the lightest compact star to have a mass of 1.17 M_⊙.
As an alternative scenario, we considered HESS J1731-347 to be a NS admixed with DM, which results in an effective softening of the EoS and the creation of more compact configurations. This scenario leaves the thermal evolution unaffected, as asymmetric, non-interacting DM interacts only through gravity with BM. Based on the performed scan over model parameters, we found that fermionic DM particles with mass above 1.15 GeV and fraction above 4.2% provide full agreement with the HESS J1731-347 2σ CI measurement for both stiff and soft baryonic EoSs. The analysis was made for two different EoSs that cover the parameter range in order to exclude the BM uncertainties from consideration. The performed scan over mass and fraction of DM shows that the same effect could be seen by increasing the DM particle's mass and decreasing its fraction.
We argue that, in comparison to the GW170817, GW190425, NICER, and heaviest CS measurements, which probe the properties of strongly interacting matter at high densities, HESS J1731-347 provides an important piece of information in the range of 1-2 nuclear saturation densities.
Future observations of the HESS J1731-347 object are required, as well as studies of the impact of different effects, e.g., the possible existence of hot/cold spots on the surface of the star, the atmosphere composition, the distance to the object, etc.
If the low mass and radius measurement is confirmed, it will put the most stringent constraint on strongly interacting matter in the range of 1-2 times the normal nuclear density and on a possible DM-rich environment around the star.
The work of E.G., C.P., and V.S. was supported by national funds from FCT – Fundação para a Ciência e a Tecnologia, I.P., within the Projects No. UIDB/04564/2020, UIDP/04564/2020, EXPL/FIS-AST/0735/2021. E.G. also acknowledges the support from Project No. PRT/BD/152267/2021. C.P. is supported by Project No. PTDC/FIS-AST/28920/2017. The work of O.I. was supported by the Polish National Science Center under grant No. 2019/33/BST/03059. R.N. acknowledges financial support from CAPES, CNPq, and FAPERJ. This work is part of the project INCT-FNA Proc. No. 464898/2014-5 as well as FAPERJ JCNE Proc. No. E-26/201.432/2021.
§ THERMAL EVOLUTION OF CSS
Born in supernova explosions or through the coalescence of light CSs, NSs cool down through a combination of neutrino emission from their interior and thermal radiation from the surface. The former process is directly determined by the internal composition of a star. Low and medium mass stars
usually cool down through so-called slow and intermediate cooling processes, i.e. modified Urca, bremsstrahlung, and Cooper pair breaking and formation (PBF) <cit.>.
Once the relative abundances of involved baryonic and leptonic species are high enough, the direct Urca process starts to operate leading to rapid cooling. It corresponds to the β-decay and the electron capture that operates after the triangle inequality of Fermi momenta p_F,i + p_F,j≥ p_F,k is satisfied. Accounting for charge neutrality and the relation between the Fermi momenta and the number density of each particle, we obtain the DU threshold corresponding to the minimal proton fraction of ∼ 11% of the total baryon density <cit.>.
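For npe matter, this threshold follows from a short, standard estimate (added here for illustration): charge neutrality gives n_p = n_e and hence p_F,p = p_F,e, so the marginal case of the triangle inequality reads p_F,n = p_F,p + p_F,e = 2 p_F,p. Since n_i ∝ p_F,i^3, this implies n_n = 8 n_p and a threshold proton fraction x_DU = n_p/(n_n + n_p) = 1/9 ≈ 11%, in agreement with the value quoted above.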
In quark matter the threshold for the DU reactions, d → u + e^- + ν̅_e and u + e^-→ d + ν_e, is very low leading to an emissivity of ϵ∼ T^6.
For comparison, the modified Urca processes give only ϵ∼ T^8 <cit.>.
At vanishing temperature, an attraction between nucleons or quarks leads to the formation of pairs, so that the particle excitations are gapped and the cooling is drastically suppressed. At temperatures well below the critical temperature of nuclear superfluidity, T ≪ T_c, the neutrino emission is suppressed by a Boltzmann factor e^-Δ/T, where Δ is the energy gap. Around T_c, the effect of pair breaking and formation results in a neutrino emissivity of ϵ∼ T^7 <cit.>.
In quark matter, depending on the abundance of strange quarks, it is possible to distinguish the color-flavor-locked (CFL) phase, in which the quarks form Cooper pairs whose color properties are correlated with their flavor properties in a one-to-one correspondence between three color pairs and three flavor pairs, and the two-flavor color superconducting (2SC) phase, characterized by the absence of the s-quark and the appearance of a u-d diquark condensate in a selected direction in color space <cit.>. In fact, the quark pairing could be more diverse, e.g., gapless 2SC, crystalline CSC, gapless CFL (gCFL), etc. <cit.>. However, these phases are beyond the scope of this article.
We performed calculations of the thermal evolution of NSs modeled within the IST EoS as this model provides an agreement with HESS J1731-347 mass-radius measurements within 1σ CI. The IST EoS is formulated in terms of nucleons characterized by an effective hard-core radius providing a short-range repulsion between them. The latter was fixed from the fit of heavy-ion collision data <cit.>, while the IST contribution was implemented by accounting for an interparticle interaction at high density. The model was applied to describe the nuclear liquid-gas phase transition and its critical point <cit.>, proton flow constraint <cit.>, and further generalized to describe NSs showing a big application range of the unified IST approach <cit.>. The considered parameterization gives the values of the symmetry energy E_sym = 30.0 MeV, symmetry energy slope L = 93.2 MeV and nuclear incompressibility factor K_0 = 201.0 MeV at the normal nuclear density <cit.>. For realistic modeling of the outer layers, the IST EoS is supplemented by the Haensel-Zdunik (HZ) EoS for the outer crust and the Negele-Vautherin (NV) EoS for the inner crust <cit.>.
The pairing of nucleons in simulations of the thermal evolution of NSs depends on the pairing channel and the considered gap model. By adopting the thermal evolution code described in <cit.>, we found that the best agreement with the HESS J1731-347 data is obtained for ^1S_0 neutron and proton pairing, in the inner crust and core, respectively, described by the SFB <cit.> and CCDK <cit.> models, as well as ^3P_2 pairing of neutrons <cit.>. These results are very much in line with the predictions of other works <cit.>. The results are presented in the upper panel of Fig. <ref>.
We also analyze the effect of different envelope compositions: a hydrogen-rich envelope that contains a fraction of light elements η=Δ M/M= 10^-7 (depicted by solid curves in Fig. <ref>) and one containing heavier elements (dashed curves in Fig. <ref>). Here Δ M is the mass of light elements in the upper envelope. The presence of light elements, such as H, in the outer layers may signal episodes of accretion onto the NS after its formation, while elements between He and C occur as a result of diffusive nuclear burning on the surface of the star.
The cooling simulations presented in the lower panel of Fig. <ref> were performed for HSs described within the DD2 EoS (hadron phase) and the RDF approach (quark phase). The transition between the phases is found from the Gibbs criteria of phase equilibrium. The modeling of the thermal evolution of HSs incorporated 2SC pairing between quarks obtained in a self-consistent calculation within the RDF model. Moreover, for the outer and inner crusts, we adopted the same HZ and NV EoSs <cit.> as in the NS case.
As the 2SC pairing yields very good agreement with the surface temperature measured for HESS J1731-347, we see no need to include the CFL phase, as it would cause even stronger neutrino suppression and provide an equally good fit to the data.
The observational data were taken from <cit.>. We consider 2σ error bars where available; otherwise we adopt factors of 0.5 and 2 in both the temperature and the age, excluding the upper limits. The sources are 0 - CasA NS, 1 - PSR J0205+6449 (in 3C58), 2 - PSR B0531+21 (Crab), 3 - PSR J1119-6127, 4 - RX J0822-4300 (in PupA), 5 - PSR J1357-6429, 6 - PSR B1706-44, 7 - PSR B0833-45 (Vela), 9 - PSR J0538+2817, 10 - PSR B2334+61, 11 - PSR B0656+14, 12 - PSR B0633+1748 (Geminga), 13 - PSR J1741-2054, 14 - RX J1856.4-3754, 15 - PSR J0357+3205 (Morla), 16 - PSR B1055-52, 17 - PSR J2043+2740, 18 - RX J0720.4-3125. The object 8 - XMMU J1731-347 in <cit.> was substituted by HESS J1731-347 <cit.>. The updated data show a slight increase in the surface temperature.
§ TWO-FLUID APPROACH
In this work, we consider the DM component as a relativistic Fermi gas composed of non-interacting massive particles possessing a spin of one-half. The corresponding EoS has been extensively studied in the literature, e.g. by <cit.>.
Following the constraint from the Bullet Cluster <cit.> on the negligible cross-section between BM and DM, we assume the two components to interact only through gravity. As a result, the stress-energy tensors of the two fluids (i = DM, BM) are conserved separately, leading to two coupled Tolman-Oppenheimer-Volkoff (TOV) equations <cit.>
dp_i/dr = -(ϵ_i + p_i)(M_tot + 4π r^3 p_tot) / [ r^2 (1 - 2M_tot/r) ],
where M_tot=M_DM+M_BM and p_tot=p_DM+p_BM are the total gravitational mass and pressure, respectively. Since we have a two-fluid system we define the value of the central density for each component. After the integration of the TOV equations, we get the gravitational masses of each of the components. From it, the DM fraction can be expressed as f_χ = M_DM/M_tot. By adjusting the central energy densities of each component we are able to obtain different scenarios of admixed stars, and, in particular, stars with different DM fractions. As was shown by <cit.> the chemical potentials of two components are related to each other as
d lnμ_B/dr = d lnμ_χ/dr = -(M_tot + 4π r^3 p_tot) / [ r^2 (1 - 2M_tot/r) ].
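As an illustration of this two-fluid setup, the following minimal Python sketch (not the code used in this work; the polytropic EoS functions and all numerical values are placeholders in geometrized units G = c = 1) integrates the coupled TOV equations with a simple Euler scheme and returns the DM fraction f_χ = M_DM/M_tot:

import numpy as np

def eps_of_p_polytrope(K, gamma):
    # invert a simple polytrope p = K * eps^gamma to eps(p); placeholder EoS only
    return lambda p: (p / K) ** (1.0 / gamma) if p > 0.0 else 0.0

def two_fluid_tov(eps_B, eps_D, pB_c, pD_c, dr=1e-3, r_max=50.0):
    # dp_i/dr = -(eps_i + p_i)(M_tot + 4 pi r^3 p_tot) / [r^2 (1 - 2 M_tot/r)], i = BM, DM
    r, pB, pD, MB, MD = dr, pB_c, pD_c, 0.0, 0.0
    while (pB > 0.0 or pD > 0.0) and r < r_max:
        eB = eps_B(pB) if pB > 0.0 else 0.0
        eD = eps_D(pD) if pD > 0.0 else 0.0
        Mtot = MB + MD
        ptot = max(pB, 0.0) + max(pD, 0.0)
        if 1.0 - 2.0 * Mtot / r <= 0.0:          # simple safeguard, not reached for these values
            break
        common = (Mtot + 4.0 * np.pi * r ** 3 * ptot) / (r ** 2 * (1.0 - 2.0 * Mtot / r))
        if pB > 0.0:
            pB -= dr * (eB + pB) * common
        if pD > 0.0:
            pD -= dr * (eD + pD) * common
        MB += dr * 4.0 * np.pi * r ** 2 * eB     # gravitational mass of the BM component
        MD += dr * 4.0 * np.pi * r ** 2 * eD     # gravitational mass of the DM component
        r += dr
    return MB, MD, MD / (MB + MD)

eps_B = eps_of_p_polytrope(K=100.0, gamma=2.0)   # placeholder baryonic EoS
eps_D = eps_of_p_polytrope(K=50.0, gamma=2.0)    # placeholder fermionic DM EoS
MB, MD, f_chi = two_fluid_tov(eps_B, eps_D, pB_c=1e-4, pD_c=1e-5)
print(MB, MD, f_chi)

Scanning the two central pressures (or, equivalently, central energy densities) in this way produces configurations with different DM fractions, as described above.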
|
http://arxiv.org/abs/2306.02316v1
|
20230604094943
|
Temporal Dynamic Quantization for Diffusion Models
|
[
"Junhyuk So",
"Jungwon Lee",
"Daehyun Ahn",
"Hyungjun Kim",
"Eunhyeok Park"
] |
cs.CV
|
[
"cs.CV"
] |
Temporal Dynamic Quantization for Diffusion Models
[
July 31, 2023
=================================================================
The diffusion model has gained popularity in vision applications due to its remarkable generative performance and versatility. However, high storage and computation demands, resulting from the model size and iterative generation, hinder its use on mobile devices. Existing quantization techniques struggle to maintain performance even in 8-bit precision due to the diffusion model's unique property of temporal variation in activation. We introduce a novel quantization method that dynamically adjusts the quantization interval based on time step information, significantly improving output quality. Unlike conventional dynamic quantization techniques, our approach has no computational overhead during inference and is compatible with both post-training quantization (PTQ) and quantization-aware training (QAT). Our extensive experiments demonstrate substantial improvements in output quality with the quantized diffusion model across various datasets.
§ INTRODUCTION
Generative modeling is crucial in machine learning for applications such as image <cit.>, voice <cit.>, and text synthesis <cit.>. Diffusion models <cit.>, which progressively refine input images through a denoising process involving hundreds of iterative inferences, have recently gained prominence due to their superior performance compared to alternatives like GANs <cit.>. However, the high cost of diffusion models presents a significant barrier to their widespread adoption. These models have large sizes, often reaching several gigabytes, and demand an enormous amount of computation due to the iterative inference required to generate a single image. Consequently, executing diffusion models on resource-limited mobile devices is practically infeasible; thus, most applications are currently implemented on expensive, high-performance servers.
To fully exploit the potential of diffusion models, several methods to reduce the computational cost and memory requirement of diffusion models while preserving the generative performance have been proposed.
For example, J. Song et al. <cit.> and L. Liu et al. <cit.> proposed a more efficient sampling scheduler, and T. Salimans & J. Ho proposed to reduce the number of sampling steps using knowledge distillation technique <cit.>.
As a result, high-fidelity images can be generated with fewer sampling steps, making the use of diffusion models more affordable and feasible.
Despite these advancements, the denoising process of diffusion models still demands a substantial computational cost, necessitating further performance enhancements and model compression.
While the majority of previous approaches have focused on reducing the number of sampling steps to accelerate the denoising process, it is also important to lighten the individual denoising steps. Since a single denoising step can be regarded as a conventional deep learning model inference, various model compression techniques can be used. Quantization is a widely used compression technique where both weights and activations are mapped to a low-precision domain. While advanced quantization schemes have been extensively studied for conventional Convolutional Neural Networks (CNNs) and language models, their application to diffusion models has shown significant performance degradation. Due to a unique property of diffusion models, namely the significant changes in the activation distribution throughout the iterative inference, the output is heavily distorted as the activation bit-width decreases <cit.>. Existing quantization techniques, including both quantization-aware training (QAT) <cit.> and post-training quantization (PTQ) <cit.>, are designed for a single, fixed distribution in conventional DNNs and therefore cannot deal with the time-varying activation distributions in diffusion models.
To tackle the unique challenges of diffusion model quantization, we introduce a novel design called Temporal Dynamic Quantization (TDQ) module. The proposed TDQ module generates a time-dependent optimal quantization configuration that minimizes activation quantization errors. The strong benefit of the TDQ module is the seamless integration of the existing QAT and PTQ algorithms, where the TDQ module extends these algorithms to create a time-dependent optimal quantization configuration that minimizes activation quantization errors. Specifically, the module is designed to generate no additional computational overhead during inference, making it compatible with existing acceleration frameworks without requiring modifications. The TDQ module significantly improves quality over traditional quantization schemes by generating optimal quantization parameters at each time step while preserving the advantages of quantization.
§ BACKGROUNDS AND RELATED WORKS
§.§ Diffusion Model
Diffusion models were first introduced in 2015 <cit.> and revolutionized image generation by characterizing it as a sequential denoising process. As shown in Fig. <ref>, the forward diffusion process gradually transforms the image (x_0) into random data (x_T) that follows a standard normal distribution by adding small Gaussian noise at each time step.
The reverse diffusion process generates a clean image (x_0) from random data (x_T) by gradually removing noise through iterative denoising steps.
Therefore, a diffusion model learns the reverse process by estimating the amount of noise in the given noisy data (x_t) at each time step. The forward (q) and reverse (p_θ) processes can be described as
q(x_t|x_t-1) = N(x_t; √(1-β_t)x_t-1, β_t I),
p_θ(x_t-1|x_t) = N(x_t-1; μ_θ(x_t, t), σ_t^2 I),
where β_t denotes the magnitude of Gaussian noise.
<cit.> introduced a reparameterization trick for μ_θ and the corresponding loss function, which facilitates the training of diffusion models.
μ_θ(x_t, t) = 1/√(α_t) ( x_t - β_t/√(1-α̅_t) ϵ_θ(x_t, t) )
L_simple = E_{t, x_0, ϵ}[ ||ϵ - ϵ_θ(x_t, t)||^2 ]
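To make the training objective concrete, the short PyTorch-style sketch below (illustrative only; the linear β_t schedule and the name eps_model are our assumptions, not taken from any specific implementation) computes L_simple for a batch x0:

import torch

def ddpm_loss(eps_model, x0, T=1000):
    # beta_t schedule and \bar{alpha}_t = prod_{s <= t} (1 - beta_s)
    betas = torch.linspace(1e-4, 0.02, T)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, T, (x0.shape[0],))                    # one random step per sample
    a_bar = alphas_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps       # forward process q(x_t | x_0)
    return torch.mean((eps - eps_model(x_t, t)) ** 2)          # L_simple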
While diffusion models can produce high-quality images, the iterative denoising process makes them difficult to use in real-world scenarios. Early works on diffusion models such as DDPM <cit.> required hundreds to thousands of iterative inferences to generate a single image, resulting in extremely slow sampling. Therefore, numerous studies have investigated algorithmic enhancements and various optimizations to boost performance, aiming to make diffusion models more efficient and suitable for real-world applications. DDIM <cit.> introduced an implicit probabilistic model that reinterprets the Markov process of the DDPM method, achieving competitive image quality with one-tenth of the denoising steps. Distillation-based methods <cit.> proposed to reduce the number of denoising steps using knowledge distillation techniques.
On the other hand, to address the significant computational overhead associated with generating high-resolution images, <cit.> proposed the Latent Diffusion Model (LDM), where the diffusion model operates on latent variables instead of images. In particular, large-scale diffusion models (e.g., Stable Diffusion <cit.>) have leveraged the LDM framework and learned from a large-scale dataset (the LAION dataset <cit.>), enabling the creation of high-quality, high-resolution images conditioned on textual input.
§.§ Quantization
Quantization is a prominent neural network optimization technique that reduces storage requirements through low-precision representation and improves performance when the corresponding low-precision acceleration is available. With b-bit precision, only 2^b quantization levels are accessible, which considerably limits the degrees of freedom of the quantized model compared to a floating-point representation. Therefore, the final quality varies significantly depending on the tuning of quantization hyperparameters, such as the quantization interval or zero offset. To preserve output quality in low precision, it is crucial to update the quantization parameters as well as the model parameters, taking into account the specific features and requirements of the target task.
§.§ Quantization-aware Training and Post-training Quantization
Quantization algorithms can be broadly categorized into two types: Quantization-Aware Training (QAT) <cit.> and Post-Training Quantization (PTQ) <cit.>. QAT applies additional training after introducing quantization operators, allowing the update of network parameters toward minimizing the final loss value considering the effect of the quantization operators. On the other hand, PTQ does not apply end-to-end forward/backward propagation after quantization. Instead, it focuses on reducing block-wise reconstruction errors induced by quantization. QAT typically outperforms PTQ in low-precision scenarios, but it may not always be applicable due to limitations such as the deficiencies of datasets, training pipelines, and resource constraints. Due to its practical usefulness, PTQ has been actively researched recently. In the literature of diffusion models, <cit.> introduced a dedicated 8-bit post-training quantization method, demonstrating high-fidelity image generation performance.
On the other hand, while most QAT and PTQ algorithms focus on static quantization, recent research has highlighted the benefits of input-dependent dynamic quantization <cit.>. Dynamic quantization enables the adjustment of quantization intervals based on the varying input-dependent distribution of activations, which has the potential to reduce quantization error. However, implementing these dynamic approaches often incurs additional costs for extracting statistical information from activations, making it challenging to achieve performance improvements in practice.
While a number of previous works proposed different methods to accelerate the sampling process of diffusion models, few works have tried to exploit the dynamic nature of diffusion models. In this paper, we propose a novel quantization scheme for diffusion models that minimizes activation quantization errors by generating a suitable quantization interval based on time step information, all without incurring additional inference cost.
§ TEMPORAL DYNAMIC QUANTIZATION
§.§ Quantization Methods
Before elaborating the proposed method, we define the quantization function used in our work. In this study, we focus on b-bit linear quantization, where the 2^b possible quantization levels are evenly spaced. Linear quantization involves two key hyperparameters: the quantization interval s and the zero offset z. Given the full-precision data x, the quantized data x̂ can be calculated as follows:
x̅ = clip ( ⌊ x / s ⌉ + z, n, p),
x̂ = s · (x̅ - z),
where n(p) is the smallest(largest) quantization index, clip(·) is a clipping function and ⌊·⌉ is a rounding function. For practical acceleration purposes, we utilize symmetric quantization for weights, where z=0 and p = -n = 2^(b-1)-1. On the other hand, for activation quantization, we assume asymmetric quantization with z ≠ 0, n = 0, and p = 2^b-1, which is a commonly adopted approach <cit.>.
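As a concrete illustration of these equations, the short NumPy sketch below (illustrative only; the min/max-based choice of s and z is one simple calibration, not the procedure proposed in this paper) quantizes a toy weight tensor symmetrically and a toy activation tensor asymmetrically:

import numpy as np

def linear_quantize(x, s, z, n, p):
    # x_bar = clip(round(x / s) + z, n, p)
    return np.clip(np.round(x / s) + z, n, p)

def linear_dequantize(x_bar, s, z):
    # x_hat = s * (x_bar - z)
    return s * (x_bar - z)

b = 4
w = np.random.randn(16)     # toy weights
a = np.random.randn(16)     # toy activations

# symmetric weight quantization: z = 0, n = -(2^(b-1) - 1), p = 2^(b-1) - 1
s_w = np.abs(w).max() / (2 ** (b - 1) - 1)
w_hat = linear_dequantize(linear_quantize(w, s_w, 0, -(2 ** (b - 1) - 1), 2 ** (b - 1) - 1), s_w, 0)

# asymmetric activation quantization: n = 0, p = 2^b - 1, range from min/max calibration
s_a = (a.max() - a.min()) / (2 ** b - 1)
z_a = np.round(-a.min() / s_a)
a_hat = linear_dequantize(linear_quantize(a, s_a, z_a, 0, 2 ** b - 1), s_a, z_a)

print(np.mean((w - w_hat) ** 2), np.mean((a - a_hat) ** 2))   # quantization errors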
§.§ Challenges of Diffusion Model Quantization
The biggest challenge in quantizing diffusion models is to find optimal quantization parameters (s and z) for activations that minimize the quantization error. As shown in Fig. <ref>, the activation distribution of diffusion models has a unique property due to the iterative denoising process: the distribution varies strongly with the time step (t), regardless of the layer index. Therefore, using static values for the quantization parameters causes significant quantization error at different time steps, as shown in Fig. <ref>. Previous studies <cit.> have also reported this dynamic property of activations in diffusion models and attempted to address it by sampling the calibration dataset across all time frames. However, despite these efforts, these studies still relied on static parameters, resulting in sub-optimal minimization of the quantization error. To tackle this problem fundamentally, it is crucial to allow the quantization parameters to adapt to the dynamics of the input activation distribution.
§.§ Implementation of TDQ Module
To address the rapid changes in input activations, one straightforward idea is input-dependent dynamic quantization. Previous studies have demonstrated that incorporating a quantization module that generates the quantization parameters based on input features such as minimum, maximum, and average values can lead to significant improvements in accuracy for specific applications. According to our observations, as presented in the supplementary material, our own implementation of input-dependent dynamic quantization offers a notable quality improvement. However, gathering statistics of the given activations complicates the implementation and introduces notable overhead. Despite the potential quality improvement, this approach may therefore not be an attractive solution in practice.
Instead, we propose a novel dynamic quantization that utilizes temporal information rather than the input activation itself. Although the activation distribution might change based on the input data, the overall trends are similar within the same time frame (see Fig. <ref>). Therefore, we can determine the optimal interval based on the time step alone, leading to more reliable and robust quantization results.
The implementation details are presented in Fig. <ref> (b). In the TDQ module, the dynamic interval s is generated based on the provided time step t as follows:
I = enc(t), s = f(I),
where enc(·) represents the encoding function of the time step, which will be described in Section 3.4. Here, I is the encoded feature, and f(·) signifies the generator module function.
In our approach, the TDQ module is attached to each quantization operator, and an independent dynamic quantization interval is generated based on the given time step. As illustrated in Fig. <ref>, the generator is implemented by simply stacking multiple linear layers, followed by a softplus function that constrains the output to non-negative values. Note that all components of the generator are differentiable. As a result, during the PTQ or QAT process, the interval can be updated toward minimizing the quantization error using gradient descent.
For instance, when employing a well-known QAT algorithm such as LSQ <cit.>, the static interval can be substituted by the output of the TDQ module.
The quantization function from Eq. <ref> is then modified as follows:
x̅ = clip ( ⌊ x / s⌉ + z, n, p),
x̂ = s· (x̅ - z).
When we use a straight-through estimator <cit.>, the gradient can propagate through the learnable parameters of the generator. These parameters are then updated via iterative gradient descent, with the goal of minimizing the final task loss. The same pipeline can be applied to PTQ algorithms that utilize local gradient of minimizing reconstruction error <cit.>. Consequently, the TDQ module is easily applicable to both QAT and PTQ schemes.
However, unlike input-dependent dynamic quantization, the TDQ module enables cost-free inference. After the PTQ or QAT process, the time-dependent interval can be pre-computed offline, and during inference, we can utilize the pre-computed value. In addition, the pre-computed interval provides a strong advantage by enabling the seamless integration of TDQ module on existing frameworks without modifications.
α = Softplus( MLP(T) ),
X^Q = { -(2^(bit-1) - 1),   if X/α ≤ -(2^(bit-1) - 1);
        ⌊ X / α ⌉,          if -(2^(bit-1) - 1) < X/α < 2^(bit-1) - 1;
        2^(bit-1) - 1,      if X/α ≥ 2^(bit-1) - 1 },
X' = α· X^Q.
Here, the Softplus() function is applied at the end to map the output of the step size generator to a strictly positive range.
However, as mentioned earlier, the rounding operation involved in Eq. (7) is non-differentiable, so gradients cannot be back-propagated and the step size generator could not be trained directly. To address this, we use the straight-through estimator (STE) to let gradients flow into the step size generator. By the chain rule, the gradient with respect to α flowing from the (fake) quantized output X' is given by Eq. (9):
∂ X'/∂α = { -(2^(bit-1) - 1),   if X'/α ≤ -(2^(bit-1) - 1);
            ⌊ X / α ⌉ - X/α,    if -(2^(bit-1) - 1) < X'/α < 2^(bit-1) - 1;
            2^(bit-1) - 1,      if X'/α ≥ 2^(bit-1) - 1 }.
The step size generator is trained with gradients from the task loss in the case of QAT, and from the block-wise output reconstruction loss in the case of PTQ.
Note that the scheme above is used only for activation quantization. Since the weight distribution does not change over time, we use standard static step-size quantization (LSQ) for the weights.
§.§ Engineering Details
We conducted a series of experiments to enhance the stability and effectiveness of the TDQ module, and figured out that several engineering improvements play a key role in achieving reliable results. This section provides a detailed description of these improvements and their impact on the performance of the TDQ module.
Frequency Encoding of Time Step
Feeding the time step directly to the generator results in inferior convergence quality. This is primarily due to the well-known low-frequency inductive bias of neural networks <cit.>. If the time step is directly input to the generator, it tends to produce an interval that barely changes regardless of the time step. To mitigate this low-frequency bias, we use geometric Fourier encoding for time step <cit.>, as described in Eq. <ref>.
I = enc(t) = ( sin(t/t_max^(0/d)), cos(t/t_max^(0/d)), sin(t/t_max^(2/d)), cos(t/t_max^(2/d)), ..., sin(t/t_max^(d/d)), cos(t/t_max^(d/d)) ),
where t is the current time step, d is the dimension of the encoding vector, and I is the encoded vector. This encoding allows the TDQ module to accommodate the high-frequency dynamics of the time step. In this paper, we empirically set t_max to 10000.
Initialization of TDQ Module
Proper initialization of the quantization interval is crucial, as incorrect initialization can lead to instability in the QAT or PTQ process. Existing quantization techniques only need to initialize a static step value, but we need to initialize the TDQ module's output to the desired value. To achieve this, we utilize He initialization <cit.> for the weights and set the bias of the last linear layer of the TDQ module (MLP) to the desired value. Given that the input to the MLP (the geometric Fourier encoding) can be treated as a random variable with zero mean, the output of the He-initialized MLP will also have zero mean. Thus, we can control the mean of the MLP output via bias adjustment. After extracting 1000 samples covering the entire range of time steps, we initialized the quantization interval to minimize the overall error and then updated the module to allow adaptation to each time step.
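Putting the pieces above together, the following PyTorch sketch (our own illustrative reconstruction, not the authors' released code; the layer widths, encoding dimension, and initial interval are assumed values) shows a TDQ module with frequency encoding, an MLP generator with a final softplus, bias initialization of the last layer, and STE-based fake quantization so that gradients reach both the activations and the generated interval:

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def time_encoding(t, d=64, t_max=10000.0):
    # geometric Fourier encoding of integer time steps: (B,) -> (B, d)
    half = d // 2
    freqs = torch.exp(-math.log(t_max) * torch.arange(half, dtype=torch.float32) / half)
    args = t.float().unsqueeze(1) * freqs.unsqueeze(0)
    return torch.cat([torch.sin(args), torch.cos(args)], dim=1)

class TDQModule(nn.Module):
    """Generates a positive, time-dependent quantization interval s(t)."""
    def __init__(self, d=64, hidden=64, init_interval=0.1):
        super().__init__()
        self.d = d
        self.mlp = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
        # bias of the last layer set so softplus(bias) equals the calibrated interval
        nn.init.constant_(self.mlp[-1].bias, math.log(math.expm1(init_interval)))

    def forward(self, t):
        return F.softplus(self.mlp(time_encoding(t, self.d)))   # (B, 1)

def fake_quantize(x, s, bits=4):
    # unsigned b-bit quantization with STE; zero offset z omitted for brevity
    n_lv = 2 ** bits - 1
    x_div = x / s
    x_int = torch.clamp(x_div + (torch.round(x_div) - x_div).detach(), 0, n_lv)
    return x_int * s                     # gradient flows to both x and s (LSQ-style)

# usage inside a denoising step: generate s from t, then quantize the activations
tdq = TDQModule()
t = torch.randint(0, 1000, (8,))
acts = torch.rand(8, 128)               # toy non-negative activations
s = tdq(t)                              # (8, 1), broadcasts over the channel dimension
acts_q = fake_quantize(acts, s)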
§ EXPERIMENTAL SETUP
In order to demonstrate the superior performance of TDQ, we conducted tests using two different models: DDIM <cit.>, a pixel space diffusion model, and LDM <cit.>, a latent space diffusion model. For the DDIM experiments, we utilized the CIFAR-10 dataset <cit.> (32x32), while for the LDM experiments, we employed the LSUN Churches dataset <cit.> (256x256). This allowed us to showcase the effectiveness of the proposed method in both low and high-resolution image generation scenarios. We applied PTQ and QAT to both models. However, it is important to note that while the latent diffusion model consists of a VAE and a diffusion model, we focused on quantizing the diffusion model and did not perform quantization on the VAE component.
In the absence of prior QAT studies for diffusion models, we experimented with well-known static quantization methods, i.e., PACT <cit.>, LSQ(+) <cit.>, and NIPQ <cit.>, as baselines. Our method is integrated on top of LSQ by replacing the static interval with the output of the TDQ module. Per-layer quantization was applied to the activations and weights of all convolutional and linear layers, including the activations of the attention layers. The models were trained for 200K iterations on CIFAR-10 and LSUN-churches, with batch sizes of 128 and 32, respectively. The learning rate schedule was consistent with that of the full-precision model.
In PTQ, we used PTQ4DM <cit.>, a state-of-the-art study, as a baseline for extensive analysis. For a fair comparison, we mirrored the experimental settings of PTQ4DM but replaced the activation quantization operator with a TDQ module for dynamic quantization interval generation. Like PTQ4DM, we used per-channel asymmetric quantization for weights and per-tensor asymmetric quantization for activations. The weight quantization range was determined by per-channel minimum/maximum values, and the activation quantization range was trained via gradient descent to minimize a block-wise reconstruction loss, as in methods like BRECQ <cit.>. Quantization was applied to all layers, but in the LDM PTQ experiment, the activations of the attention matrix were not quantized. Following PTQ4DM's approach, we used a calibration set of 5120 samples for PTQ, consisting of 256 images with 20 random time steps selected per image.
To measure the performance of the models, we used Fréchet Inception Distance (FID) <cit.> and Inception Score (IS) <cit.> for CIFAR-10, and FID for LSUN-churches. For evaluation, we generated 50,000 images using QAT with DDIM 200 step sampling, and 50,000 images using PTQ with DDIM 100 steps. In the case of QAT, we selected 5 checkpoints with the lowest validation loss and reported the scores from the best performing models.
All experiments were conducted on high-performance servers with 4x A100 GPUs and 8x GTX 3090 GPUs using the PyTorch <cit.> 2.0 framework. The source code will be made available after the review process. For brevity, we use the notation WxAy to denote x-bit weight & y-bit activation quantization.
§ RESULTS
§.§ Quality Analysis after QAT and PTQ
Table <ref> compares the TDQ module with existing static quantization methods. As the activation bit-width decreases, all static quantization methods show inferior quality, while our approach produces consistent output. The TDQ module gives a substantial quality improvement even at 8-bit, and the benefit becomes even larger at 4-bit precision, showing negligible quality degradation relative to the full-precision output. TDQ achieves these benefits by efficiently allocating the limited activation quantization levels.
In addition, NIPQ was introduced to address the instability of the straight-through estimator by applying pseudo-quantization based on artificial noise. However, the noise of NIPQ is indistinguishable from the input noise, hindering the diffusion model's convergence. Additional effort is required to exploit the benefit of PQN-based QAT for diffusion models.
Table <ref> presents a PTQ comparison of the TDQ module against existing static PTQ schemes. The Min-Max method represents a naive linear quantization approach where the range is determined by the minimum and maximum values of the target tensor. The experiments demonstrate that while all baselines maintain a good FID when the activation bit-width is high, they experience significant performance degradation as the activation bit-width decreases. In contrast, TDQ exhibits only slight FID degradation, indicating that the TDQ module is a robust methodology that performs well in both QAT and PTQ scenarios.
Figs. <ref> to <ref> visualize images generated by the quantized diffusion models. Our method consistently outperforms other quantization techniques in producing high-fidelity images within the same bit-width configuration. Figs. <ref> and <ref> further illustrate this: conventional QAT and PTQ produce blurred and unrecognizable images, whereas our method generates realistic images. The integration of temporal information into activation quantization is shown to be highly effective in maintaining the perceptual quality of the output.
§.§ Generalization Performance of TDQ Module
This section presents experimental results on the performance of the TDQ module under fast-forward inference. Training is carried out to fully encompass all time steps from 1 to 1000; however, inference can be executed with fewer time steps (between 50 and 100) for better performance. This changes the distribution of time steps between training and testing; thus, the intervals generated by the TDQ module must generalize well.
Fig. <ref> displays the FID measurement results for the DDIM model quantized by the LSQ algorithm for both the W8A4 and W4A4 configurations. The number of time steps gradually decreases from 100 to 10 during inference. As depicted, LSQ's performance deteriorates significantly as the number of time steps decreases, whereas our method's performance declines similarly to the full-precision baseline. This experiment demonstrates that the TDQ module functions effectively even when the number of sampling time steps varies.
§.§ Ablation Studies of TDQ module
To investigate the output dynamics of the TDQ module, we visualized the updates of the dynamic interval in relation to time steps (Fig. <ref>). The interval, trained using W4A4 LSQ on DDIM, demonstrates a tendency to change in alignment with activation variations. However, this pattern is not consistently observed across all layers. The inconsistencies could potentially indicate that the TDQ module is attempting to generate an interval that minimizes the final loss value, as LSQ adjusts the quantization interval accordingly.
In order to provide a more comprehensive analysis of the advantages of the TDQ module, we conducted a comparison with existing input-dependent dynamic quantization methods and investigated the influence of the temporal encoding scheme of TDQ module. The detailed results of this experiment can be found in the supplementary material.
§ CONCLUSION
In this paper, we explore the challenge of activation quantization in the diffusion model, specifically the dynamics of activation across time steps, and show that existing static quantization methods fall short of effectively addressing this issue. We introduce the TDQ module, seamlessly integrated into both QAT and PTQ, while enhancing the output quality significantly based on the dynamic quantization interval generation. Since our approach can be implemented without any overhead during inference, we expect our study to help improve performance while preserving quality in low-bit diffusion models for mobile and edge devices.
Supplementary Materials
§ INTRODUCTION
In this Supplementary material, we present the results of the experiments mentioned in the paper, along with additional experiments. The following items are provided:
* A comparison between Dynamic Quantization and Temporal Dynamic Quantization in Section <ref>.
* Ablation study on time step encoding in Section <ref>.
* Various non-cherry-picked results of generated images in Section <ref>.
* Detailed experimental results on the Output dynamics of the TDQ module in Section <ref>.
* Detailed experimental results on the Evolution of Activation Distribution in Section <ref>.
§ DYNAMIC QUANT VS TDQ
In Table <ref>, we compare the performance of Dynamic Quantization and TDQ on CIFAR-10 DDIM W4A4 QAT. In the implementation of Dynamic Quantization, three approaches were compared: (a) determining the quantization interval for each layer dynamically using the formula s = x.abs().max()/(n_lv), (b) passing layer statistics (Min, Max, Mean, Var) to a trainable generator for interval generation, and (c) combining these statistics with time information for interval generation.
Notably, the results demonstrate that TDQ achieves superior performance compared to Dynamic Quantization, even without direct utilization of the layer's distribution information. Moreover, we observe that using time information alone yields better performance than using time information and distribution information together.
We believe that this is because directly using the distribution information of each layer to determine the quantization interval can result in suboptimal performance due to the sensitivity of the generated intervals to rapid changes in the layer's distribution, such as outlier data.
§ ABLATION STUDY OF TIME STEP ENCODING
In this section, an ablation study was conducted on the time step encoding, which serves as the input to the TDQ module. Table <ref> presents the ablation study of methods for encoding the time step information fed into a TDQ module. The experiment is conducted on CIFAR-10 DDIM W4A4 QAT. As observed from the table, when no preprocessing is applied, training becomes infeasible due to the large differences in input values. Training stabilizes when normalization is applied; however, encoding techniques such as frequency encoding or random Fourier encoding result in improved performance by incorporating high-frequency inductive biases. Although random Fourier encoding <cit.> exhibits slightly superior performance, it introduces additional training cost. Hence, the simpler, deterministic frequency encoding approach was employed.
§ RANDOM SAMPLING RESULTS
In this section, we provide non-cherry-picked images generated using TDQ under four different bit-width settings (W8A8, W8A4, W4A8, W4A4). Results are shown in Figs. <ref>, <ref>, <ref>, and <ref>.
§ OUTPUT DYNAMICS OF TDQ MODULE
In this section, we provide the output dynamics of the TDQ module for additional layers and models. Results are shown in Figs. <ref> and <ref>.
§ EVOLUTION OF ACTIVATION DISTRIBUTION
In Fig. <ref>, we show the temporal evolution of the activation distribution in different layers of the DDIM model trained on CIFAR-10. From the downsampling layers to the attention layers, we observe that all layers exhibit temporal dependency.
|
http://arxiv.org/abs/2306.03663v1
|
20230606132635
|
Bayesian inference for group-level cortical surface image-on-scalar-regression with Gaussian process priors
|
[
"Andrew S. Whiteman",
"Timothy D. Johnson",
"Jian Kang"
] |
stat.ME
|
[
"stat.ME"
] |
Bayesian inference for group-level cortical surface image-on-scalar-regression with Gaussian process priors
[
========================================================================================================================================================
In regression-based analyses of group-level neuroimage data,
researchers typically fit a series of marginal general linear models
to image outcomes at each spatially-referenced pixel.
Spatial regularization of effects of interest is usually induced
indirectly by applying spatial smoothing to the data during
preprocessing. While this procedure often works well, resulting
inference can be poorly calibrated. Spatial modeling of effects
of interest leads to more powerful analyses; however, the
number of locations in a typical neuroimage can preclude standard
computation with explicitly spatial models.
Here we contribute a Bayesian spatial regression model for
group-level neuroimaging analyses.
We induce regularization of spatially varying
regression coefficient functions through Gaussian process
priors. When combined with a simple nonstationary model
for the error process, our prior hierarchy can lead to more
data-adaptive smoothing than standard methods. We achieve
computational tractability through Vecchia approximation of our
prior which, critically, can be constructed for a wide class of
spatial correlation functions and results in prior models that
retain full spatial rank. We outline several ways to work with
our model in practice and compare performance against standard
vertex-wise analyses. Finally we illustrate our method in
an analysis of cortical surface fMRI task contrast data from a large
cohort of children enrolled in the Adolescent Brain Cognitive
Development study.
§ INTRODUCTION
Modern large-scale neuroimaging studies collect massive amounts of
data, often across thousands of patients, sometimes across
several years
<cit.>.
Typically these studies collect multiple
structural and/or functional scans, with the aim to probe
relationships between the images and patient-level characteristics.
We focus here on an image-on-scalar regression treatment for this
general framework, where patients' images are taken to be the
response, and covariates are individual-level scalars.
Since neuroimages are spatially referenced data, we can cast the
image-on-scalar problem as a functional regression of the form,
y_i() = _i() + ω_i() +
ϵ_i().
In (<ref>) we take y_i() to be the imaging outcome for
patient i (i = 1, …, N) at location ∈, and
coefficients of interest (·) : →^P
are treated as spatially varying. Further, we decompose the error into
a sum of ω_i(·) and ϵ_i(·) terms, where
ω_i(·) reflects individual-level deviations from the mean
with an assumed spatial structure, and ϵ_i(·) is taken to
be a white noise process.
Many classical analysis methods in imaging can be cast within this
framework. For example, in the typical group-level functional magnetic
resonance imaging (fMRI) analysis, the
y_i(·) might represent contrasts of parameter estimates from
within-participant first level time series analyses, and
_i ∈^P might include an intercept term along with any
relevant covariate information.
Often, univariate models are fit marginally to the data
from each location in practice
<cit.>.
This procedure tremendously simplifies estimation by avoiding modeling
spatial correlations in (·) and ω_i(·), but can
lead to poorly calibrated inference (for example, see attempts to
improve the power of tests derived from marginal ordinary least
squares models by spatially pooling variance estimates in
).
For model (<ref>) to make sense practically, the
images must have reasonably comparable support in the spatial domain
. Though it is still an area of active research, a
tremendous amount of study has focused on methods to preprocess
raw neuroimage data to help coregister the images across patients and
data collection sites
<cit.>.
In particular, certain neuroimage preprocessing tools compute
state-of-the art cross-subject alignment of cortical features by first
mapping each hemisphere of the cortex onto the surface of a sphere
with minimal distortion
<cit.>.
Fig. <ref> gives an example of such a mapping.
This procedure standardizes the spatial support for each hemisphere of
cortex, and has already been shown to lead to reduced spatial
signal contamination and result in more sensitive analyses
<cit.>.
Part of the gain from this methodology is due to the natural
construction of a gray matter surface-based coordinate system which
more accurately reflects the topology of primate cortex versus simple
Euclidean distance in 3D space <cit.>.
Recently, within the statistical community,
<cit.> highlighted this preprocessing pipeline by
developing a cortical-surface-on-scalar regression model for
task-based fMRI data. In their paper <cit.>, the
authors propose a joint multi-subject spatio-temporal regression
model, model their spatial regression coefficients with Gaussian
random fields, and derive an integrated nested Laplace approximation
routine for approximate Bayesian inference. Per their data
application, develop their
model primarily for analysis of multi-subject fMRI data where the
number of subjects is not large <cit.>.
Such joint multi-subject spatio-temporal methods are not easily
extensible to large-scale imaging studies.
The number of spatial locations in a conventional neuroimage typically
precludes Bayesian computation in most computing environments
except by methods that either approximate (a) the spatial process by
low-rank projection or downsampling, or that approximate (b) the
posterior distribution with variational or Laplace family
approximations
<cit.>.
In general, low-rank projection methods can tend to
miss or over smooth local features in data
<cit.>, and both low-rank projection and
variational approximation can commonly underestimate posterior
variance <cit.>.
Integrated nested Laplace approximation, moreover, is thought to
give accurate and scalable approximations within a wide class of
posterior distributions <cit.>, but
its accuracy can sometimes suffer when model structure is complex
<cit.>.
Here, we expand on this body of work and show how a Bayesian model
with a prior hierarchy related to that in <cit.> can
permit estimation of coefficient functions that are realizations of a
full-rank spatial process. To be able to extend our method to
large-scale imaging studies we contribute a spatial regression model
intended primarily for group-level analyses of data indexed by
locations on the cortical surface. In the context of group-level fMRI
studies, for example, our method could simply be “plugged in” at the
classical second-stage analysis, with individual-level task contrast
images taken to be the response.
Our method can also be flexibly applied to analysis of
cortical thickness outcomes, or other structural indicators.
We model the probability law governing prior uncertainty in the
functions (·) and ω_i(·) with Gaussian
processes. Posterior computation is enabled by Vecchia approximation
of the spatial process
<cit.> and empirical Bayesian estimation of the
spatial process hyperparameters.
Our model can be reasonably fit to the data from whole hemispheres of
cortex using fast optimization or scalable Markov chain Monte Carlo
(MCMC) routines without the need to downsample the original
data. Additionally, we elaborate on an approximate working model and
related Bayesian sampling scheme with computational complexity that
scales almost independently of N, further allowing our method to be
viable for application to large-scale neuroimaging studies.
Model computation with MCMC permits posterior inference on the
spatial extent of activation regions with simultaneous credible bands,
which can facilitate spatial inference that is inherently adjusted
for multiplicity.
We show our method's accuracy and sensitivity
estimating the spatial coefficient functions in simulation. Finally,
we use our method to analyze n-back task contrast data (z-statistic
images) from the second annual release of the Adolescent Brain
Cognitive Development (ABCD) imaging collective data.
The body of this paper contains an elaboration of our spatial
regression model hierarchy at the beginning of section
<ref>. Sections <ref>,
<ref>, and
<ref> develop schemas of using
this model and a related working model in practice.
Performance of these approaches is compared to standard
vertex-wise univariate regression methods in section
<ref>. We next illustrate use of our working model in a
real fMRI task contrast data analysis in section <ref>.
Section <ref> concludes with a discussion of method
limitations and possible extensions.
§ METHODS
Throughout this work, we assume the single hemisphere, cortical
surface-based, spherical coordinate system of
<cit.>. By isolating data from the cortical
sheet we gain anatomical specificity and a better connection to
the underlying neurobiology.
As has been discussed by
<cit.>,
geodesic distances along the cortical surface are
more meaningful than, say, simple Euclidean distances in the compact
3D volume. This is due to the fact that primate cortex is thought to
be organized by function topographically
<cit.>, and exhibits a folded structure
in higher mammals to accommodate a larger cell body area
<cit.>.
We simplify notation by considering the left and right hemispheres of
cortex as separate outcomes in separate analyses.
Let denote the set of coordinates on a sphere with a
known radius R, and let 𝒮⊂ denote the
set of vertices for a single hemisphere of cortex at which we have
observed MRI data. For reference, the data in our application have all
been mapped to a normalized template brain space with approximately
30,000 vertices in 𝒮. In native patient brain space,
𝒮 may contain on the order of 150,000 vertices.
For any two , ' ∈, let d(, ')
measure the great-circle distance between and '.
Great-circle distance is sufficient for our purpose; more generally,
however, might represent some topological surface, etc.,
and d(·, ·) any appropriate metric.
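For concreteness, a minimal Python sketch of this great-circle distance is given below (our illustration only; it assumes the vertices are stored as 3D direction vectors on a template sphere of radius R, e.g., in mm):

import numpy as np

def great_circle_distance(v1, v2, R=100.0):
    # geodesic distance between two points on a sphere of radius R
    u1 = np.asarray(v1, dtype=float); u1 /= np.linalg.norm(u1)
    u2 = np.asarray(v2, dtype=float); u2 /= np.linalg.norm(u2)
    # clip guards against round-off pushing the dot product outside [-1, 1]
    return R * np.arccos(np.clip(np.dot(u1, u2), -1.0, 1.0))

print(great_circle_distance([1, 0, 0], [0, 1, 0], R=100.0))   # quarter circumference, ~157.1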
Beginning from (<ref>), we model the data likelihood as
multivariate Gaussian with a particular error structure.
We assume:
y_i() ∼( _i() + ω_i(),
σ^2()), i = 1, …, N, and ∈𝒮,
where (μ, Σ) denotes the Normal distribution with
mean μ and variance Σ; _i ∈^P are covariates;
(·) : →^P are the primary effects of
interest; and ω_i(·) : → reflect
individual-level deviations from _i(·). Conditional on _i, (·), and
ω_i(·), we model the errors as a non-stationary white noise
process with spatial variances denoted by
σ^2(·) : →_>0. Given the nature of
typical data in group-level functional or structural MR image
analyses, this data-level model may be sufficient for a variety of
studies.
Spatial dependence in our model arises entirely through our prior
hierarchy on the effects (·) and ω_i(·).
Let C_{d(, ')} denote a positive
definite stationary spatial correlation function defined on
with parameter . For simplicity, we
drop the subscript throughout and use C(·) to
represent a correlation function with implicit dependence on
. We specify the prior distributions of each spatially
varying coefficient function β_j(·) as mean zero Gaussian
processes with marginal variances ζ_j^2 τ^2, i.e.,
β_j() ∼(0, ζ_j^2 τ^2 C{d(, ')}),
j = 0, …, P-1.
This class of prior for functional regression coefficients has been
adopted by <cit.> for general spatial regression
problems. We write the coefficient processes this way without loss of
generality: while zero mean processes are reasonable in our
application (where outcomes are task contrast z-statistic images,
see section <ref>),
data from other imaging modalities may
require centering at the global mean for zero mean priors to make the
most sense.
We treat the individual-level deviations ω_i(·)
as spatially varying random effects with mean zero and marginal
variance τ^2,
ω_i() ∼(0, τ^2 C{d(, ')}).
Next, we specify a relatively simple nonstationary process for the
error precisions,
σ^-2() |ξiid∼Gamma(1/2, ξ), ξ∼Gamma(1/2, 1),
using the shape-rate parameterization of the Gamma distribution.
To complete our model hierarchy, we place weakly informative priors
on the remaining spatial variance components,
τ^-2∼Gamma(1, 1/2), ζ_j^-2iid∼Gamma(1, 1/2).
As noted above, the correlation function C(·) can in
general be any positive definite kernel function defined so that
C(0) = 1 and C(α) ≤ 1 for all
α > 0. Given the substantial history of Gaussian smoothing in
applied MRI analysis, we will work chiefly with the two parameter
exponential radial basis function,
C(α) = exp( -ψ|α|^ν ),
= (ψ, ν), ψ > 0, ν∈ (0, 2],
which is stationary, isotropic, and synonymous with the Gaussian
kernel when ν = 2. In (<ref>), ψ is sometimes called
the bandwidth or inverse length-scale parameter and controls how
rapidly the correlations decay, and ν is the kernel exponent or
smoothness parameter. Alternative correlation functions could be used
just as easily. We will discuss one data-driven way
the correlation function might be selected in practice in section
<ref>; the same method can also be used to
estimate the correlation parameters for a given functional
family.
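To make the roles of ψ and ν concrete, the short sketch below (illustrative only; the full-width-at-half-maximum is obtained simply by solving C(α) = 1/2) evaluates this correlation function and its implied FWHM:

import numpy as np

def corr(alpha, psi, nu):
    # C(alpha) = exp(-psi * |alpha|^nu)
    return np.exp(-psi * np.abs(alpha) ** nu)

def fwhm(psi, nu):
    # C(alpha) = 1/2 at alpha = (ln 2 / psi)^(1/nu), so FWHM = 2 * (ln 2 / psi)^(1/nu)
    return 2.0 * (np.log(2.0) / psi) ** (1.0 / nu)

print(corr(5.0, psi=0.17, nu=1.38))   # correlation at a 5 mm geodesic distance
print(fwhm(psi=0.17, nu=1.38))        # roughly 5.5 mm for the estimates reported in our data analysis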
§.§ Conditional Model
We outline two ways of working with model (<ref>) in our
setting, and also study the relative behavior of an approximate
working model with connections to the standard vertex-wise analysis
framework. The regression model that we have outlined is difficult to
work with without simplification for two reasons. The first and
perhaps most obvious reason is the dimension of the parameter
space. Computational strategies for spatial modeling typically involve
decomposition of a dense spatial covariance matrix.
In our case, naive decomposition of the joint covariance of
the β_j(·) and the ω_i(·) would be an
(M^3(N + P)^3) operation, where M is number of vertices in
𝒮, and N and P are the sample size
and number of regression predictors, respectively. In Bayesian
sampling algorithms, this decomposition often needs to be recomputed
for each sample, which is prohibitively expensive in our
setting.
The other difficulty working with the model as written is that
decomposing the error structure into the sum of two spatially varying
terms (i.e., the ω_i(·) and the ϵ_i(·))
renders the whole model at best weakly identifiable.
As we lay out in greater detail in the Supporting Information, we
overcome the first difficulty by using a conditional independence
approximation to the model parameters' spatial covariance, inducing
sparsity in the parameters' spatial precision.
This type of approximation can
greatly reduce the computational burden while retaining a covariance
structure with full spatial rank, leading to high accuracy and
scalability <cit.>. We overcome the second difficulty in several
different ways, and we first introduce what we term the
“conditional” approach to working with our model.
To explain our conditional estimation strategy, we first observe that
if we knew the correct ω_i(·) the remaining terms in the
model would be relatively easy to estimate. For this approach, our
strategy will be first to obtain an approximate maximum a posteriori
estimate of the ω_i(·), and second to condition on those
estimates, sampling the other model parameters in an Empirically
Bayesian way. To obtain these estimates, we work with an approximate
model that considers σ^2() ≡σ^2 constant over
all vertices in 𝒮, and alternate conditional maximization
of (·) and
the ω_i(·) until convergence.
Once we have obtained our estimate of the
ω_i(·) in this way we simply subtract the ω_i(·)
from the y_i(·), and switch to an efficient Bayesian sampling
algorithm for the remaining parameters in the model.
§.§ Marginal Model
Alternatively, since the individual deviations ω_i(·) are
not typically of direct interest, we can first integrate them out,
leading to a marginal model with respect to the β_j(·),
σ^2(·), etc. Marginalizing out the ω_i(·) is
relatively straightforward given the conjugacy in our model hierarchy.
Marginalization leads to the equivalency,
y_i() = _i() + ϵ_i^*(), ϵ_i^*() ∼(0, H{d(, ')}),
when H{d(, ')} = τ^2 C{d(, ')} +
σ^2() {d(, ') = 0}, and (𝒜) is the
event indicator function ((𝒜) = 1 if event 𝒜
occurs, and 0 otherwise). A computational approach to working with
model (<ref>) can then follow by additional
application of a conditional independence approximation
<cit.> to the
covariance of the ϵ_i^*(·).
In the Supporting Information, we outline a
means of computing with model (<ref>) based on
estimating , τ^2, and σ^2(·) in an
Empirically Bayesian way. Briefly, we take a two stage approach to
computation, first obtaining approximate (up to
optimization tolerance) maximum a posteriori estimates of
(·), , τ^2, and σ^2(·). Second,
we fix the covariance parameters , τ^2, and
σ^2(·) at their approximate posterior modes and switch to
an efficient MCMC routine to sample from the conditional posterior of
(·).
§.§ Working Model
We also introduce a third, working model as a way to obtain
approximate inference on the β_j(·). In general, including
the ω_i(·) as a separate correlated error component will
not influence standard estimators of the center of the posterior of
the β_j(·), such as the posterior mean. If out of sample
prediction of imaging outcomes is not a goal of the analysis,
then the primary reason to include a spatially correlated error
component is to reduce the posterior variance of the
β_j(·). In a large data setting, the gain in efficiency from
including a correlated error component can be minimal to negligible. A
natural question, then, is: if
we replace the likelihood in (<ref>) with the
approximation,
y_i() = _i^w() + ϵ_i^w(), ϵ_i^w() ∼(0, σ^2()),
and keep the prior structure on the β_j^w(·) and
σ^2(·) the same as in (<ref>) and
(<ref>) above, how well does the resulting model
perform? We term this approximation our “working” model,
and note that it can be viewed as a generalization of the
standard vertex-wise GLM analysis paradigm in a spatial Bayesian
context. The model implied by fitting vertex-wise marginal GLMs is a
limiting case of our working model as τ^2 →∞ for select
choices of the correlation function,
C(α) = (α = 0), and (improper) prior on the
σ^-2(·) ∼Gamma(1, 0).
Considering comparisons among our suite of methods, we will show in
simulation that for moderate to large sample sizes, the posterior of
the β_j^w(·) for our working model is quite similar to
that of the β_j(·) from either our conditional or marginal
models.
§.§ Posterior computation
The Supporting Information provides a detailed description of our
approach to posterior computation. Space does not permit us to
elaborate here. Very briefly, we follow work on “Nearest Neighbor
Gaussian Processes”
<cit.>
to develop sparse Vecchia-type approximations to the inverses of key
spatial covariance matrices. These approximations require some
definition of a spatial neighborhood within which the spatial
precisions are non-sparse. For both our simulations and applied
analysis, we have used discs with 8 mm radii to constitute these
neighborhoods. Sensitivity analyses over this choice are presented in
the Supporting Information.
We combine Vecchia approximation with a quasi-Newton
Hamiltonian Monte Carlo (HMC) algorithm to sample from the conditional
posterior of the regression coefficients supported on the fixed
spatial domain 𝒮. Each gradient step of our HMC algorithm
is scaled by a sparse preconditioning matrix related to the prior
precision of the regression parameters on 𝒮.
The algorithm we have briefly outlined above can be used
for efficient posterior computation in very large data sets.
In fact, given sufficient statistics that can be computed
with a single pass through the outcome images, all of the parameter
updates in our working model can be performed without reference to the
original data. This leads to computational time complexity that, save
for an initial data streaming step, is independent of N.
In large data regimes the advantage of this is obvious.
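As a concrete illustration of the streaming step, the following sketch (with a hypothetical image loader, not our actual I/O code) accumulates in a single pass the two sufficient statistics the working-model updates rely on, namely the covariate-by-image cross-product and the per-vertex sum of squared outcomes:

```python
import numpy as np

def stream_sufficient_stats(image_paths, X, load_image):
    """One pass over the outcome images.

    X          : (N, P) covariate matrix, row i matching image_paths[i]
    load_image : callable returning the length-M vectorized image y_i
    Returns XtY (P, M) and the per-vertex sum of squares (length M).
    """
    P = X.shape[1]
    XtY, sum_sq = None, None
    for i, path in enumerate(image_paths):
        y = load_image(path)                       # (M,)
        if XtY is None:
            XtY = np.zeros((P, y.size))
            sum_sq = np.zeros(y.size)
        XtY += np.outer(X[i], y)
        sum_sq += y ** 2
    return XtY, sum_sq

# toy usage with simulated "images" in place of CIFTI/NIFTI files
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(10), rng.normal(size=(10, 2))])
fake_images = [rng.normal(size=100) for _ in range(10)]
XtY, ssq = stream_sufficient_stats(range(10), X, lambda i: fake_images[i])
print(XtY.shape, ssq.shape)
```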
A common applied fMRI use-case when working
with task contrast images is to use a simple set of predictors:
practitioners often fit an intercept-only model, or perhaps
additionally control for select covariates like age and sex. We
benchmarked our working model software for these use cases, analyzing
right hemisphere task contrast data from over 3,000 participants
(≈ 30,000 vertices; see section <ref> for a more
in-depth analysis).
Streaming the images typically took around 100 ms or less per image
(CIFTI/NIFTI-2 file format:
<https://www.nitrc.org/projects/cifti/>).
After streaming, analysis with HMC took around 3.3 min per 1,000
iterations for the intercept-only model, or around 18.8 min per 1,000
iterations for the three predictor model (intercept, age, and
sex). Each analysis used less than 300 Mb of free RAM,
demonstrating the scalability of our approach. We ran this comparison
on a Dell PowerEdge R440 server with
Intel®
Xeon® Gold 6230 processors (2.1 GHz),
limiting our processes to use eight cores each.
§.§ Estimation of and C(·)
Commonly used methods to estimate the spatial correlation function
include variogram or covariogram estimation
<cit.>,
and maximum marginal likelihood methods
<cit.>. These methods can also be used to
select the correlation function itself by taking, for example, the
correlation function resulting in the best fit to the variogram or the
highest marginal likelihood. Here, we have used a maximum marginal
likelihood-based approach for a surrogate model to estimate the
correlation function and corresponding parameters in the spirit of
Empirical Bayes. The Supporting Infomation provides a full description
of this selection method for interested readers. In our analysis
of the ABCD study data (section <ref>), we estimated
= (0.17, 1.38), which corresponds to a sub-Gaussian
correlation function with 5.57 mm full-width-at-half-maximum
(FWHMs).
In addition to likelihood-based or (co)variogram estimation, experience
can guide practitioners selecting C(·) and to a large
extent. In applied imaging it is common to apply
a Gaussian smoothing kernel to data prior to analyses. In part, the
goal of this practice is to approximate a full spatial model for
effects of interest <cit.>.
Commonly applied smoothing kernels are specified by their
FWHMs, which are often chosen to be
within a 4–12 mm range <cit.>.
A 6 mm FWHM Gaussian
kernel, for example, is nominally equivalent to a radial basis
correlation function (<ref>) with bandwidth parameter
ψ = 0.077 and exponent parameter ν = 2. In our setting with
moderate to large N, we often find that posterior inference is
not overly sensitive to the choice of .
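The nominal equivalence quoted above can be verified directly; a short sketch reproducing ψ ≈ 0.077 for a 6 mm FWHM Gaussian kernel with exponent ν = 2:

```python
import numpy as np

def fwhm_to_psi(fwhm_mm, nu=2.0):
    """Map a Gaussian kernel FWHM to the bandwidth psi of
    C(d) = exp(-psi * d**nu), matching exp(-d^2 / (2 sd^2)) when nu = 2."""
    sd = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return 1.0 / (2.0 * sd ** nu)

print(round(fwhm_to_psi(6.0), 3))   # ~0.077
```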
§ SIMULATION STUDY
Our goal in simulation was to compare the performance of our two
methods for estimating our model against our “working model” and the
standard vertex-wise marginal linear model approach.
In all cases, data were simulated from model (<ref>) on
a disc of 2,000 vertices on the cortical surface. We designed our
simulation to mimic spatial smoothness and signal-to-noise ratios
estimated from real data.
Fig. <ref> illustrates this approach.
For each simulated data set, we generated spatially
correlated and sparse
_j = [β_j()]_∈𝒮, j = 0, 1, 2,
by hard thresholding draws from independent Gaussian
processes with 6 mm FWHM exponential correlation functions.
Below, we will also use without the subscript to refer to the
vector of concatenated random fields
= (_0, …, _P).
We set the marginal variance parameter of each Gaussian field drawn
this way to 0.04 and thresholded the result at 0.08 so that each
_j would be approximately 30% sparse on average.
This level of sparsity roughly matches pseudo-sparsity in the real
data we analyze in section <ref>: applying standard
vertex-wise GLM methods with a Bonferroni correction-based p-value
threshold to these data resulted in significant findings over about
70% of the cortical surface.
Since our prior model in (<ref>) is
non-sparse, simulating the _j in this way actually reflects a
setting with slight model misspecification. It is important to
consider such a setting, however, in order to evaluate models using
measures of inferential accuracy.
We treated the field _0 as a spatially varying intercept
parameter and paired _1 and _2 with covariates:
for subject i = 1, …, N, corresponding covariates
(x_1,i, x_2,i)
were drawn jointly from a multivariate Gaussian distribution with
mean zero, unit marginal variance, and correlation parameter 0.5.
In each simulation, we also generated the subject-level deviations
_i = [ω_i()]_∈𝒮 as draws from
independent Gaussian processes with 6 mm FWHM exponential correlation
functions. Again to mirror estimates from the real data, we set the
marginal variance of each _i to 1.75. In a similar vein, we
drew each
_i = [ϵ_i()]_∈𝒮 following a
white-noise process with spatially constant
variance 1.25. Under the above parameter settings, we controlled the
spatial signal-to-noise ratio to be approximately 0.04 (or
equivalently, the spatial R^2 was controlled to be approximately
3.8%). As can be seen from Fig. <ref>, the error terms
_i and _i largely dominate the spatial signal.
Within this regime, we studied the behavior of our various comparison
methods for increasing sample size, replicating the simulation 50
times per sample size.
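A schematic version of this generative recipe, written for a one-dimensional set of vertices rather than a cortical disc but using the same marginal variances, threshold, and noise settings, is sketched below (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2023)
M, N = 400, 100                            # vertices, subjects
coords = np.linspace(0.0, 100.0, M)[:, None]
d = np.abs(coords - coords.T)
length = 6.0 / (2.0 * np.log(2.0))         # 6 mm FWHM exponential correlation
L_rho = np.linalg.cholesky(np.exp(-d / length) + 1e-8 * np.eye(M))

def gp_draw(var):
    return np.sqrt(var) * (L_rho @ rng.normal(size=M))

# sparse coefficient fields: threshold GP draws (marginal var 0.04, cutoff 0.08)
beta = np.stack([gp_draw(0.04) for _ in range(3)])
beta[np.abs(beta) < 0.08] = 0.0

# correlated covariates plus intercept, smooth deviations, and white noise
x = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=N)
X = np.column_stack([np.ones(N), x])
omega = np.stack([gp_draw(1.75) for _ in range(N)])
eps = rng.normal(scale=np.sqrt(1.25), size=(N, M))
Y = X @ beta + omega + eps
print(Y.shape, round(float((beta == 0.0).mean()), 2))   # sparsity near 0.3
```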
In all cases, we then fit our suite of three methods to compare
against the standard vertex-wise GLM conditioning on the true
correlation parameters (or smoothing the outcome images with exactly a
6 mm FWHM exponential kernel in the case of the
standard method).
For comparison, we give the standard vertex-wise analysis paradigm a
Bayesian treatment by replacing our priors on the β_j(·) and
σ^2(·) with independent Jeffreys priors (as alluded to in
section <ref>). Since the full conditional
posterior distributions of the resulting model parameters are quite
easy to sample from, we fit the vertex-wise models using Gibbs
sampling. Working with the model in this fashion allowed us to compare
the standard vertex-wise analysis to our proposed models in terms of
full posterior inference. Namely, we used posterior credible bands as
a way to summarize the joint uncertainty in the β_j(·) over
all vertices simultaneously.
In a spatial modeling context, posterior
credible bands are a natural, fully Bayesian approach to
inference, and can be easily estimated from MCMC samples
<cit.>. Since credible bands reflect
posterior probability statements about
the joint behavior of the β_j(·) for all spatial locations,
inference derived from them is inherently multiplicity-adjusted.
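One common construction of such simultaneous bands scales the pointwise posterior spread by the joint quantile of the maximum standardized deviation across vertices; the sketch below is a generic version of this idea (the cited work describes the construction we rely on, which may differ in detail):

```python
import numpy as np

def simultaneous_band(draws, level=0.80):
    """Simultaneous credible band from MCMC draws of a spatial field.

    draws : (S, M) array, S posterior samples of beta_j over M vertices
    Returns lower and upper band limits, each of length M.
    """
    center = draws.mean(axis=0)
    spread = draws.std(axis=0, ddof=1)
    z = np.abs(draws - center) / spread          # standardized deviations
    m_alpha = np.quantile(z.max(axis=1), level)  # joint quantile of the max
    return center - m_alpha * spread, center + m_alpha * spread

# toy usage
rng = np.random.default_rng(0)
draws = rng.normal(size=(1000, 50)) * 0.1 + np.linspace(-0.3, 0.3, 50)
lo, hi = simultaneous_band(draws, level=0.80)
print(lo.shape, hi.shape)
```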
§.§ Results of simulation comparisons
Table <ref> summarizes the results of our simulation
for increasing sample size. For each method in the table, we report
scaled absolute bias and variance as well as sensitivity and
specificity rates (True + and True -, respectively,
expressed as percentages). Since the scale of each β_j(·)
is the same for all j in simulation, we report absolute bias and
variance as averages over the entire parameter vector , so
that the values in Table <ref> can be interpreted as
the scaled expected point-wise bias or marginal variance of each
β_j(). For example, the absolute bias column reports
10^3 ∑_j, |β̂_j() -
β_j()| / (3 × 2,000), where β̂_j(·) is
the posterior mean estimate for a given method (three predictors;
2,000 spatial locations; scaled by a factor of 10^3 to enhance the
clarity of the table). We constructed example inferential
decisions based on 80% simultaneous credible bands.
The 80% credibility threshold was chosen to represent a selection
that might reasonably be applied in practice rather than by optimizing
any kind of inferential criterion.
In Table <ref>, the True + column corresponds
to the average percentage of cases where the true
β_j() ≠ 0 and the corresponding credible band does not
include zero. Similarly, the True - column reports the
average percentage of cases where β_j() = 0 and the corresponding
credible band covers zero.
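Continuing the band sketch above, the True + and True - summaries amount to checking whether the band excludes zero at truly nonzero vertices and covers zero at truly null vertices:

```python
import numpy as np

def band_classification_rates(beta_true, lo, hi):
    """Sensitivity (True +) and specificity (True -) of a credible band.

    A vertex is 'selected' when its band excludes zero.
    """
    selected = (lo > 0) | (hi < 0)
    nonzero = beta_true != 0
    true_pos = selected[nonzero].mean() if nonzero.any() else np.nan
    true_neg = (~selected)[~nonzero].mean() if (~nonzero).any() else np.nan
    return 100 * true_pos, 100 * true_neg

# toy usage with a made-up truth and band
beta_true = np.r_[np.zeros(20), np.full(30, 0.3)]
lo, hi = beta_true - 0.1, beta_true + 0.1
print(band_classification_rates(beta_true, lo, hi))
```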
The most immediate result of our simulations is that the marginal
method of estimating our model typically leads to the most accurate
posterior mean estimate of in the mean squared error sense.
For all methods under consideration, the point-wise bias dominates the
point-wise variance across all simulation settings. At large sample
size we note that both the marginal and conditional methods of
estimating our model as well as our working model tend to produce very
similar estimates and results. This pattern is explored further in
Fig. <ref>, which summarizes the similarity of the
full posterior distribution of , as estimated with our suite
of methods. It is clear from the figure that differences in estimation
between methods decay as the sample size increases.
Interestingly, at small sample sizes, our conditional and working
model methods have higher absolute bias than the vertex-wise GLM, but
quickly overtake the vertex-wise method as the sample size
increases. Absolute bias does not decrease with increasing N as
rapidly for the vertex-wise GLM as for our suite of methods. The other
major result indicated by our simulations is that, using simultaneous
credible bands for inference, the conditional method of estimating our
model is the most powerful or sensitive among our methods under
comparison. The specificity of this method, however, is modestly
lacking compared to our marginal and working model methods, which make
virtually no false positive errors even at smaller sample sizes.
This pattern of results can be somewhat difficult to summarize. Based
on this simulation, we cannot uniformly recommend any one method
without knowledge of the goals of the intended analysis. If
estimation of is of primary concern, then we would generally
recommend our marginal method if the sample size is small, or any of
our suite of methods at larger sample sizes. If inference is of
primary concern, we might recommend our conditional method, for
example, for its high sensitivity. Alternatively, we might also
recommend either our marginal or working methods for their rather low
false positive to true positive ratio.
§ DATA ANALYSIS
§.§ Description of the data and model terms
To illustrate use of our methodology, we applied our working model to
analyze n-back task contrast data from the ABCD study, release 2.0.1
<cit.>.
A brief comparison of estimation differences between our working,
conditional, and marginal model variants is available for these data
in the Supporting Information.
The ABCD study is the product of a large
collaborative effort to study longitudinal changes in the developing
brain through childhood and adolescence, and to track biological and
environmental correlates of development
<cit.>. Data collection and
processing has been harmonized across 21 research sites in the
continental United States. At the time of writing, the study has
collected baseline environmental, behavioral, genetic, and neuroimaging
data from over 11,800 children between the ages of 9–10
years. Longitudinal data has been collected at six month intervals for
a subset of children in the study; already over 3,600 children have
been enrolled for over 2.5 years. Details regarding study design and
recruitment <cit.>,
neurocognitive assessment <cit.>, and
neuroimage acquisition <cit.> are available in
published literature.
The ABCD data repository grows and changes over time. The ABCD data
used in this report came from NDA study 2573. DOIs can be found at
<https://nda.nih.gov/study.html?id=721>.
Preprocessing of the fMRI task data was accomplished through use of a
published standardized pipeline
<cit.>.
We focus our analysis on the relationships between task-related
activation and individual-level task accuracy, which has been studied
previously <cit.>.
In concert, our analysis controls for various child-level
characteristics and family-level demographic information.
We took 2- vs 0-back task contrast data (z-statistic scale) as our
primary outcome and modeled it as a function of 2-back task accuracy;
child fluid intelligence; child age (months); child gender (binary);
parental education (five levels); parental marital status (binary);
and family income (three levels). We included
first-order interactions between child gender and parental education;
child age and parental education; child age and child gender; child
age and 2-back accuracy; and child gender and 2-back accuracy.
For interpretive purposes,
we centered continuous covariates in the analysis on their respective
in-sample means, and we treated the in-sample modal demographic
categories as baseline (female child from a married household, at
least one parent with a post graduate degree, and household income
greater than $100,000 USD/year).
The intercept parameters in our spatial regression can be interpreted
thus as the expected task contrast image for a typical in-sample
female child of average fluid intelligence that scored 80% correct on
the 2-back task condition.
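A sketch of this kind of design matrix construction is given below; the column names are placeholders rather than actual ABCD field names, and only a few of the covariates and interactions listed above are included (the two-standard-deviation scaling follows the Supporting Information):

```python
import numpy as np
import pandas as pd

def build_design(df):
    """Center continuous covariates at their in-sample means, scale by two
    standard deviations, and add a few of the interactions described in the
    text. Column names here are placeholders, not ABCD field names."""
    out = pd.DataFrame(index=df.index)
    out["intercept"] = 1.0
    for col in ["accuracy_2back", "fluid_iq", "age_months"]:
        z = df[col] - df[col].mean()
        out[col] = z / (2 * df[col].std())
    out["acc_2back_sq"] = out["accuracy_2back"] ** 2
    out["male"] = (df["gender"] == "M").astype(float)   # female = baseline
    out["age_x_acc"] = out["age_months"] * out["accuracy_2back"]
    out["male_x_acc"] = out["male"] * out["accuracy_2back"]
    return out

# toy usage
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "accuracy_2back": rng.uniform(0.6, 1.0, 8),
    "fluid_iq": rng.normal(100, 15, 8),
    "age_months": rng.integers(108, 132, 8),
    "gender": rng.choice(["F", "M"], 8),
})
print(build_design(df).round(2))
```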
Covariates were chosen largely on the basis of known
associations with general n-back task accuracy
<cit.>. In addition, we performed sets of
exploratory analyses in the classic vertex-wise framework without any
spatial smoothing (not shown). These analyses served to help us
visualize and understand several important aspects of the data.
Interested readers can find a more comprehensive report of
non-intercept regression effects in our Supporting Information.
§.§ Summary of primary results
The primary results of our analysis are presented in
Fig. <ref>.
We consolidate the output by focusing on results in the right
hemisphere, and we note that in general results in the left
hemisphere are highly symmetric.
In particular, Fig. <ref> shows the posterior
mean estimate of our model intercept parameters (_0) and also
gives a region of interest level summary of this term.
Regions of interest were taken from the Gordon 2016 cortical surface
parcellation atlas, which was created in part from resting state
functional connectivity maps and naturally groups brain regions within
a network community structure <cit.>. The atlas
delimits 172 brain regions in the right hemisphere (161 in the left),
each grouped within one of 13 functional network communities.
To summarize our model intercept by
brain region, we fit a series of mixed effect models to MCMC samples
of _0, taking advantage of the Gordon atlas's grouped
structure. We used this modeling strategy to obtain region-level
averages of our spatial intercept parameters, where each region-level
average is shrunk towards its network community mean. By repeatedly
fitting this model to each sample of _0, we obtain
Bayesian point and interval estimates for the region-level
averages. The bottom panel of Fig. <ref>
displays point estimates and multiple-comparisons-consistent
95% interval estimates for a subset of regions in
the Gordon 2016 atlas. Although we model and adjust the intervals
based on all 172 regions in the atlas, we only show the region-level
estimates for regions belonging to the following communities:
Cingulo-Opercular network, Default mode network,
Dorsal attention network, Fronto-Parietal network, and “None”
(see Fig. <ref>). Results of this analysis show
that the largest activations occur in regions associated with the
Dorsal attention, Fronto-Parietal, and Cingulo-Opercular networks,
with a handful of regions associated with the Default mode network
also showing significant activations. Similar conclusions were reached
by <cit.> in a smaller, preliminary subset of
these data (N = 517). Data from another large collective imaging
study also support similar results in adults aged 22–37 years
<cit.>.
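Schematically, the region-level summaries can be computed per MCMC draw by averaging the intercept field within each region and shrinking region means toward their network-community means; the sketch below uses a fixed shrinkage weight as a stand-in for the mixed-model weights estimated in our analysis:

```python
import numpy as np

def region_summaries(beta_draw, region, community, shrink=0.5):
    """Average one MCMC draw of beta_0 over vertices within each region,
    then shrink region means toward their community means.

    beta_draw : (M,) field values; region, community : (M,) integer labels.
    `shrink` stands in for the weight a mixed-effects fit would estimate."""
    regions = np.unique(region)
    reg_mean = np.array([beta_draw[region == r].mean() for r in regions])
    reg_comm = np.array([community[region == r][0] for r in regions])
    comm_mean = {c: reg_mean[reg_comm == c].mean() for c in np.unique(reg_comm)}
    shrunk = np.array([(1 - shrink) * m + shrink * comm_mean[c]
                       for m, c in zip(reg_mean, reg_comm)])
    return regions, shrunk

# repeating this over all draws yields point and interval estimates per region
rng = np.random.default_rng(4)
beta_draw = rng.normal(size=300)
region = rng.integers(0, 10, size=300)
community = region // 3                 # toy: a few regions per community
print(region_summaries(beta_draw, region, community)[1].round(2))
```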
Fig. <ref> depicts example spatial inference on
the intercept parameters in the right hemisphere, thresholding at what
we might consider a small to medium effect size. In the figure,
colored regions denote areas where the posterior mean estimate of
|β_0()| is greater than 0.4 (see online version for
color figures).
Since we are modeling z-statistic outcomes, β_0() > 0.4
can be interpreted to mean that roughly 2 out of every 3 “average”
children in our sample would show task-related activation at location
(versus 1 in 3 showing deactivation; the statement can be
reversed for β_0() < -0.4). Regions of darker color in
Fig. <ref> mark areas where our analysis
suggests the probability that |β_0()| > 0.4 is greater than
or equal to 80% simultaneously for all vertices within those
areas. This interpretation is similar to the notion of “upper
confidence sets” from <cit.>. Here, as in
in Section <ref>, we use posterior credible bands
<cit.> to create these inferential
summaries.
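The "roughly 2 out of every 3" reading of the 0.4 threshold follows from the standard normal CDF under a unit-variance working assumption for the subject-level z-values; a quick check:

```python
from scipy.stats import norm

# For z-statistic outcomes, a mean of 0.4 implies the proportion of subjects
# with positive (activation) z-values is Phi(0.4) under a unit-variance model.
print(round(norm.cdf(0.4), 3))   # ~0.655, i.e. roughly 2 in 3
```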
Model residual standard deviations for the right hemisphere are shown
in Fig. <ref>. In general, areas with the highest
residual variance overlap with areas activated in the 2- vs 0-back
contrast (confer from Figs. <ref> and
<ref>). This result indicates substantial
variability in individual responses in these regions. Overall, our
fitted model explained about 6.2% of the total variance in
the task contrast images.
§.§ Goodness-of-fit evaluation
Finally, we assess the fit of our model using posterior predictive
simulation <cit.> and
analysis of model residuals. Selected results of these comparisons are
presented in Fig. <ref>. In the figure, we summarize
the extent of discrepancy between the observed data and posterior
predictions the model would make for replicated data. To do this, we
again leveraged the Gordon 2016 cortical surface parcellation
<cit.> and computed test descriptive
statistics across subjects within each brain region, comparing against
the same statistics computed over synthetic data of the same size
simulated from our fitted model. We explored the
discrepancies in the predictive and empirical data distributions based
on measures of central tendency, spread, and several
quantiles. Absolute differences in the empirical values and the
posterior predictive mean value are shown in
Fig. <ref> for each brain region and for three such
test statistics. To give a sense of scale, in the figure the largest
regional difference is < 0.2 (10th Quantile
panel), whereas the range of the data is approximately -13.7 to
17.1. In general, we found that discrepancies between the empirical
and predictive distributions were extremely low for test statistics
that summarize features of the central bulk of the
distributions. Features in the very tails of the empirical
distribution were less well captured in the predictive distribution,
as might be expected for a Gaussian likelihood model (not shown). In
Fig. <ref>, we also summarize goodness-of-fit by
comparing standardized residual histograms for each brain region. In
the lower panel of Fig. <ref>, we ranked each brain
region by its discrepancy with a normal model and show residual
histograms for the best, median, and worst-case regions. In general,
we again see evidence of excellent model fit throughout the central
bulk of the data. Interestingly, region 192 (worst-case fit) contained
the highest overall mean parameter estimate within the
Dorsal-Attention network community for both the intercept
and linear 2-back accuracy term
(Fig. <ref>;
as in Fig. <ref>,
“192” corresponds to the Freesurfer label for the Gordon atlas
region; see <https://surfer.nmr.mgh.harvard.edu/fswiki>).
This result may indicate, for example, that while the (relatively
simple) model we have used for 2-back accuracy provides a reasonable
fit to the task contrast data across most of the right hemisphere, it
may fail to perfectly encapsulate the complex task-related activation
patterns in this sample.
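A generic version of this posterior predictive comparison draws replicated data from the fitted working model and contrasts region-level test statistics with their observed counterparts; the sketch below uses simulated placeholder inputs:

```python
import numpy as np

def ppc_region_discrepancy(Y, X, beta_draws, sigma2_draws, region,
                           stat=np.median):
    """Posterior predictive discrepancy for one region-level test statistic.

    Y            : (N, M) observed images; X : (N, P) covariates
    beta_draws   : (S, P, M) posterior draws; sigma2_draws : (S, M)
    region       : (M,) integer region labels
    Returns |observed - mean replicated statistic| per region.
    """
    rng = np.random.default_rng(0)
    regions = np.unique(region)
    obs = np.array([stat(Y[:, region == r]) for r in regions])
    reps = []
    for beta, s2 in zip(beta_draws, sigma2_draws):
        Yrep = X @ beta + rng.normal(scale=np.sqrt(s2), size=Y.shape)
        reps.append([stat(Yrep[:, region == r]) for r in regions])
    return np.abs(obs - np.asarray(reps).mean(axis=0))

# toy usage
rng = np.random.default_rng(5)
N, M, P, S = 40, 60, 2, 20
X = np.column_stack([np.ones(N), rng.normal(size=N)])
beta_draws = rng.normal(scale=0.1, size=(S, P, M))
sigma2_draws = np.full((S, M), 1.0)
Y = X @ beta_draws[0] + rng.normal(size=(N, M))
region = rng.integers(0, 6, size=M)
print(ppc_region_discrepancy(Y, X, beta_draws, sigma2_draws, region).round(3))
```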
§ DISCUSSION
Here we propose a Bayesian spatial model for group-level
image-on-scalar regression analyses, and illustrate several ways to
consider working with the model in practice. We also show how the
spatial Gaussian process prior formulation and related approximation
through conditional independence methods can enable flexible and
reasonably efficient computation with MCMC.
Critically, our approach allows us to work with
full-rank spatial processes numerically, and does not rely on lossy
compression schemes like down-sampling or low-rank projection, which
can be dissatisfying in practice <cit.>. We
have shown in simulation that our strategy can improve on the standard
analysis stream in terms of finite sample bias, sensitivity, and
specificity. We have also shown that an approximate working model
produces similar inference on the spatially varying coefficients
(·) in settings with moderate to large N. Our working
model is relatively easy to compute with, and can be thought of as a
generalization of the standard analysis stream. Finally, we illustrate
use of our method on task (n-back) contrast data from the Adolescent
Brain Cognitive Development study.
With the exception of the white-noise component of our model error process,
we express our model as a sum of terms assigned stationary spatial
priors in section <ref>. In general, spatial stationarity
is not considered a realistic assumption for imaging data
<cit.>. Although we
use stationary priors throughout for simplicity, in practice, given
the data the posterior distribution of our model parameters can still
reflect non-stationary processes. In fact, Fig.
<ref>
illustrates clear posterior mean-field non-stationarity: the posterior
means of our model intercept and linear 2-back accuracy rate
coefficients are plainly non-stationary.
Moreover, since our model on the white-noise process
is inherently non-stationary, our prior hierarchy can lead to more
data-adaptive smoothing in the regression coefficients compared to
standard analysis streams where stationary spatial smoothing is
applied, at some level, to the data.
As we alluded to in Section <ref>, it may be of
interest to build extensions to our method to incorporate additional
variance components for more complex or specific study designs.
Such an extension might correspond, for example, to the addition of
random family effects in analyses of the greater ABCD study sample.
While our present model is technically capable
of estimating such effects by pooling the corresponding ζ^2_j
across related terms, including a large number of random spatial
effects in the analysis can be extremely demanding
computationally. One workaround might be to omit modeling spatial
correlation structures for these terms, and treat them as pure
nuisance parameters. At the time of writing, we have not yet studied
the practical consequences of doing so.
§ ACKNOWLEDGEMENTS
The authors would like to thank Mike Angstadt, Dr. Chandra Sripada,
and Dr. Mary Heitzeg, who oversaw preprocessing of the fMRI data we
used to illustrate our methods, and who provided helpful feedback on
our work. This work was partially supported by NIH R01 DA048993
(Kang and Johnson).
§ SUPPORTING INFORMATION
Appendices including Tables and Figures referenced in Sections
<ref> and <ref> are provided below.
Software for our methods is available online at
<https://github.com/asw221/gourd>.
§ POSTERIOR COMPUTATION
We outline our general approach to computation using the working model
in the main text as a running example, since this
variant is the easiest to work with. Posterior
computation with the conditional and marginal models can be
accomplished in very similar fashion.
Since we typically work on a fixed spatial domain 𝒮, let
_j (dropping the superscript w for simplicity)
denote the random field [β_j^w()]_∈𝒮
for j = 0, …, P-1, and let
= (_0, …, _P-1).
Let = [C{d(, ')}]_, ' ∈𝒮
represent the (M × M) spatial correlation matrix such that the
prior on each _j is a zero-mean Gaussian random field with
covariance ζ_j^2 τ^2.
Similarly, let represent the variance of
ϵ_i^w(·), here an (M × M) diagonal matrix with
the σ^2(), ∈𝒮 on the diagonal;
let denote the (N × P) matrix of participant-level
covariates; let _i = [y_i()]_∈𝒮 denote the
vectorized outcome image for participant i; and let
= (_1, …, _N) represent the
(NM × 1) vector of concatenated subject outcomes.
With the data in this “long” format, the model can be conveniently
expressed in terms of Kronecker products.
With = (ζ_0^2, …, ζ_P-1^2), the conditional
posterior variance of can be written,
(|, ·) =
( ⊗^-1 +
^-1⊗τ^-2^-1)^-1,
using shorthand to express conditioning on , ,
, and τ^2.
Since the dimension of grows rapidly with P, it can be
difficult or even impossible to work with
(<ref>) directly. Instead, we outline two
strategies to enable efficient posterior computation at this
scale. The first strategy, as alluded to above, is to replace
^-1 with a sparse approximation ^-1 such that
≈. In doing so, we follow work on the so
called “Nearest Neighbor Gaussian Process”
<cit.>, replacing the
idea of k-nearest neighbors with small neighborhoods of fixed
physical radius r. Briefly, we replace ^-1 with a conditional
independence approximation, enforcing that C̃_ij^-1 = 0
if d(_i, _j) > r for _i, _j ∈𝒮. Similar ideas have been alternately called Vecchia
approximation
<cit.>,
composite likelihood <cit.>, or
Markov random field approximation <cit.>, but in
general can lead to highly accurate and scalable approximations of
full rank spatial models
<cit.>. Working with such an approximation of course
introduces a hyperparameter, r, for the neighborhood radius size. In
practice we found that in a large data setting choice of r had very
little effect on our analysis (see Web Appendix
<ref> for a sensitivity analysis).
In a small N setting, however, when the prior
has more influence on the posterior, r must generally be chosen
large enough to obtain a good approximation of the log
prior. Anecdotally, we found that taking r ≥ 6 mm worked well in
simulation.
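One way to build this kind of radius-based approximation follows the usual nearest neighbor Gaussian process recipe: order the vertices, regress each on its previously ordered neighbors within radius r, and assemble the factorization (I - A)' D^{-1} (I - A). A dense-matrix sketch is given below (a real mesh would use sparse storage and geodesic distances; for this 1-D exponential example the approximation is essentially exact):

```python
import numpy as np

def vecchia_precision(coords, corr_fun, radius):
    """Radius-based Vecchia-type approximation to the inverse of a
    correlation matrix C, built from (I - A)' D^{-1} (I - A).
    Dense arrays are used here only for clarity."""
    M = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = corr_fun(d)
    A = np.zeros((M, M))
    Dinv = np.zeros(M)
    for i in range(M):
        nb = np.where(d[i, :i] <= radius)[0]       # earlier-ordered neighbors
        if nb.size == 0:
            Dinv[i] = 1.0 / C[i, i]
            continue
        w = np.linalg.solve(C[np.ix_(nb, nb)], C[i, nb])   # kriging weights
        A[i, nb] = w
        Dinv[i] = 1.0 / (C[i, i] - C[i, nb] @ w)
    I_minus_A = np.eye(M) - A
    return I_minus_A.T @ (Dinv[:, None] * I_minus_A)

# toy check against the exact inverse on a small 1-D set of vertices
coords = np.linspace(0, 30, 40)[:, None]
corr = lambda d: np.exp(-0.2 * d)
Q = vecchia_precision(coords, corr, radius=8.0)
print(np.max(np.abs(Q - np.linalg.inv(corr(np.abs(coords - coords.T))))))
```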
Although replacing ^-1 with ^-1 in
(<ref>) above lends sparsity and
efficiency to computation in our setting, it can still be burdensome
to evaluate or decompose (<ref>) even for
moderate P. To overcome this issue we propose an approximate
quasi-Newton Hamiltonian Monte Carlo (HMC) algorithm for sampling from
the posterior of , conditional on the other model
parameters. HMC is a hybrid, gradient-based MCMC method that is
often more efficient in high dimensions than other MCMC
algorithms <cit.>, and can be used to
avoid direct computation with the very high dimensional covariance
matrix (<ref>) here. In the general HMC
algorithm, sampling can be improved by scaling the gradients by a
carefully chosen “mass matrix,” . In their highly influential
paper, Girolami and Calderhead showed that the most efficient choice
updates the mass matrix to be proportional to the posterior
Fisher information matrix of the updated parameter
<cit.>. Instead, we can choose to use the
prior information matrix to “estimate” the posterior information in
the spirit of a quasi-Newton algorithm: doing so results in a
computationally tractable and efficient alternative.
Taking ∝ (^-1⊗τ^-2^-1) and
plugging in a sparse approximation of ^-1 as above can result
in dramatic improvement in Markov chain mixing with minimal increase
in computation time. In practice, we found that we need not use the
same ^-1 in as in our approximation of the log
prior. In fact, we found it better to use smaller neighborhood radii
in our construction of , and that keeping the neighborhood radius
within the 2–4 mm range here resulted in the best Markov chain
mixing.
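A stripped-down version of the resulting update is sketched below: the leapfrog integrator is the standard one, with momenta drawn from N(0, M) and kinetic energy p' M^{-1} p, and a small dense matrix standing in for the sparse prior-precision preconditioner described above:

```python
import numpy as np

def hmc_step(theta, log_post, grad_log_post, M_mat, eps, n_leapfrog, rng):
    """One preconditioned HMC update. M_mat plays the role of the mass
    matrix; in our setting it would be the sparse prior-precision surrogate,
    but a dense matrix is used here for clarity."""
    M_chol = np.linalg.cholesky(M_mat)
    p0 = M_chol @ rng.normal(size=theta.size)        # momentum ~ N(0, M)
    theta_new = theta.copy()
    p = p0 + 0.5 * eps * grad_log_post(theta_new)    # leapfrog half step
    for l in range(n_leapfrog):
        theta_new = theta_new + eps * np.linalg.solve(M_mat, p)
        if l < n_leapfrog - 1:
            p = p + eps * grad_log_post(theta_new)
    p = p + 0.5 * eps * grad_log_post(theta_new)
    log_accept = (log_post(theta_new) - 0.5 * p @ np.linalg.solve(M_mat, p)
                  - log_post(theta) + 0.5 * p0 @ np.linalg.solve(M_mat, p0))
    return theta_new if np.log(rng.uniform()) < log_accept else theta

# toy target: standard bivariate normal with an identity preconditioner
rng = np.random.default_rng(2)
log_post = lambda t: -0.5 * t @ t
grad = lambda t: -t
theta = np.zeros(2)
for _ in range(200):
    theta = hmc_step(theta, log_post, grad, np.eye(2), 0.1, 10, rng)
print(theta.round(2))
```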
§.§ Computation for our working model
In this section we begin with a description of our posterior
computation strategy for the working model, and then proceed by
showing how this general plan can be modified to estimate the
coefficients of our conditional and marginal model variants.
Let represent the variance of
ϵ_i^w(·), here an (M × M) diagonal matrix with
the σ^2(), ∈𝒮 on the diagonal;
let denote the (N × P) matrix of participant-level
covariates; let _i = [y_i()]_∈𝒮 denote
the vectorized outcome image for participant i; and let
= (_1, …, _N).
Finally, let = (ζ_0^2, …, ζ_P-1^2).
To help stabilize our computational steps, we first compute a rank
revealing decomposition of the covariate matrix . We will work
here with the singular value decomposition (SVD)
=, though the QR decomposition and its rank
revealing variants, etc. would work in the same way.
In general, computing the SVD is an
(N P^2) operation when P ≤ N; even for relatively large
P computing the SVD of takes minimal time
compared to MCMC. For simplicity, we will assume here that is
full column rank. Let = (⊗_M)
denote our parameter of interest, rotated by . The effective
prior on is simply,
∼( 0, ⊗τ^2 ),
which, as noted in the main text, can be efficiently approximated by
plugging in a sparse matrix ^-1 such that
≈. Given C(·) we can easily
construct such a ^-1 following recipes from
<cit.>. In turn, the log prior and its
gradient can be approximated via,
ln(|, , τ^2) ≈ -1/2(
^-1⊗τ^-2^-1) + K(, , τ^2),
where K(, , τ^2) is the log normalization constant
and,
∇_ln(|, , τ^2) ≈
- ( ^-1⊗τ^-2^-1) .
Kronecker identities facilitate numerical evaluation of these
quantities.
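The identities in question are of the form (A ⊗ B)vec(V) = vec(B V A'), with column-major vec, which allows quantities such as the prior gradient to be computed from two small matrix products without ever forming the Kronecker product itself; a sketch with a numerical check:

```python
import numpy as np

def kron_matvec(A, B, v):
    """Compute (A kron B) @ v without forming the Kronecker product,
    using (A kron B) vec(V) = vec(B V A^T) with column-major vec."""
    p, m = A.shape
    q, n = B.shape
    V = v.reshape(n, m, order="F")        # columns of V stack to give v
    return (B @ V @ A.T).reshape(q * p, order="F")

# check against the explicit Kronecker product on small random matrices
rng = np.random.default_rng(6)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(5, 5))
v = rng.normal(size=15)
print(np.allclose(kron_matvec(A, B, v), np.kron(A, B) @ v))   # True

# e.g. the (negated) prior gradient is kron_matvec(Lambda_inv, Cinv / tau2, gamma)
# for Lambda = diag(zeta^2) and a sparse approximation Cinv.
```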
Similarly, the log likelihood can be rewritten in terms of
. Up to the integration constant, the log likelihood of our
working model can be written,
ln(|, ) = -1/2γ( ^2 ⊗^-1) +
( ⊗^-1)
-1/2( _N ⊗^-1) .
From this expression, it can be seen that the part of the log
likelihood that includes
depends on the data only through the sufficient statistic
(⊗_M).
This implies that, within our working model framework, gradients and
Metropolis-Hastings ratios can be computed efficiently with respect to
.
Similarly, it can be shown that the residual sum of squares depends on
the data only through
(⊗_M) and an additional sufficient
statistic, ∑_i _i^∘ 2, where we use
^∘ b = (a_i^b) to denote element-wise or Hadamard
exponentiation. This additional fact suggests that σ^2(·)
can be easily updated without reference to the original data. With
these two pieces in hand, we write our posterior computation algorithm
to alternate updating through Hamiltonian Monte Carlo (as
discussed in the main text), and updating the variance parameters with
Gibbs sampling.
Samples of can easily be rotated back into samples of
by applying the reverse transformation,
= (⊗ I_M).
Within each HMC iteration, we update the
algorithm's mass matrix via,
(, τ^2) = ^-1⊗τ^-2_M^-1,
where _M^-1 is a sparse matrix again constructed so
that _M ≈. We discussed the logic for doing
this in the main text.
§.§ Approximation for our “Conditional” model
Our computational strategy for the conditional method relies on the
observation that the full conditional distribution of the _i
is relatively easy to work with. Although it is too burdensome to
fully sample the _i at each iteration of an MCMC routine, it
takes only a modest amount of time to find a maximum a posteriori
(MAP) estimate of the _i given an estimate of . As we
have shown above, gradient-based updates are efficient to compute for
in our working model. We first obtain an approximate MAP
estimate of using our working model with the restriction that
σ^2() ≡σ^2 for all locations
∈𝒮. An estimate of this parameter can be computed
quite quickly using gradient ascent. With estimates of ,
τ^2, and in hand, the _i can be set to their
conditional posterior mode analytically,
_i (τ^-2^-1 + ^-1)^-1^-1{_i - (_i⊗_m) }.
To do this, we again construct a sparse, Vecchia-type approximation of
the matrix (τ^-2^-1 + ^-1)^-1.
Maximizing with respect to and _i can be iterated if
necessary for convergence. Once we have a satisfactory estimate of
_i, we can easily subtract it from _i and switch to our
working model HMC algorithm for inference on if desired.
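The analytic mode above is a ridge-type smoother of each subject's residual image; a dense-matrix sketch is given below (our implementation instead uses the sparse Vecchia-type approximation):

```python
import numpy as np

def omega_map(residual, C, tau2, sigma2):
    """Conditional posterior mode of a subject's deviation field omega_i
    given residual_i = y_i - X_i beta_hat, spatial correlation C, scale
    tau2, and per-vertex noise variances sigma2. Dense version for clarity."""
    Sigma_inv = np.diag(1.0 / sigma2)
    A = np.linalg.inv(C) / tau2 + Sigma_inv
    return np.linalg.solve(A, Sigma_inv @ residual)

# toy usage on a small 1-D vertex set
rng = np.random.default_rng(7)
M = 50
d = np.abs(np.subtract.outer(np.arange(M), np.arange(M))).astype(float)
C = np.exp(-0.3 * d)
resid = rng.normal(size=M)
print(omega_map(resid, C, tau2=1.75, sigma2=np.full(M, 1.25))[:5].round(3))
```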
§.§ Approximation for our “Marginal” model
Rather than fix a point estimate of the _i as above, our
strategy for the marginal model will be instead to obtain a fixed
estimate of the correlated error variance—= τ^2 + in the main text—and use this estimate in our general HMC
algorithm (described above).
To compute with the marginal method, we first obtain an initial
estimate of using gradient ascent in our working model
approximation as above. With this estimate in hand, we can estimate
the marginal or sill variance (τ^2 + σ^2()) for each
location using the standard formula
∑_i {y_i() - _i()}^2 / (N - 1).
Then, again following <cit.>, it is
straightforward to construct a Vecchia-type
approximation ^-1 such that
≈, and so that contains our
estimates of the spatial sills on the diagonal. To work with
MCMC, can simply be substituted in place of in
our working model HMC outline above. For computational savings, we do
not update over MCMC iterations when we work with the
model in this way.
§ ESTIMATION OF THROUGH MAXIMUM MARGINAL
LIKELIHOOD
In general spatial kriging applications, it is common to estimate
by maximum marginal likelihood
<cit.>. This can be done,
for example by integrating out the mean model parameters and
optimizing the resulting marginal likelihood with respect to the
covariance and correlation parameters.
Retaining the vector-based notation from our posterior computation
sections and integrating the _j and _i out of
equation (1) in the main text, the marginal log likelihood (less the
integration constant) for our spatial regression model is,
f(|, , , τ^2) =
- 1/2∑_i ln_i +
_i^-1_i _i,
where
_i = τ^2 (1 + ∑_j ζ_j^2 x_ij^2 ) +,
and is the (M × M) sparse matrix with
[ σ^2() ]_∈𝒮 on the diagonal.
Equation (<ref>) can of course be maximized
directly, but at the cost of also solving for M + P + 1 additional
parameters in , τ^2, and the ζ_j^2. Also, from a
practical point of view, it is somewhat undesirable that the marginal
variance of _i depends on _i, implying the need to
re-optimize (<ref>) every time a covariate is
added to or removed from the model. Conceptually, it does not make
much sense to imagine that the spatial correlation structure of the
model mean parameters may change depending on the inclusion or
exclusion of given covariates.
Instead of working with (<ref>) directly, we
choose to estimate by optimizing the marginal
log likelihood for a surrogate simpler model. To estimate ,
we replace (<ref>) above with,
f̃(|, , τ^2) =
-N/2ln (τ^2 + ) +
-1/2∑_i _i (τ^2 + )^-1_i,
which, incidentally, is the unnormalized marginal likelihood for our
working model with an intercept as the only predictor. Equation
(<ref>) can be evaluated approximately
either through use of a Vecchia-type approximation of
the matrix (τ^2 + )^-1, or by down-sampling the
_i to a more manageable number of spatial locations. We chose the
former option in the present paper, and in practice mean-center each
image _i prior to optimization.
While this approach can work well, we have
noticed anecdotally that it can also tend to underestimate the width
of the correlation function. Obtaining a good estimate of in
more complex settings—as in (<ref>)—remains
an open research question. We do not, however, expect inference on
(·) or other model parameters, to be overly sensitive to
the choice of , given reasonable data (e.g. see our
sensitivity analysis in Web Appendix <ref>).
Finally, we have used the gradient-free
optimization routine BOBYQA <cit.> to maximize
(<ref>), which, surprisingly, improved
performance over gradient-based optimizers (both run time and
stability). The BOBYQA algorithm works by iteratively constructing a
quadratic approximation to the objective function at a set of
interpolation points, which are themselves updated as a trust region
is progressively estimated <cit.>. The algorithm
may fail if, for example, (<ref>) exhibits
local behavior that cannot be well approximated by a quadratic
function.
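For intuition, the surrogate objective can be evaluated with one Cholesky factorization per candidate parameter value on a down-sampled (or approximated) vertex set and handed to any derivative-free optimizer; the sketch below fixes the exponent ν and uses scipy's Nelder-Mead purely as a stand-in for the BOBYQA routine:

```python
import numpy as np
from scipy.optimize import minimize

def make_objective(Y, d, sigma2, nu=1.38):
    """Negative surrogate marginal log likelihood as a function of
    (log psi, log tau2), with C(d) = exp(-psi * d**nu) and nu held fixed.
    Y: (N, M) mean-centered images on a down-sampled vertex set;
    d: (M, M) pairwise distances; sigma2: (M,) noise variances."""
    N = Y.shape[0]
    def neg_loglik(log_params):
        psi, tau2 = np.exp(log_params)
        H = tau2 * np.exp(-psi * d ** nu) + np.diag(sigma2)
        L = np.linalg.cholesky(H)
        Z = np.linalg.solve(L, Y.T)                   # whitened images
        logdet = 2.0 * np.sum(np.log(np.diag(L)))
        return 0.5 * N * logdet + 0.5 * np.sum(Z ** 2)
    return neg_loglik

# toy usage on simulated data over a 1-D vertex set
rng = np.random.default_rng(8)
M, N = 40, 60
d = np.abs(np.subtract.outer(np.linspace(0, 30, M), np.linspace(0, 30, M)))
Y = rng.normal(size=(N, M))
obj = make_objective(Y - Y.mean(axis=0), d, sigma2=np.full(M, 1.0))
fit = minimize(obj, x0=np.log([0.1, 1.0]), method="Nelder-Mead")
print(np.exp(fit.x).round(3))   # estimated (psi, tau^2)
```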
§ ADDITIONAL ABCD STUDY DATA DESCRIPTION, ANALYSIS, AND
RESULTS
Data used in the preparation of this article were obtained from the
Adolescent Brain Cognitive Development (ABCD) Study
(<https://abcdstudy.org>), held in the NIMH Data Archive
(NDA). This is a multisite, longitudinal study designed to recruit
more than 10,000 children age 9-10 and follow them over 10 years into
early adulthood. The ABCD Study is supported by the National
Institutes of Health and additional federal partners under award
numbers
U01DA041048, U01DA050989, U01DA051016, U01DA041022, U01DA051018,
U01DA051037, U01DA050987, U01DA041174, U01DA041106, U01DA041117,
U01DA041028, U01DA041134, U01DA050988, U01DA051039, U01DA041156,
U01DA041025, U01DA041120, U01DA051038, U01DA041148, U01DA041093,
U01DA041089, U24DA041123, U24DA041147. A full list of supporters is
available at <https://abcdstudy.org/federal-partners.html>. A
listing of participating sites and a complete listing of the study
investigators can be found at
<https://abcdstudy.org/consortium_members/>.
ABCD consortium investigators designed and implemented the study
and/or provided data but did not necessarily participate in analysis
or writing of this report. This manuscript reflects the views of the
authors and may not reflect the opinions or views of the NIH or ABCD
consortium investigators.
§.§ fMRI Preprocessing
Preprocessing of the fMRI task data was accomplished through use of a
published standardized pipeline
<cit.>.
Briefly, time series acquisitions were filtered with a 0.005 Hz high
pass filter and spatially smoothed with a surface-based 2 mm
kernel. Within patient task-based modeling was accomplished using
tools from FSL <cit.>, removing high motion time
points (frame-wise displacement > 0.9 mm) from the data. Additional
regressors of no interest included 24 total motion parameters (linear
and quadratic terms for each of six estimated motion
parameters—three rotational, three translational—and
their derivatives); five white matter principal components estimated
with CompCor software <cit.>; and five
cerebrospinal fluid principal components also estimated with CompCor.
Contrast images we analyzed in the present paper were derived from the
results of these first-level task-based models.
§.§ Additional Data Description
For this illustration we will work exclusively with data from a subset
of 3,267 children in the baseline cohort that were scanned while
performing an n-back task
<cit.> with pictures of
human faces expressing emotion as stimuli.
The n-back task has enjoyed wide use in the neuropsychological and
imaging community for its relationship with executive function
and as a correlate of working memory processes
<cit.>.
Our subsample of children is limited to those who scored
at or above 60% correct on both 0-back and 2-back task conditions.
Web Table <ref> gives a summary of the
demographic information for this sample.
As a final note,
the ABCD study more broadly contains imaging data acquired from
siblings. Around 20% of families
in the ABCD release 2.0 baseline data have two or more children
enrolled in the study. This might additionally suggest the need for
an analysis with random family effects.
We avoid this issue entirely here: the cohort that we analyze contains
data from only one child per family in our subset.
While our method is capable of estimating effects like this in
general, it would be very slow computationally to give a fully
Bayesian treatment to a large number of random spatial effects.
A more specific tool could be built on top of the methods we present
here to include such variance components and/or treat them as nuisance
parameters.
§.§ Additional Brain Region-Level Coefficient Summaries
Here we complete our report of demographic effects on the 2- vs 0-back
task contrast data from the ABCD study. We again summarize results
from the right hemisphere by averaging over all vertices within brain
regions from the Gordon 2016 cortical surface parcellation
<cit.>. Figures follow the same format as the
primary model intercept and 2-back accuracy rate results figures from
the main text. Results in the left hemisphere were generally highly
symmetric.
As noted in the main text, covariates were chosen on the basis of
known associations with n-back task accuracy
<cit.> and through preliminary exploratory
analyses.
Exploratory analyses served to help us
visualize and understand several important aspects of the data.
First, we observed modest but present nonlinear patterns in the
relationship between the contrast data and 2-back accuracy. Preferring
simplicity here, we found that these trends were reasonably well
characterized by a quadratic model for 2-back accuracy.
Including this term in the analysis resulted in a total of P = 24
predictors including the global intercept.
We scaled each continuous covariate by two standard deviations
<cit.> so that resultant coefficient images are
more directly comparable with coefficient images for categorical
covariates.
Similarly to Fig. 4 in the main text,
Web Fig. <ref> summarizes results for
the effect of 2-back accuracy rate on the 2- vs 0-back
contrast. In the figure, coefficients for the linear and quadratic
accuracy terms reflect the expected change in activation between ten
year old female children scoring 96% and 80% correct on the 2-back
condition, respectively, holding all other demographic covariates
constant. Our analysis suggests high spatial overlap between the
intercept and areas where average activation increased linearly with
increasing 2-back accuracy (confer from main text
Fig. 4 and Web Fig. <ref>).
Interestingly, however, the quadratic accuracy term largely seems to
reflect areas where average activation increased supra-linearly with
increasing 2-back accuracy. Based on our analysis, these areas are
more constrained to regions associated with the Dorsal-Attention and
Fronto-Parietal networks (Web Fig. <ref>).
Since the ABCD data are naturally grouped by the study's
21 data collection sites, we explored the utility of including random
site effects. For these data, the
random site effects explained less than 1% of the total variance in
over 97% of vertices, and less than 0.1% of the variance in nearly
half of vertices. We ultimately concluded that site-specific random
effects do not critically influence results here. Again preferring
simplicity, the results we show here and in the main text do not
include site effects as a variance component.
Web Fig. <ref> displays posterior mean
estimates of site effects for the five largest and five smallest
collection sites.
§ ABCD DATA ANALYSIS: MCMC DIAGNOSTICS AND SENSITIVITY
ANALYSES
In this section we describe additional sensitivity analyses and MCMC
diagnostics we have performed within the scope of the ABCD study
data.
We fit our model with Hamiltonian Monte Carlo (HMC) as noted in
the main text.
For this analysis, we ran eight chains of 7,000 iterations
each, discarding the first 5,000 as adaptation and burnin, and saving
200 samples from the final 2,000 iterations of each
chain. Convergence was assessed via univariate folded and non-folded
rank-normalized split R̂ <cit.> for each
parameter β_j(·), and by visual examination of trace plots
for subsets of these parameters. The folded split R̂ statistic
was below the recommended threshold of 1.01 for over 99.9% of the
β_j(·) (the worst case scenario was 1.02), indicating
reasonable convergence in the posterior spread and tail behavior for
these parameters. Similarly, the worst-case non-folded split R̂
statistic was 1.04 across all β_j(·), indicating reasonable
convergence of the center of the posterior distribution for these
parameters. We set the neighborhood radius of the Vecchia
approximation of our prior precision to 8 mm, and the neighborhood
radius of our HMC mass matrix to 3 mm. While the algorithm can be
quite sensitive to the choice of mass matrix neighborhood radius,
values in the range 2–4 mm led to efficient and well-mixing chains
both here and in simulation.
For readers familiar with Hamiltonian Monte Carlo:
Metropolis-Hastings rates were tuned during burnin to be
approximately 65%; automatic tuning was achieved using the
dual-averaging method presented in <cit.>.
Additionally, we fixed the number of numerical integration steps in
our HMC to 35, which we noted produced well-mixing chains.
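For completeness, the (non-rank-normalized) split R̂ statistic underlying these diagnostics can be computed directly from the chains; the rank-normalized version applies the same formula to rank-transformed (and folded) draws. A sketch:

```python
import numpy as np

def split_rhat(chains):
    """Split R-hat for one scalar parameter.

    chains: (n_chains, n_draws) array of post-warmup MCMC samples.
    (Plain split R-hat; the rank-normalized variant of Vehtari et al.
    applies this formula to rank-transformed draws.)"""
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    split = chains[:, : 2 * half].reshape(2 * n_chains, half)
    chain_means = split.mean(axis=1)
    W = split.var(axis=1, ddof=1).mean()          # within-chain variance
    B = half * chain_means.var(ddof=1)            # between-chain variance
    var_hat = (half - 1) / half * W + B / half
    return np.sqrt(var_hat / W)

# toy check: well-mixed chains should give R-hat near 1
rng = np.random.default_rng(9)
print(round(split_rhat(rng.normal(size=(8, 200))), 3))
```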
We noted (in the main text and in Web Appendix
<ref>) that computationally we use a
specific sparse precision matrix approximation to induce conditional
independence between parameters at locations outside of an
r-neighborhood of each other. A natural question in this context is
how sensitive the analyses are to the choice of the neighborhood
radius r. We briefly explored this question by repeatedly fitting
our working model to the ABCD study data, using a spatial intercept as
the only predictor, and varying r in the construction of our Vecchia
approximation to the prior. Web Fig. <ref> summarizes
the results of this sensitivity analysis. In the figure, the
posterior mean estimate (top row) is not visibly sensitive to the
choice of r within a 2–12 mm range. The uncertainty in the spatial
intercept (bottom row), moreover, is at worst only modestly sensitive
to small r.
A related question is how sensitive results are to the correlation
function parameters . As above, we repeatedly fit our working
model using a spatial intercept as the only predictor. For these
analyses, we fixed our conditional independence neighborhood radius
r = 8 mm and used radial basis correlation functions with exponent
parameter 1.38 as in the main text. Here we varied only the width
of the correlation to probe for sensitivity in the
analysis. Web Fig. <ref> summarizes the results of
this analysis across the varying correlation widths. As before, the
posterior mean (top row) is not visibly sensitive to the width of the
correlation within a 2–20 mm range. The uncertainty in the spatial
intercept (bottom row) is again modestly sensitive to the correlation
width. The estimate of the spatial standard error for the 20 mm
full-width-at-half-maximum correlation appears perhaps deteriorated
(bottom right panel).
We also show an example MCMC convergence diagnostic for our
analysis of ABCD study data from the main text.
Web Fig. <ref> shows representative posterior density
estimates for the linear 2-back accuracy rate coefficient from three
vertices, constructed from 8 HMC chains. In the figure, we have
rank-ordered the selected vertices by the univariate split folded
R̂ statistic <cit.> for MCMC convergence
(left to right, R̂ = 1 to R̂ = 1.01). The posterior
densities show reasonable convergence across the MCMC chains.
Finally, we give an informal comparison of realized estimation
differences arising from use of our conditional, marginal, and working
model variants in practice. For this comparison, we fit our various
models to the real ABCD study data following the protocol described in
the main text.
Web Figs. <ref>
and <ref> summarize the results of this
comparison due to both modeling and algorithmic differences between
the three methods. In particular, Web Fig. <ref>
shows how the posterior means of the β_j() can be
quite similar across our proposed methods despite differences in
estimation strategy. Web Fig. <ref> on the other hand
shows that, relative to our working model variant, marginal posterior
variances of the β_j() were systematically larger for the
marginal model and smaller for the conditional model in these data. We
take these differences at face value here, and note only that in our
simulation studies, both the marginal and working models performed
quite well when data were generated directly from the conditional
model (see e.g. Table 1 in the main text).
|
http://arxiv.org/abs/2306.01616v1
|
20230602152716
|
Blockchain Model for Environment/Infrastructure Monitoring in Cloud-Enabled High-Altitude Platform Systems
|
[
"Khaleel Mershad",
"Hayssam Dahrouj"
] |
cs.CR
|
[
"cs.CR"
] |
Khaleel Mershad (corresponding author), [email protected], Computer Science and Mathematics Department, Lebanese American University (LAU), Beirut, Lebanon
Hayssam Dahrouj, Electrical Engineering Department, University of Sharjah (UoS), Sharjah, United Arab Emirates
The recently accentuated features of augmenting conventional wireless networks with high altitude platform systems (HAPS) have fueled a plethora of applications, which promise to offer new services to ground users, as well as to enhance the efficiency and pervasion of existing applications. Cloud-enabled HAPS, which aims to create HAPS-based datacenters that offer cloud services to users, has particularly emerged as a promising key enabler to provide large-scale equitable services from the sky. Although offering cloud services from the HAPS proves to be efficient, its practical deployment at the stratosphere level still faces many challenges, such as high energy requirements and physical maintenance, and is particularly prone to security threats. Safeguarding the cloud-enabled HAPS against various cyberattacks is a necessity to guarantee its safe operation.
This paper proposes a blockchain model for securing cloud-enabled HAPS networks that contain a large number of HAPS stations against recurring cyberattacks, within the context of the environment and infrastructure monitoring (EIM) application.
To this end, the paper first presents a detailed blockchain framework, and describes the ways of integrating the developed framework into the various system components. We then discuss the details of the system implementation, including the storing and consuming of cloud transactions, the generation of new blocks, and the blockchain consensus protocol that is tailored to the EIM requirements. Finally, we present numerical simulations that illustrate the performance of the system in terms of throughput, latency, and resilience to attacks.
Keywords: High-altitude platform system; cloud computing; blockchain; sensor network; environment and infrastructure monitoring; consensus protocol
§ INTRODUCTION
The promise of High Altitude Platform Systems (HAPS) for enhancing ground-level networks has grown drastically over recent years. In HAPS platforms, a HAPS station is deployed in the stratosphere at altitudes on the order of 18-20 km to enhance the terrestrial network architecture and serve ground users by providing wireless coverage, backhauling small and isolated base stations (BSs), supporting Internet of Things (IoT) applications, assisting intelligent transportation systems (ITS), and handling LEO satellite handoffs <cit.>. HAPS networks have several advantages that distinguish them from other aerial networks such as satellite and UAV networks, e.g., <cit.>. These advantages include enhanced quality-of-service in dense areas, quasi-stationary deployment for robust sky-level communication, reduced round-trip delay, and an expansive wireless footprint <cit.>.
Among the key-enablers which are expected to proliferate HAPS applications is the cloud-enabled HAPS (C-HAPS) <cit.>. C-HAPS allows the HAPS stations to be utilized as flying cloud datacenters that offer cloud services from the sky. Such integration of physical and cloud services improves the cloud scalability and enhances the quality, speed, and range of the offered services for ground users. Further, the HAPS strategic position enables cloud providers to reach out to a larger range of customers.
Moreover, augmenting the HAPS with cloud-computing capabilities makes it possible to better serve remote and disconnected areas either directly or via ground/aerial gateways <cit.> by means of implementing interference management schemes at the cloud level. Such powerful functionalities of cloud-enabled HAPS, in fact, spearhead a handful of cloud-computing applications from the sky, thereby increasing the system storage capacity, improving data processing, reducing costs, and increasing the system scalability. In an integrated large-scale air-ground system, the HAPS can also be viewed as an edge or a fog layer, which serves to better extend the system connectivity and computational capabilities.
Despite the multiple prospects of C-HAPS, several challenges need to be addressed in order to successfully realize the integration of cloud services into HAPS <cit.>. Among such challenges is the system security issues, especially those related to HAPS communications, data, and programs. In general, the security of cloud services is one of the most important factors to ensure their success and continuity. When offered from the HAPS, cloud services need to be carefully designed to prevent attackers from eavesdropping on the connections between the HAPS station and the ground/aerial nodes in attempts to access private data. Equally important is to store the cloud data within the HAPS station in a secure fashion. To this end, this paper proposes a blockchain model as a possible solution for securing the C-HAPS operations. Utilizing the blockchain offers several benefits to cloud platforms, such as data immutability, built-in cryptography, and distributed management. Moreover, the blockchain can safeguard the HAPS services from major cyberattacks such as Man-in-the-Middle, Denial of Service (DoS), unauthorized access, etc.
The model considered in this paper focuses on one specific timely application of C-HAPS, namely the one related to environment and infrastructure monitoring (EIM). The EIM application is usually deployed within a large-scale network that contains multiple wireless sensor networks (WSN) comprising a huge number of sensor nodes. Examples of such networks include forests' WSNs, smart grid WSNs, maritime WSNs, etc. In such networks, several HAPS stations are used to connect to the WSNs via ground sinks/gateways that are equipped with HAPS communication modules. Each HAPS station can be standalone or part of a mini-HAPS constellation. The IoT sensors within a WSN are clustered such that each cluster head (CH) collects the readings of the sensors in the cluster and sends them to the sink/gateway, which in turn forwards them to the nearest HAPS station. Within the HAPS datacenter, the sensors' readings are analyzed and processed to produce results that are consumed by the cloud users. Among the many premises of the EIM application is that it is delay-sensitive, as it deals with emergencies and situations that require fast reaction. Current cloud systems that depend on terrestrial networks fail to satisfy the strict latency requirements of the EIM application while providing strong security mechanisms that are able to detect and prevent cyberattacks. Hence, the motivation behind the proposed system is twofold, mainly to strengthen the security of the EIM cloud service via the blockchain, and to achieve acceptable latency by modifying the blockchain architecture and consensus model. The simulations performed in Section <ref> show that our proposed model reduces the latency of data transactions from 1 sec to 200 ms and is able to detect attacks with an average of 85% as compared to 65% that is achieved by a ground-based system.
The contributions of this paper can be summarized as follows:
* We study the implementation of a large-scale EIM application within the context of C-HAPS. Current systems that implement the EIM utilize the terrestrial networks and cloud datacenters <cit.>, with some systems exploiting unmanned aerial vehicle (UAV) networks to assist in the monitoring operations <cit.>. In the proposed model, the EIM application is deployed within the C-HAPS to make use of the reduced delay characteristic of HAPS. In the simulations, we show the efficiency of this method as compared to the traditional ground implementation. Based on the simulation results, our proposed system reduces the data transaction latency from an average of 1 sec in ground-based systems to 200 ms, which is a crucial improvement that plays a major role in the success of a large-scale EIM application.
* We design and implement a blockchain model for securing the C-HAPS EIM application. The proposed model makes use of the powerful and secure HAPS stations to generate the blockchain blocks, and of ground/aerial gateway stations to validate the blocks and strengthen the blockchain consensus protocol. This method reduces the time needed to add a transaction to the blockchain and satisfies the delay requirements of the EIM application. In addition, the proposed system is much more resistant to attacks as compared to a system that does not utilize the blockchain. From the simulation results, the rate of malicious node detection is higher by 20% (on average) in our system as compared to a traditional cloud system. To the best of our knowledge, this is the first paper of its kind that studies the blockchain as a secure and efficient solution for the C-HAPS EIM application. We prove that the introduction of the blockchain highly strengthens the security of the cloud service.
* We validate the proposed system by testing its performance using the NS-3 network simulation software. The results illustrate the superiority of our model in terms of blockchain throughput, transaction latency, blockchain consensus time, and attack detection rate, as compared to a ground-based similar system.
The remainder of this paper is organized as follows: In the next section, we review the background of C-HAPS and blockchain. Section <ref> summarizes the state of the art related to the EIM application. Section <ref> describes the details of the proposed blockchain system, its integration into the C-HAPS environment, and its transaction management and consensus operations. In Section <ref>, we present the simulations that we performed to test the proposed system using the network simulator 3 tool. Finally, Section <ref> concludes the paper by shedding light on future work and potential enhancements.
§ BACKGROUND
The paper first provides a brief background related to the general prospects of C-HAPS and blockchain technology, before delving into their integration specifics in Section <ref>.
§.§ C-HAPS
As highlighted above, the appealing features of HAPS offer service providers a chance to exploit the strategic location of HAPS to reach a broader range of consumers, by means of augmenting HAPS with cloud computing functionalities. From a system-level perspective, such a C-HAPS would further allow cloud providers to jointly enhance the quality of their services and expand their infrastructure capacities. Given the HAPS long-term vision to connect users in rural and isolated areas (see Fig. <ref>), C-HAPS has the valuable additional potential to ultra-connect users in urban areas by offering extra space and resources to increase the users' quality-of-service (QoS). C-HAPS prospects can be further enhanced to provide several applications and services with a multitude of socioeconomic, environmental, and technological impacts. In <cit.>, we identified several cloud services that can be efficiently deployed within C-HAPS. Examples of these services include:
* Satellite as a Service: Satellite applications can be offloaded to the HAPS cloud. In such cases, customers will be able to consume these services with a better QoS such as reduced delay and enhanced accuracy (such as in localization).
* Sensor as a Service: The HAPS network can enhance and enrich current IoT cloud platforms. HAPS ground gateways can connect Wireless Sensor Networks (WSN) to HAPS stations, which is extremely useful to WSNs that are deployed in isolated areas (such as mountains, deserts, oceans, etc.). In addition, tethered balloons (TBs) can increase the coverage area of the HAPS station to cover the WSN network. The system proposed in this paper focuses on enhancing the performance and security of one of the important applications of this service.
* Transportation as a Service: HAPS nodes possess high storage and processing capabilities and are suitable for storing and analyzing the continuous data streams of smart city applications. In such a scenario, HAPS gateways can be placed at high locations (such as roofs of buildings) and connected to roadside units (RSUs). This allows the application to offload intelligent transportation system (ITS) data to the C-HAPS and process it fairly quickly.
* Aerial Network as a Service: UAV networks in isolated areas can exploit the existence of a HAPS station to connect to the cloud. Some UAV nodes could be equipped with HAPS gateways to enable this connection.
* Other services: Numerous cloud services can be deployed in the C-HAPS once the latter is mature. Examples include Routing as a Service, Gaming as a Service, Social Network as a Service, Crowdsourcing as a Service, etc.
To best assess the gains of C-HAPS vis-à-vis the ever-increasing security attacks facing the information and communication technology advancements, this paper designs a blockchain system to safeguard future C-HAPS platforms. The paper particularly focuses on the application of EIM using IoT, the details of which are explained in Section <ref>.
§.§ Blockchain
A blockchain is a special type of distributed, append-only database built as a series of blocks that are linked via cryptographic hashes. Each transaction is signed by its creator, and the signature is appended to the transaction for validation. A block consists of a header and a body: the header comprises metadata such as the block height, timestamp, hash of the previous block, and Merkle tree root, while the body contains the block transactions. The blockchain nodes that create the blocks are called miners.
Among the main components of the blockchain is the consensus protocol that allows the blockchain nodes to reach an agreement on various blockchain-related decisions. Mainly, when a new block is generated, the consensus protocol is executed by the blockchain nodes. After the protocol terminates, a single decision should be reached at all nodes, which is usually to add the new block to the blockchain or reject it <cit.>. A large number of consensus mechanisms have been proposed, such as Proof of Work (PoW), Proof of Stake (PoS), Practical Byzantine Fault Tolerance (PBFT), etc <cit.>. Each blockchain application requires a specific consensus protocol that suits the applications' needs. In Section <ref>, we describe the consensus algorithm that we propose for the EIM application to guarantee its latency requirements.
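To make the block structure and hash chaining described above concrete, the following minimal Python sketch builds blocks whose headers carry the previous block's hash and a Merkle root of the transactions. It is illustrative only; the field names, hashing choices, and transaction format are assumptions rather than the exact format used in the proposed system.

```python
# Minimal sketch of a hash-linked block with a header and body (illustrative only;
# field names, hashing choices, and transaction format are assumptions).
import hashlib
import json
import time


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def merkle_root(tx_hashes: list) -> str:
    """Pairwise-hash the transaction hashes up to a single root (duplicate the
    last hash when a level has an odd number of entries)."""
    if not tx_hashes:
        return sha256(b"")
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [sha256((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]


class Block:
    def __init__(self, height: int, prev_hash: str, transactions: list):
        tx_hashes = [sha256(json.dumps(tx, sort_keys=True).encode())
                     for tx in transactions]
        # Header: metadata only (height, timestamp, previous hash, Merkle root).
        self.header = {"height": height, "timestamp": time.time(),
                       "prev_hash": prev_hash, "merkle_root": merkle_root(tx_hashes)}
        # Body: the transactions themselves.
        self.body = transactions

    def hash(self) -> str:
        return sha256(json.dumps(self.header, sort_keys=True).encode())


# Chaining: each block stores the hash of the previous block's header, so any
# tampering with an earlier block invalidates all subsequent links.
genesis = Block(0, "0" * 64, [{"sensor": "s1", "reading": 21.5}])
block1 = Block(1, genesis.hash(), [{"sensor": "s2", "reading": 22.1}])
```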
A large number of sectors and applications have exploited the blockchain to secure their data. Examples of these sectors include finance, healthcare, agriculture, supply chain, etc. Among these sectors is the cloud, as the use of blockchains in cloud computing is a growing research direction that is expected to expand rapidly across fields. There are many benefits of blockchain technology concerning cloud computing, including those associated with business data handling, encryption, and privacy. While previous works proposed several methods for implementing the blockchain in ground-based cloud systems, to the best of our knowledge, this paper is the first attempt to integrate the blockchain into C-HAPS. Such integration is expected to strengthen the C-HAPS system by making the data stored in the C-HAPS data center immutable, which would increase the customers' trust in the system. In addition, we propose a permissioned private blockchain network, in which only customers who are certified to access the blockchain service would be able to execute the smart contract of the service and consume it. Moreover, we describe a method that encrypts the sensors' data while in transit. The combination of security measures described in this paper, including the blockchain, secures the C-HAPS system and data to a high degree from major cyberattacks and provides the required characteristics of C-HAPS services, such as authentication, confidentiality, and privacy.
However, several challenges need to be considered and tackled in order to realize a successful blockchain integration into C-HAPS. Most of these challenges are related to the HAPS characteristics, such as the line of sight (LOS) requirement and the physical maintenance of the HAPS stations. In addition, integrating the blockchain would increase the energy requirements of the HAPS stations. Furthermore, with the expected limited number of HAPS stations that are deployed in C-HAPS, the scalability of the system should be studied as more ground nodes join the blockchain and the number of transactions increases. In this paper, we focus on studying the last aspect and test several scenarios in which the number of cloud users is varied. As illustrated later in Section <ref>, the proposed system is highly scalable, and its performance is only slightly affected as more sensor nodes are added to the network.
§ LITERATURE REVIEW
In this section we review the state of the art related to the EIM application, focusing on previous research works that proposed blockchain and/or HAPS-based systems for EIM. Dong et al. <cit.> propose an integrated HAP-satellite (IHS) architecture for emergency scenarios with the aim of providing information transfer services for remote sensor devices. In the proposed system, the transmission power requirements of the terminal end and HAP end are investigated in a slow flat Rician fading channel. In addition, an energy-efficient transmission strategy for the energy-efficient path selection is designed using the concept of link-state advertisement (LSA) to reduce energy consumption at both ends.
The authors in <cit.> propose an Ant Colony Optimization (ACO)-based system for WSNs that are used to detect forest fires. The proposed system considers a multi-sink-based clustered WSN model in which mobile sink(s) can traverse the sensing field to collect data from cluster heads. The main objective is to overcome situations when there is a failure at one or more data collection points and hence achieve high fault tolerance. The ACO algorithm seeks to find the optimal assignment of cluster heads to mobile sinks in order to reduce energy consumption.
Several papers investigated the idea of utilizing the Internet of Drones (IoD) within the context of the EIM application. Liao et al. <cit.> propose a consortium blockchain system to guarantee trusted collaboration between controllers of software-defined IoD. The system makes use of multi-UAV networks to monitor the environment of a smart city. The IoD cloud services are offered via smart contracts that are managed by a group of drone controllers. In addition, the authors propose the proof of service guarantee (PoSG) consensus protocol in which each service provider elects some trusted controllers as representatives to participate in the consensus process. Next, the controllers elect the block creator and verifiers based on resource guarantees.
A sparsity-optimized spatiotemporal data aggregation model for large-scale disaster monitoring through WSNs is proposed in <cit.>. The system includes a UAV identity authentication mechanism that ensures the security of transmitted data within the context of a disaster semantic blockchain (DSB). The disaster semantics are obtained by exploring the semantic association relationship of disaster, background, event, and sensor data. Sensor nodes are divided into cluster members, cluster heads, and relay nodes. In addition, data flows are categorized into three levels: multi-cluster, isolated-cluster, and isolated node. A spatiotemporal data aggregation mechanism is performed at the cluster head level, and a temporal data aggregation process is performed by the isolated nodes.
A blockchain system for distributed Network Function Virtualization (NFV) is proposed in <cit.>. The proposed system (MOMEC) is implemented within the context of Mobile Edge Cloud (MEC) to reach consensus among multiple NFV Management and Orchestration (MANO) systems. The authors formulate the resource allocation problem as a multi-objective optimization challenge by considering the trust features of blockchain nodes and NFV MANO systems, as well as the blockchain computational capability. The authors propose a Deep Reinforcement Learning model to solve the resource allocation problem by improving the blockchain throughput and reducing the cost of providing user services.
The system proposed in <cit.> utilizes a blockchain model within the context of infrastructure monitoring to collect construction pollutant data via WSN sensors and execute smart contracts to automatically monitor the level of construction pollutants and evaluate the environmental performance. In the proposed system, site inspection forms are uploaded to the blockchain system from the supervisor node and automatically translated to smart contracts. Periodically, the smart contract results are evaluated by the construction peers via the Kafka consensus protocol and added to the blockchain.
A permissioned blockchain is utilized by the authors in <cit.> to preserve the privacy of data mining WSN nodes. Smart contracts are used to create an onion-like structure comprising the Hoeffding trees and a route. The onion-routed query hides the identity of the sensors from external adversaries, and obscures the data mining results to conceal them from compromised nodes. Hence, a trusted node shares a partially constructed model so that each other sensor node has access to a partial model that is the result of the computation of the previous sensor nodes.
While the discussed systems presented several mechanisms for enhancing the performance of the EIM application via blockchain or HAPS, none of these systems considered a joint HAPS/blockchain model that utilizes the resources of the various network participants to reduce the latency and improve the security of EIM. In the next section, we describe our proposed blockchain-based C-HAPS framework for enhancing the performance and security of the EIM application.
§ C-HAPS BLOCKCHAIN SYSTEM
§.§ Problem Statement
This section presents the proposed C-HAPS blockchain system, specifically designed for EIM applications that utilize the Internet of Things and wireless sensor networks. In such applications, a large number of sensor nodes (SNs) are used to monitor the environment and continuously send their readings to the cloud server via a sink/gateway. Many environment and infrastructure monitoring applications use the above sensing strategy to obtain the SN readings, analyze them, and produce results that are consumed by users via the cloud service API. An example of such applications is fire monitoring, whereby SNs are distributed over the area of a large forest to detect fires at an initial stage. Another example is smart grid infrastructure monitoring in which SNs are deployed at critical points in the smart grid network to monitor for incidents that affect the power system. A third application is related to natural disaster recovery, in which SNs are deployed into drones that fly over the affected area to search for survivors and provide aid whenever needed.
All applications highlighted above utilize sinks/gateways to send sensors' readings to the cloud. For example, in fire monitoring and smart grid applications, several sinks are deployed at specific locations within the WSN area. In disaster recovery, ground vehicles (which could be fixed or mobile) are used to play the role of the ground control stations (GCS) that connect the drones to the cloud network <cit.>. From a system level perspective, all such EIM applications are particularly delay-sensitive, since sensors’ readings must be obtained by the user within a certain time limit for further analysis; otherwise, the readings would become obsolete. For instance, readings of temperature and smoke sensors in fire detection applications must be read by the emergency control software within a limited time frame, which allows to respond in a timely fashion to fire events. In smart grid applications, power incidents can be handled by the response tool by isolating the affected areas, the efficacy of which depends on how early the incident is detected. Similarly, in disaster response and recovery applications, a drone that detects a survivor in a critical condition must send its readings to the cloud as fast as possible to enable medical aid and ambulance drones to reach the location of the survivor in a quick manner and provide medical assistance.
The above emphasizes the fact that EIM applications must meet predefined latency requirements, all while satisfying quality-of-service constraints at the users' sides, defined as a function of data rate and data reliability.
When deploying the EIM application as a cloud service, careful consideration should be given to securing its various elements and aspects. In general, the heterogeneity of the networks and devices that connect to the cloud and the openness of the cloud to various technologies create a challenge related to its security. Recently, a large number of researchers proposed the blockchain as a security solution for cloud services <cit.>. Also, Blockchain as a Service <cit.> emerged as one of the best solutions for companies to secure their data. In this paper, we discuss a blockchain framework that is tailored to the C-HAPS EIM application. While integrating the blockchain into the EIM application can provide several benefits in terms of data confidentiality and integrity, it also introduces many challenges. For example, the blockchain comprises several heavy operations, such as transaction signing and validation, the consensus protocol, and smart contracts, which make the blockchain a complex system that adds a lot of overhead to the participating nodes.
Mainly, the processing capabilities of IoT nodes hinder them from engaging in heavy blockchain consensus protocols such as Proof of Work (PoW). In addition, the energy supply of these devices is usually limited. Therefore, they cannot afford to spend a lot of their power executing blockchain programs. Hence, it is essential to make sure that the EIM requirements (such as delay, processing, power consumption, etc.) are met when utilizing the blockchain in these applications.
§.§ System Architecture
In the proposed model, C-HAPS services are hosted within the HAPS stations datacenter. The HAPS network contains standalone stations that do not connect directly to other stations, in addition to mini-HAPS constellations that contain few HAPS stations. Such mini constellations are usually deployed in large urban areas. In our simulations (Section <ref>), we show that several HAPS stations are required when providing cloud services to large urban areas. Our model was designed and implemented while considering the delay-sensitive requirement of EIM and the limited resources of sensor nodes.
The proposed model aims to secure the application data from malicious access and modification via the blockchain, while performing the block generation and consensus in a fast manner that satisfies the delay requirements and poses negligible overhead on the SNs. In addition, we show that our proposed model is resilient to general attacks on cloud applications and blockchain-specific attacks. We start by describing the network architecture.
The nodes that participate in the proposed system are:
* Sensor nodes:
Based on the application, sensor nodes could be deployed in urban areas (for example, within homes, vehicles, companies, factories, grid infrastructure, etc.) or rural ones (for instance, forests, mountains, oceans, deserts, etc.). In this paper, we consider SNs that are used within the EIM context. EIM applications usually require a large number of SNs that are deployed over a wide area, creating several WSNs. A sensor node senses its environment, generates data, and sends it to the sink at a predefined frequency either directly or via other SNs. A large number of routing protocols for WSN and IoT networks were proposed in the literature, with the aim of delivering the data from the sensor node to the sink in an efficient manner that satisfies the objectives of the application (for example, to achieve small delay, low energy consumption, high reliability, etc.) <cit.>. The frequency at which a sensor node sends its data depends on the application requirements and can be dynamically adjusted by the network administrator in most IoT/WSN applications <cit.>.
* Ground-HAPS gateway: A WSN usually contains special nodes that are connected to the application backend. These nodes, called sinks, act as middleware between the SNs and cloud servers. Depending on the WSN size and objectives, sinks are usually deployed at different locations within the WSN to ensure that all SNs can connect to the backend. In our model, we integrate a HAPS communication module into the sink such that it forwards the readings that it receives to the HAPS station instead of the ground cloud server. Hence, the sink becomes a ground-HAPS gateway (GHG) that bridges the connection between the HAPS and any ground node that is not equipped with a HAPS communication module. Also, the GHG provides a means for a standalone HAPS station to connect to the ground cloud and to other HAPS stations. In general, the GHG is equipped with better resources than sensor nodes (processing, storage, and energy supply). Hence, it can participate in some of the blockchain operations, as we discuss in the next section.
* Aerial-HAPS gateway: Many applications that utilize aerial vehicles such as drones contain specific drones that act as aerial gateways that connect the ground control station (GCS) to the drones that are outside the communication range of the GCS. In some types of applications, drones may fly to locations that are outside their movement path based on the incidents that occur during their missions. Some other applications do not define a movement path for drones. In all these cases, it would be necessary to deploy one or more drones as aerial gateways that fly in such a way to maintain the connection between these drones and the GCS. Hence, aerial gateways are usually deployed when the aerial network contains nodes that do not have a connection to the GCS. In such cases, one or more aerial gateways are used to play the role of an aerial sink.
In our system, we equip these aerial gateways with HAPS communication modules such that they can send drones' readings to the HAPS stations. In other scenarios, tethered hot air balloons can be equipped with HAPS communication modules as well. These aerial nodes become aerial-HAPS gateways (AHG) that bridge the connection between ground and aerial nodes and the HAPS network. The AHG can also be used to connect a standalone HAPS station to other HAPS stations when needed. Fig. <ref> shows the deployment of C-HAPS stations in urban and rural areas and the utilization of GHGs and AHGs. In the urban area, three HAPS stations HAPS1, HAPS2, and HAPS3 are positioned over a city and its suburbs. GHG1 connects to HAPS1, GHG2 to HAPS2, and GHG3 to HAPS3. In the rural area, a single HAPS station (HAPS4) is installed. GHG4 connects directly to HAPS4, while GHG5 is outside the range of HAPS4 and connects to it via AHG1. HAPS4 is a standalone station and can connect to the other HAPS stations via the gateways.
* HAPS station: This node communicates with GHGs and AHGs to collect data from SNs, save them into the blockchain, and provide them to cloud users. The HAPS station plays the role of a full blockchain node that stores the full blockchain, participates in generating new blocks and validating other’s blocks, and responds to users’ requests by executing blockchain smart contract functions and sending blockchain data to users in a secure manner.
* Cloud user: A cloud user accesses the cloud application at the HAPS via his/her cloud account. The user authenticates him/herself via the blockchain smart contract and obtains a certificate that defines the user’s roles and privileges. The user provides the certificate alongside each blockchain request that is sent to the HAPS station. Based on the certificate, the user can retrieve IoT data from the blockchain, execute specific smart contract functions to obtain results from the data, and/or adds his/her own transaction to the blockchain via the consensus model which is explained shortly.
Figure <ref> illustrates the general communications between the system nodes. As shown in the figure, sensor nodes send their readings to the nearest GHG/AHG via the WSN. Each GHG/AHG validates the data it receives from SNs, aggregates them, and sends data transactions to the HAPS stations, possibly via other GHGs/AHGs. The HAPS station receives data transactions from GHGs/AHGs and saves them in its pending list. When its turn arrives, the HAPS station creates a new blockchain block and broadcasts it to the network of GHGs/AHGs and HAPS stations. Each node among the latter executes the consensus protocol (more details about it later) and updates its copy of the blockchain. As we explain later, the HAPS stations store the whole block while GHGs/AHGs store only the block header. When a cloud user wants to consume the cloud service, it sends a cloud service request to the nearest GHG/AHG, and the latter forwards the request to the nearest HAPS station that hosts the required service. The HAPS station replies to the GHG/AHG which validates the reply based on its local copy of the blockchain, endorses the reply packet, and sends it to the user.
Before we discuss the details of the proposed system, we describe in the next section the adversary model that we consider in our system.
§.§ Adversary Model
Among the various types of nodes that were described in the previous section, we consider the HAPS station to be fully trusted. We assume that the cloud services that are deployed and offered from the HAPS station datacenter are highly secured against various attacks.
On the other hand, we assume that the GHG/AHG is semi-trusted. While these devices are usually secured as much as possible, attackers could still be able to exploit software vulnerabilities or physically break the device (in certain situations) to compromise it and use it to launch attacks. Hence, we describe an approach that is applied by the proposed consensus protocol to detect and defend against a compromised gateway. Finally, we assume that sensor nodes are the least trustworthy nodes. Due to the limited resources of these devices and their existence in deserted locations, they are more susceptible to cyberattacks. Hence, a mechanism should be applied by the sink to detect the false data that would be transmitted by a compromised sensor node. A large number of papers proposed various solutions to this problem <cit.>. In this paper, we aim at presenting solutions to the attacks that could occur with the implementation of the blockchain in the Cloud-HAPS network. Mainly, we focus on the following attacks:
* Transaction Piracy attack: this type of attack occurs when the sensor node sends secret or private data to the blockchain and an attacker is able to steal the data.
* False Transaction attack: in this type of attack a malicious sensor node or gateway attempts to inject a false transaction into the blockchain by either creating a fake transaction or tampering with a valid transaction that it receives from another node.
* Block Withholding attack: this attack occurs when the blockchain miner that is generating the new block executes a malicious action to stop or delay sending the new block to the consensus nodes with the aim of delaying the consensus process and affecting the application performance.
* 51% or Majority attack: this attack occurs when the attacker is able to compromise a large number of blockchain miners with the aim of overriding the consensus algorithm and breaking the blockchain integrity by inserting false blocks or transactions into the blockchain without being detected.
* Forking and Replay attack: a blockchain fork occurs when the consensus process generates two or more different blocks that are selected by the different consensus nodes. Hence, multiple blockchain branches appear. Attackers exploit this incident by executing the same malicious transaction in multiple branches to gain benefits or affect the blockchain integrity.
* Consensus Sabotage attack: this attack is performed by a malicious blockchain node that participates in the blockchain consensus process and attempts to sabotage it by performing various malicious actions.
* Consensus Collusion attack: this attack is performed by a group of nodes that participate in the consensus process and collude to perform malicious actions with the aim of destroying the consensus process or compromising the blockchain integrity.
§.§ Transaction Generation and Validation
When a sensor node generates new readings, it combines them into a data packet, signs it, and sends it to the nearest sink. If the data is private, the sender encrypts the data with the receiver's public key. The data packet passes through zero or more sensor nodes on its way (based on the sender’s location and the routing protocol). Each sensor node on the packet’s path validates the packet by checking the correctness of the data (for public data only) and the validity of the sender’s signature. If the packet is valid, the intermediate sensor node endorses the packet by adding its signature to the packet before forwarding it to the next sensor node or to the sink.
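The sketch below illustrates this sign-and-endorse flow. It is not the actual implementation: for simplicity, HMAC with pre-shared keys stands in for the public-key signatures used in the real system, and the node names, key store, and packet fields are assumptions made for the example.

```python
# Illustrative sign-and-endorse flow (not the actual implementation). HMAC with
# pre-shared keys stands in for the public-key signatures of the real system;
# node names, keys, and packet fields are assumptions.
import hashlib
import hmac
import json
import time

KEYS = {"SN7": b"k-sn7", "SN3": b"k-sn3", "CH1": b"k-ch1"}   # toy key store


def sign(node: str, payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(KEYS[node], msg, hashlib.sha256).hexdigest()


def create_packet(sender: str, readings: dict) -> dict:
    payload = {"sender": sender, "readings": readings, "ts": time.time()}
    return {"payload": payload, "signature": sign(sender, payload),
            "endorsements": {}}


def endorse(node: str, packet: dict, data_looks_correct: bool) -> bool:
    """Intermediate SN or CH: verify the sender's signature and the plausibility
    of public data, then add an endorsement signature before forwarding."""
    expected = sign(packet["payload"]["sender"], packet["payload"])
    if not hmac.compare_digest(expected, packet["signature"]) or not data_looks_correct:
        return False                      # refuse to endorse / drop the packet
    packet["endorsements"][node] = sign(node, packet["payload"])
    return True


pkt = create_packet("SN7", {"temp_c": 41.2, "smoke": 0.8})   # sensor node signs
endorse("SN3", pkt, data_looks_correct=True)                 # intermediate sensor node
endorse("CH1", pkt, data_looks_correct=True)                 # cluster head, then to the GHG
```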
Note that sensor nodes are usually clustered based on their geographic locations such that each node maintains the information (such as the ID, location, and public key) of all nodes in the cluster. Also, in such applications, one of the nodes in the cluster is selected as the cluster head (CH) that is responsible for forwarding all packets from/into the cluster. In some types of sensor networks, each CH is a sink. In other applications, each group of cluster heads connects directly to a sink. Several clustering approaches have been proposed in the literature <cit.>. Our proposed framework is general and can be used with any clustering mechanism. In the simulations (Section <ref>), we clustered the SNs based on the approach proposed in <cit.>. The architecture of the proposed sensor network is illustrated in Fig. <ref>. In the figure, sensor nodes are divided into four clusters C1, C2, C3, and C4, with a cluster head in each cluster. Sensor nodes that are near the CH send their readings directly to it, while sensor nodes that cannot connect directly to the CH send their readings via other sensor nodes (as in C1 and C4). Some CHs, such as that of C4, could be equipped with a HAPS communication module and hence act as a GHG. Other CHs send their aggregated data to the nearest GHG (C1, C2, and C3). Some CHs connect directly to the GHG, while other CHs (such as that of C2) connect to the GHG via other cluster heads.
When a GHG receives a packet from a sensor node or a CH, it checks the validity of the data by verifying the endorsement signatures that were added by the intermediate nodes and the signature of the sender. Based on the data that the sink received from other sensor nodes, the sink could aggregate the received data with the existing data, delete the received data (if it is a duplicate of an existing data record), or save the data for possible future aggregation. Each GHG periodically (based on the application requirements, for example, every 50 milliseconds) aggregates and groups the existing data into a blockchain transaction packet, signs it using its digital signature, and sends it to the nearest HAPS station that hosts this cloud service. In some cases, the packet is sent directly from the GHG to the HAPS station. In other cases, the packet could pass through one or more intermediate GHSs/AHGs before reaching the HAPS station that hosts the application.
Note that in the proposed system, each node encrypts a transaction packet with the next node’s public key before sending it. For example, consider a scenario in which GHG G1 sends a transaction packet to the HAPS station H1. The packet is sent on the following path: G1, G2, A1, H1. Here, G1 and G2 are GHGs and A1 is an AHG. Hence, G1 encrypts the packet with G2’s public key. G2 receives the packet, decrypts it, validates G1’s signature, encrypts the packet with A1’s public key, and sends the packet to A1. In its turn, A1 decrypts the packet using its private key, validates G1’s signature, encrypts the packet with H1’s public key, and sends the packet to H1. Also, note that in this paper we do not focus on the routing protocol that is used to route packets between GHGs/AHGs and the HAPS network and vice versa, as we plan to describe its details in a future paper. We assume that the routing protocol is responsible for routing packets between nodes when there is no direct link between them. For example, when the CH does not have a direct link to a GHG, the routing protocol is used to calculate the nearest GHG and find the path to it.
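A rough sketch of this hop-by-hop re-encryption along the example path G1-G2-A1-H1 is shown below. A toy reversible transform stands in for real public-key encryption (it is not secure), and the path, packet contents, and helper names are assumptions for illustration only.

```python
# Hop-by-hop re-encryption sketch along the path G1 -> G2 -> A1 -> H1.
# The toy transform below only tags data with the recipient's key id; it is
# NOT real encryption and merely illustrates the forwarding logic.
import base64


def pk_encrypt(recipient_pub: str, plaintext: bytes) -> bytes:
    return base64.b64encode(recipient_pub.encode() + b"|" + plaintext)


def pk_decrypt(own_priv: str, ciphertext: bytes) -> bytes:
    key_id, _, plaintext = base64.b64decode(ciphertext).partition(b"|")
    assert key_id.decode() == own_priv, "packet was not encrypted for this node"
    return plaintext


PATH = ["G1", "G2", "A1", "H1"]                    # gateways and destination HAPS
transaction = b'{"ghg": "G1", "readings": "..."}'  # aggregated, signed data

packet = pk_encrypt("G2", transaction)             # G1 encrypts for the next hop (G2)
for i in range(1, len(PATH) - 1):
    current, nxt = PATH[i], PATH[i + 1]
    plaintext = pk_decrypt(current, packet)        # decrypt with own private key
    # ... validate the sender's signature and endorsements here ...
    packet = pk_encrypt(nxt, plaintext)            # re-encrypt with next hop's public key

received = pk_decrypt("H1", packet)                # H1 recovers the original transaction
```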
Each GHG and AHG stores a pending transaction list in which it saves all transactions that it generates/forwards before they are added to the blockchain. A transaction remains in the GHG/AHG pending list until the latter receives the next blockchain block. At this stage, the GHG/AHG checks its pending list and removes from it the transactions that have been successfully added to the blockchain. Also, each transaction in the pending list has an expiry deadline at which it would be deleted if it is still not added to the blockchain. The expiry deadline defines the time at which the transaction becomes obsolete and should be discarded.
In the proposed system, the GHGs, AHGs, and HAPS stations form the blockchain network. However, only HAPS stations are responsible for generating new blockchain blocks, while all the blockchain nodes (i.e., GHGs, AHGs, and HAPS stations) participate in the consensus protocol to validate the new blocks. This distinction was made based on the fact that the HAPS stations are the final destinations of all data transactions. On the other hand, the transactions will be scattered between the GHGs and AHGs. Hence, the HAPS station can group the transactions that it receives from the gateways, generate the new block by ordering the transactions based on their timestamps, generate the Merkle root and other parameters of the block header, and add the hash of the previous block. In order to satisfy the delay constraints of the EIM application, this operation is performed frequently such that transactions are added to the blockchain within the delay threshold of the application. More details about this issue will be discussed in the next section. With respect to the GHGs/AHGs, each gateway participates in the consensus protocol by validating that the transactions in its pending list were added correctly to the new block.
§.§ Blockchain Architecture
Several cloud services can be offered by C-HAPS. Each service will have its own blockchain that is configured based on the service characteristics and requirements. Many cloud services require a private blockchain that can be accessed by the service administrators and subscribers only. Other cloud services could use a hybrid blockchain architecture and restrict access to the private features of the service by utilizing smart contracts. In such cases, the users can access the paid features only after they subscribe to the service and their credentials are added to the blockchain. Hence, the smart contract checks the user’s credentials and identifies his/her access rights before enabling the user to access the private features. In certain cases, applications that share the same characteristics and settings can share a single blockchain and distinguish their transactions via a unique service identifier that is added to each transaction.
With respect to blockchain data, different types of cloud services store data with different sizes and characteristics. The blockchain administrators usually have the option to store the transactions directly in the blockchain block (on-chain), or to save the data in an external dedicated storage system (such as the InterPlanetary File System or IPFS) and save the transactions’ hash and metadata in the blockchain (off-chain). This latter option is more efficient when the service uses or generates enormous amounts of data. In such cases, the external storage system can provide better indexing and searching capabilities than the blockchain. Hence, the data can be retrieved faster from the external storage system and validated via the hashes in the blockchain. Several recent works proposed a variety of models that are based on off-chain storage <cit.>.
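As an illustration of the off-chain option, the following sketch stores the bulk payload in an external store (a Python dictionary standing in for IPFS or a similar system) and anchors only the content hash and metadata on-chain; the function and field names are assumptions.

```python
# Off-chain storage sketch: bulk data goes to an external store (a dict stands
# in for IPFS here), while only the content hash and metadata are kept on-chain.
# Function and field names are assumptions for illustration.
import hashlib
import json
import time

external_store = {}                            # stand-in for IPFS or similar


def store_off_chain(service_id: str, payload: dict) -> dict:
    raw = json.dumps(payload, sort_keys=True).encode()
    content_hash = hashlib.sha256(raw).hexdigest()
    external_store[content_hash] = raw         # data lives off-chain, keyed by hash
    return {"service_id": service_id,          # lightweight on-chain transaction
            "content_hash": content_hash, "size": len(raw), "ts": time.time()}


def fetch_and_verify(on_chain_tx: dict) -> dict:
    raw = external_store[on_chain_tx["content_hash"]]
    assert hashlib.sha256(raw).hexdigest() == on_chain_tx["content_hash"]
    return json.loads(raw)


tx = store_off_chain("eim-fire-01", {"cluster": "C2", "readings": [41.2, 40.9]})
assert fetch_and_verify(tx)["cluster"] == "C2"
```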
§.§ Block Generation and Consensus
§.§.§ Block Creation
In the proposed blockchain system, blocks are generated by HAPS stations at constant time intervals. Each cloud service defines its block generation time interval based on the application delay requirements.
The HAPS stations that host a certain EIM service generate the blocks of that service successively based on their network IDs. Each time a HAPS station receives a transaction, it validates it by checking the signatures of the sender and endorsers. Next, it broadcasts the transaction to all the HAPS stations that host this service. Each HAPS station validates the transaction and saves it in its pending transaction list.
When the block interval timer expires,
the HAPS station whose turn to generate the new block orders the transactions in its pending list based on their timestamps, creates the new blockchain block, and broadcasts it to all the HAPS stations that are hosting this EIM service in addition to all GHGs and AHGs who are forwarding the transactions packets of the EIM service. Note that each HAPS station saves a list that contains the IDs of all GHGs/AHGs that are participating in the blockchain of each EIM service that is hosted by the HAPS station. The latter updates the list when a GHG/AHG joins or leaves the blockchain network of that service.
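The following sketch outlines this round-robin block creation among the HAPS stations hosting a service: each station accumulates validated transactions in a pending list, and the station whose turn it is orders them by timestamp and creates the block. Class and field names are assumptions made for illustration.

```python
# Round-robin block creation sketch: every hosting HAPS station keeps a pending
# list of validated transactions; the station whose turn it is orders them by
# timestamp and builds the block. Names and fields are assumptions.
from dataclasses import dataclass, field


@dataclass
class HapsStation:
    station_id: int
    pending: list = field(default_factory=list)   # validated data transactions

    def receive_transaction(self, tx: dict) -> None:
        # signature and endorsement checks are assumed to happen before this point
        self.pending.append(tx)

    def create_block(self, height: int, prev_hash: str) -> dict:
        ordered = sorted(self.pending, key=lambda tx: tx["ts"])
        return {"height": height, "prev_hash": prev_hash,
                "creator": self.station_id, "transactions": ordered}


def block_creator(stations: list, height: int) -> HapsStation:
    """Stations take turns by network ID: block h is created by station h mod X."""
    ordered = sorted(stations, key=lambda s: s.station_id)
    return ordered[height % len(ordered)]


stations = [HapsStation(i) for i in range(1, 5)]
tx = {"ghg": "G2", "ts": 12.30, "data": "..."}
for s in stations:                        # data transactions are broadcast to
    s.receive_transaction(tx)             # every HAPS station hosting the service
creator = block_creator(stations, height=7)
new_block = creator.create_block(7, prev_hash="ab12...")
# The creator then broadcasts new_block to the HAPS stations and the GHGs/AHGs.
```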
§.§.§ QUICO Consensus Protocol
In the proposed blockchain model, the consensus protocol is executed among the HAPS stations that host the corresponding cloud service and the GHGs/AHGs that create and forward the data transactions of this service. The HAPS stations play the role of full blockchain nodes, while the gateways are light nodes that perform part of the consensus operations only, as we will explain in this section. On the other hand, SNs do not participate in the consensus process in order to preserve their limited resources.
When a blockchain node N (which could be a HAPS station or a GHG/AHG) receives a packet that contains a new block, it executes the C-HAPS consensus protocol, which we call QUICO (an abbreviation for quick consensus). The details of QUICO are: when N receives a new block from a HAPS station H, it verifies that the transactions in its pending list that exist in the new block were added correctly by H. If this is the case, N sends a “Block ACK” message to H that contains the block sequence number and signature of N. However, if N discovers that one or more transactions in its pending list were added incorrectly to the new block
(for example, if an error exists in the transaction data or metadata as compared to the transaction in N's pending list),
then N sends a “Block ERROR” message to H.
After broadcasting the new block to the blockchain nodes, H waits for a certain time to receive the replies of the blockchain nodes. H considers the block validated by the blockchain network after it receives a “Block ACK” from all the other HAPS stations in the blockchain network and from more than half of the gateways. It is important that H receives an acknowledgment from each other HAPS station in the blockchain since these nodes hold copies of all pending transactions and should be able to verify all the transactions in the new block. In addition, H needs to acquire confirmation from the majority of GHGs and AHGs in the blockchain network. Suppose that the blockchain network contains $X$ HAPS stations and $Y$ GHGs and AHGs. Also, suppose that the “Block ACK” packet from a HAPS station is labeled as HACK and that from a GHG/AHG is labeled as GACK; then H should receive $(X-1)$ HACKs and $\lfloor Y/2 \rfloor + 1$ GACKs before it labels the new block as confirmed.
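A minimal sketch of this confirmation rule is given below, assuming simple vote bookkeeping at the block creator; the data structures are illustrative, not the actual implementation.

```python
# Sketch of the QUICO confirmation rule: the creator needs a "Block ACK" from
# every other HAPS station and from a strict majority (floor(Y/2) + 1) of the
# gateways. The vote bookkeeping below is an assumption for illustration.
def block_confirmed(num_haps: int, num_gateways: int,
                    hacks: set, gacks: set) -> bool:
    """hacks: IDs of HAPS stations that sent a Block ACK (creator excluded).
    gacks: IDs of GHGs/AHGs that sent a Block ACK."""
    needed_hacks = num_haps - 1               # all other HAPS stations
    needed_gacks = num_gateways // 2 + 1      # strict majority of gateways
    return len(hacks) >= needed_hacks and len(gacks) >= needed_gacks


# Example: 4 HAPS stations and 9 gateways -> 3 HACKs and 5 GACKs are required.
print(block_confirmed(4, 9, {"H2", "H3", "H4"}, {"G1", "G2", "G5", "G7", "G9"}))  # True
print(block_confirmed(4, 9, {"H2", "H3"}, {"G1", "G2", "G5", "G7", "G9"}))        # False
```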
When the new block is confirmed, H aggregates the signatures in the confirmations that it received into a “Block CONFIRM” packet and broadcasts it to the blockchain nodes. The other HAPS stations verify the signatures and add the new block to the blockchain. On the other hand, the GHGs and AHGs add the header of the new block to their headers blockchain. In our proposed system, GHGs/AHGs act as light blockchain nodes that store only the headers of the blockchain blocks and use them to validate data transactions that they obtain from the HAPS stations. A light blockchain node in our system stores locally the headers of the blocks only. This concept is similar to the Simplified Payment Verification (SPV) node in Bitcoin <cit.>.
When a cloud user requests a blockchain transaction from a HAPS station, the transaction is sent by the HAPS to the GHG/AHG network and then to the user (unless the user has a HAPS communication module, then the user can obtain the block directly from the HAPS station and perform the validation locally).
Each GHG/AHG that forwards a transaction verifies it using its headers blockchain. If the transaction is valid, the GHG/AHG endorses the transaction by adding its signature to the data packet. This strategy increases the user’s trust in the obtained data, since it will be validated by multiple nodes, which confirms its legitimacy.
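The sketch below shows how a gateway acting as a light node could check a transaction against the block headers it stores using a Merkle proof, in the spirit of SPV. The proof format and the placeholder values are assumptions for illustration.

```python
# SPV-style verification sketch: a gateway checks a transaction it forwards
# against the block headers it stores, using a Merkle proof supplied with the
# reply. The proof format and the placeholder values are assumptions.
import hashlib


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def verify_merkle_proof(tx_hash: str, proof: list, merkle_root: str) -> bool:
    """proof: list of (sibling_hash, side) pairs from leaf to root, where
    side == 'L' means the sibling sits on the left."""
    current = tx_hash
    for sibling, side in proof:
        pair = sibling + current if side == "L" else current + sibling
        current = sha256(pair.encode())
    return current == merkle_root


headers = {42: {"merkle_root": "..."}}        # the gateway's headers blockchain
tx_hash = sha256(b'{"sensor": "s2", "reading": 22.1}')
proof = [("deadbeef" * 8, "L")]               # hypothetical one-level proof
# With these placeholder values the check fails; with a real header and proof
# the gateway would endorse the reply before forwarding it to the user.
ok = verify_merkle_proof(tx_hash, proof, headers[42]["merkle_root"])
```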
§.§.§ Sample Consensus Scenario
A sample scenario is shown in Fig. <ref>. In the figure, the turn is on HAPS H1 to generate the new block. Hence, all GHGs/AHGs send their new transactions to H1. Suppose that AHG A1 has a direct link to H1, while GHG G2 sends its packets to H1 through A1, and GHG G1 sends its packet to H1 through the path G2-A1. Three transactions are sent to H1: T1 is sent by G2, T2 by A1, and T3 by G1. As shown in the figure, each GHG/AHG that forwards a transaction adds it to its pending list (represented by the symbol “*” in the figure). When H1 receives a transaction, it broadcasts it to the other HAPS stations. When the time to generate the new block is due, H1 creates the block and broadcasts it to all GHGs/AHGs and HAPS stations. Each node validates the transactions in its pending list that are in the new block and sends a “Block ACK” packet to H1. After H1 collects the “Block ACK” packets, it broadcasts a “Block CONFIRM” packet to all nodes. Each node that receives a “Block CONFIRM” adds the new block to the blockchain and removes the transactions that are in the new block from its pending list.
§.§.§ Handling Block Errors
Going back to QUICO, if the block creator H receives a “Block ERROR” from one or more blockchain nodes, it checks the transactions that were labeled as erroneous in the “Block ERROR” packets. If H discovers that these transactions were indeed added incorrectly to the block, it regenerates the block by replacing the erroneous transactions with their correct versions and broadcasts the new block (with a new sequence number).
However, if H believes that one or more transactions that were labeled as erroneous by other nodes are correct, it adds the two versions of each transaction to an “ERROR Check” packet and sends the latter to the GHGs/AHGs who endorsed the transaction. Note that a transaction contains the signatures of all GHGs/AHGs who endorsed it.
A GHG/AHG that receives an “ERROR Check” packet reviews its pending list to see which of the two versions of the questionable transaction matches the transaction in its pending list. The GHG/AHG replies to H with an “ERROR Resolve” packet that indicates the correct version with the GHG/AHG signature. H collects the GHGs/AHGs replies, resolves the error, and forwards the replies and the decision to the blockchain nodes that sent the “Block ERROR” packets. The blockchain nodes update their pending lists accordingly. After H validates all the questionable transactions, it generates a new block
and broadcasts it to the blockchain nodes, and the process is repeated.
After the new block is added to the blockchain, the HAPS station whose ID is next in the group of HAPS stations that are hosting this blockchain examines the timestamp $t_b$ in the “Block CONFIRM” packet and starts a timer such that the timer will fire at a time $t_c$ in the future, where $t_c - t_b = t_{th}$. Here, $t_{th}$ is the block generation interval that is defined by the application delay requirements. When the timer expires, the HAPS station performs the same operation that was explained before.
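A small sketch of this timer logic is shown below: upon receiving the “Block CONFIRM” packet time-stamped at $t_b$, the next HAPS station schedules its own block creation at $t_c = t_b + t_{th}$. The interval value and the callback are assumptions for illustration.

```python
# Timer sketch for the next block creator: on receiving "Block CONFIRM" stamped
# at t_b, the next HAPS station fires its block creation at t_c = t_b + t_th.
# The interval value and the callback are assumptions for illustration.
import threading

T_TH = 0.2                                   # block generation interval (e.g., 200 ms)


def schedule_next_block(confirm_timestamp: float, now: float, create_block) -> None:
    delay = max(0.0, (confirm_timestamp + T_TH) - now)   # fire at t_c = t_b + t_th
    threading.Timer(delay, create_block).start()


def create_block() -> None:
    print("next HAPS station orders its pending list and creates the new block")


schedule_next_block(confirm_timestamp=100.00, now=100.05, create_block=create_block)
```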
Note that the main objective of the QUICO protocol is to generate the block in a time less than the delay requirements of the application. The block creator waits for a small period ($t_w$) to receive the others’ replies. If the reply of a certain gateway is delayed or dropped, it will not affect the consensus process as long as the number of such replies is less than half the total number of GHGs/AHGs. In normal conditions, the block creator should receive “Block ACK” packets from more than half the GHGs/AHGs within the small waiting time and confirm the new block. In abnormal conditions, such as when an attacker compromises one or more GHGs/AHGs, the block creator could receive “Block ERROR” messages from one or more GHGs/AHGs and execute the second part of QUICO related to validating the questionable transactions, which will delay the consensus process. When the latter case occurs, the block creator sends a consensus report to the cloud service administrators who investigate the reasons that lead to the erroneous transaction(s) at those GHGs/AHGs. This helps the administrators detect the attacker quickly and isolate the GHG/AHG or reset it.
Figure <ref> illustrates the messages exchanged during the execution of QUICO. In the figure, the HAPS station in the middle (i.e., H) generates a new block and broadcasts it to the blockchain network. H1 is a HAPS station that receives that new block from H. If H1 finds no errors in the new block, it sends a “Block ACK” packet to H. If H1 suspects that one or more transactions in the new block are erroneous, it sends a “Block ERROR” to H. The latter inspects the questionable transactions. If H is not able to confirm whether there is an error or not, it sends a “Block Check” packet to the GHGs/AHGs from which the questionable transactions were generated (represented as GHG G in the figure). Here, G resolves the situation as previously explained and sends an “ERROR Resolve” packet to the HAPS stations. Finally, H regenerates the new block after correcting the erroneous transactions and broadcasts the corrected block to the blockchain network.
§.§ Security Analysis
In this section, we discuss the security of the proposed blockchain model and its resilience to the attacks that were described in Section <ref>. Note that, as we mentioned before, we focused on the attacks that are related to the blockchain. On the other hand, attacks that target the WSN or IoT network have been extensively studied and a large number of solutions already exist (for example, <cit.>).
On the other hand, we focus here on describing how the proposed blockchain system can be used to defend against the following attacks:
* Transaction Piracy attack: in order to prevent attackers from stealing private data in the sensor nodes’ transactions, encryption is used to protect all such transactions. As described before, each node encrypts each private transaction (including the signature) with the public key of the receiver.
* False Transaction attack: an adversary will not be able to inject a false transaction into the blockchain since it will not be endorsed by the other nodes in the cluster and by the cluster head before it is sent to the sink. In our model, the trustworthiness of the data transaction depends on the endorsements that it receives. If the transaction passes through one or more sensor nodes before reaching the sink, these sensor nodes check the correctness of the transaction data before endorsing the transaction. If the transaction sender is a neighbor of the cluster head or the sink, then the latter performs the same operation. Hence, the only way for an attacker to insert a malicious transaction is by compromising the sender, the nodes between the sender and the sink, and the sink itself. This should prove to be a hard process to be done by the attacker, since the network administrators should detect the attack before the attacker is able to compromise multiple nodes and insert the transaction into the blockchain.
* Block Withholding attack: based on our assumption that the HAPS station is fully trusted, and since the HAPS stations are the only nodes that act as blockchain miners in our proposed system, this attack should not occur.
* 51% or Majority attack: this attack occurs if an attacker is able to compromise more than half of the GHGs/AHGs that participate in the blockchain. In our system, if a GHG/AHG is suspected of behaving maliciously, the network administrators remove it from the blockchain network. Hence, as long as the HAPS station is secure, it always reports malicious incidents to the network administrators. When the HAPS station detects any malicious incident by one or more GHGs/AHGs, it delays the block generation process until the issue is resolved by the administrators. This method prevents an adversary from corrupting the blockchain even if it compromises the majority of gateways participating in the blockchain consensus.
* Forking and Replay attack: since only a single block is generated at a time by a single HAPS station, forking cannot happen in the proposed system, which eliminates the possibility of Replay attacks.
* Consensus Sabotage attack: similar to the 51% attack, this attack is not possible as long as the HAPS station is secure. If an attacker compromises a GHG/AHG and attempts to sabotage the consensus process, it will be ejected from the blockchain network by the administrators. For example, if the malicious gateway sends a “Block ERROR” packet while there are no erroneous transactions in the new block, the HAPS station reports the incident to the administrators, as described in the previous section. If the GHG/AHG tampers with the new block that is broadcast by the HAPS station, the legitimate gateways will detect the tampered block from the false data in the block body (invalid hash and signatures) and report the incident to the HAPS station. Hence, a malicious gateway will not be able to jeopardize the consensus process as long as the HAPS stations are secure and there exist legitimate gateways in the blockchain network.
* Consensus Collusion attack: similar to the previous attack, this attack will be detected as long as there exists at least one legitimate gateway that detects the malicious operations of the colluding gateways and reports the incident to the HAPS station. Since all consensus operations are distributed among all the GHGs/AHGs that participate in the blockchain, a group of GHGs/AHGs will not be able to collude and perform an attack on the consensus process without being detected.
§ PERFORMANCE EVALUATION
In this section, we test the proposed system to prove its applicability and efficiency. Currently, there is still no standard testing platform or simulator for HAPS. Hence, we created a software module for the HAPS that we integrated into the network simulator 3 (NS-3) software. This module is loaded into a simulated HAPS node and includes all the functionalities required by a C-HAPS service. The HAPS module includes the parameters for two communication channels: HAPS-to-HAPS (H2H) and HAPS-to-Ground (H2G). These parameters are based on the Electronic Communications Committee Report[https://docdb.cept.org/download/624]. We also utilized some of the parameters mentioned by the International Telecommunication Union[https://www.itu.int/rec/R-REC-F.1891/en] and ATIS[https://www.atis.org/wp-content/uploads/3gpp-documents/Rel16/ATIS.3GPP.38.821.V1600.pdf] to complete the wireless setup of the HAPS module.
The proposed blockchain system was built using Hyperledger Iroha. We chose Iroha for several reasons: first, it is written in C++, which facilitates the utilization of parts of its source code directly into the NS-3 code. Second, Iroha utilizes the YAC consensus protocol, which is an enhanced version of PBFT that produces lower latency and higher throughput. The proposed QUICO protocol is similar to YAC in terms of the main principles of distributed voting and committing of new blocks. However, QUICO is tailored for the HAPS environment in terms of the specific roles of the HAPS stations and gateways and hence the existence of two different types of votes. In addition, the reject decision in YAC is replaced with the “ERROR Check” and “ERROR Resolve” method in QUICO. Hence, in order to avoid writing the code of QUICO from scratch, we based its implementation on that of YAC while making the necessary adjustments and additions. Overall, we created another NS-3 module that contains the blockchain operations that were discussed in Sections <ref> and <ref> and installed it into the HAPS and gateway nodes. The NS-3 HAPS module at the HAPS station utilizes the blockchain module to execute the blockchain functions (such as ordering transactions and creating blocks, broadcasting the block and collecting replies, sending “Block ACK” and other messages, etc.). On the other hand, the gateway uses the blockchain module to create transactions from sensor nodes’ data, validate others’ transactions, save them in the pending list, validate new blocks that it receives from HAPS stations, and send voting decisions. Most of the blockchain operations in the blockchain module were based on the source code of Iroha after modifying it to match the required operations in the proposed blockchain model.
§.§ Simulation Setup
After creating the NS-3 modules, we simulated several NS-3 scenarios to test the system performance. We assumed a constellation of ten HAPS stations scattered over an area of 440,896 km$^2$ (664 km $\times$ 664 km). The size of the network topology was chosen based on the scenarios that we simulated in Section <ref> to find the effective HAPS footprint when deploying cloud services. The simulation results in Section <ref> show that in order to achieve an End-to-End delay less than or equal to 100 ms, the HAPS station should be serving a maximum of 394 sensor nodes per km$^2$ within a total area of 44,000 km$^2$. Based on that, the network size was set to approximately 44,000 $\times$ 10 km$^2$ with ten HAPS stations. Each HAPS station services 394 $\times$ 44,000 = 17,336,000 sensor nodes. Hence, the total number of sensor nodes in the simulation scenarios was set to 173,360,000, distributed evenly across the area in order to reach an average of 394 nodes/km$^2$. We assumed that each 100 sensor nodes connect to a single sink (GHG). Hence, we deployed a total of 1,733,600 GHGs that were distributed evenly across the network area. Each sensor node sends a data transaction to the nearest GHG every 100 ms. The size of the data transaction was varied between 10 and 1000 KB (we tested a separate scenario for each transaction size). To be able to simulate the required scenarios, we used a powerful Core i9 server equipped with 64 GB of RAM. The remaining simulation parameters are shown in Table <ref>.
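For reference, the arithmetic behind these node counts is reproduced in the short snippet below (all figures are taken from the text).

```python
# Arithmetic behind the topology sizing (all figures taken from the text).
density = 394                    # sensor nodes per km^2 (for <= 100 ms End-to-End delay)
area_per_haps = 44_000           # km^2 effectively served per HAPS station
num_haps = 10
sn_per_gateway = 100

sn_per_haps = density * area_per_haps         # 17,336,000 sensor nodes per HAPS
total_sn = sn_per_haps * num_haps             # 173,360,000 sensor nodes in total
total_ghg = total_sn // sn_per_gateway        # 1,733,600 ground gateways
print(sn_per_haps, total_sn, total_ghg)
```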
In our experiments, we simulated a direct link between each GHG and the nearest HAPS station. Hence, we did not simulate the cases in which AHGs exist. We also did not simulate the cases in which a cloud service is deployed at specific HAPS stations only, which causes a GHG to send its packets via other GHGs/AHGs if it does not have a direct link with one of these HAPS stations. We plan to extend the simulations in the future to include these cases.
The adversary model was simulated in the experiments as follows: first, among the seven attacks that were presented in Section <ref> and analyzed in Section <ref>, the Transaction Piracy attack is automatically resolved with the integration of blockchain. Since each private transaction is encrypted with the receiver's public key, an adversary will not be able to steal private data from the transmitted transaction. In addition, there are three attacks that are resolved by assuming that HAPS stations are trusted nodes and are closely monitored in order to prevent attackers from compromising them. Based on this assumption, the Block Withholding attack, the 51% attack, and the Forking and Replay attacks will never occur, as we explained in Section <ref>. Finally, we assume that the majority of gateways (i.e., GHGs/AHGs) are legitimate, which guarantees that the Consensus Collusion attack will not occur. Hence, in this section, we study the remaining two attacks, which are the False Transaction attack and the Consensus Sabotage attack.
In order to test the ability of the proposed system to detect and prevent the two mentioned attacks, we made some of the sensor nodes and GHGs behave maliciously. Hence, a specific percentage of SNs and GHGs were selected as malicious nodes (default value of 30% of the total number of nodes; however, in a later section we vary this percentage between 10% and 80%). A malicious sensor node modifies the simulated sensor readings before sending the packet to the GHG. This simulates a False Transaction attack. If the packet is forwarded by one or more intermediate legitimate sensor nodes, they should detect the malicious modification by comparing the data with their own readings. Hence, these sensor nodes do not endorse malicious data when they forward the packet to the next sensor node or to the GHG. When the GHG receives a packet that was not endorsed by one or more sensor nodes, it checks the data in the packet by comparing it to the data that was generated by the near sensor nodes at similar times. If the GHG finds out that the data values are majorly different from the values that it received from the near sensor nodes, it discards the packet. On the other hand, if the sensor node is a neighbor to the GHG, it sends its packets directly to it. If a GHG receives a packet that was not endorsed by an intermediate sensor node, it performs the same action described before (i.e., comparing the data with that generated by sensor nodes that are near the sender). If the GHG still does not have the readings from the near sensor nodes, it waits until it receives them.
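The screening logic at a legitimate GHG can be summarized by the following sketch. It is written in Python purely for illustration and is not the NS-3/C++ code used in the simulations; the names (SensorPacket, tolerance, etc.) and the relative-deviation test with its threshold are our own simplifications of the behavior described above, with the direct-neighbor case folded into the same cross-check.

from dataclasses import dataclass

@dataclass
class SensorPacket:
    value: float                   # reported sensor reading
    forwarded: bool                # passed through intermediate sensor nodes
    endorsements_missing: int      # forwarding nodes that withheld endorsement

def screen_packet(packet, neighbor_readings, tolerance=0.2):
    # A forwarded packet endorsed by every intermediate node is accepted.
    if packet.forwarded and packet.endorsements_missing == 0:
        return "accept"
    # Otherwise (direct neighbor, or endorsement withheld), cross-check the
    # reading against data from nearby sensor nodes at a similar time.
    if not neighbor_readings:
        return "wait"              # wait until the nearby readings arrive
    reference = sum(neighbor_readings) / len(neighbor_readings)
    if abs(packet.value - reference) > tolerance * abs(reference):
        return "discard"           # treated as a False Transaction attack
    return "accept"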
With respect to malicious GHGs, we simulated their attacks as follows: a malicious GHG attempts to affect the consensus operation by sending a “Block ERROR” message to the HAPS station to increase the consensus delay. This simulates a Consensus Sabotage attack. If the HAPS station receives “Block ERROR” messages from a few GHGs but receives “Block ACK” from all the HAPS stations and the remaining GHGs, it revalidates the transactions that were marked as erroneous by the GHGs. If the HAPS station finds out that these transactions are valid, it discards the “Block ERROR” messages and generates a warning report. In the simulations, we created an administrative program that responds to the warning report from the HAPS station by stopping the execution of the code that leads to the malicious behavior by these GHGs and switching them back to normal behavior, which simulates fixing these GHGs. In order to maintain the percentage of malicious nodes, the program selects other legitimate GHGs and switches them to malicious mode. Our aim is to find out the percentage of cases in which a malicious GHG will be detected and the effect of the GHGs' malicious behavior on the consensus delay, which we will discuss in the simulation results. Note that if the HAPS station receives “Block ERROR” from more than half of the GHGs, it does not approve the new block directly. Rather, it generates the warning report as mentioned before and waits for it to fix the malicious GHGs, then it resends the new block to these GHGs and waits for their replies before it reevaluates the voting results. This was done according to the QUICO consensus condition which was explained in Section <ref>.
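Similarly, the voting evaluation at the block-creating HAPS station can be sketched as follows (again in Python for illustration only; all names are ours, and the thresholds reflect our reading of the QUICO consensus condition in Section <ref>).

def quico_commit_decision(haps_acks, haps_total,
                          ghg_acks, ghg_errors, ghg_total,
                          flagged_tx_valid):
    """Decide whether to commit a block, given the votes collected so far."""
    warning_report = False
    if ghg_errors > 0 and flagged_tx_valid:
        # The transactions flagged by "Block ERROR" votes re-validate as
        # correct: discard those votes and report suspected Consensus Sabotage.
        warning_report = True
        ghg_errors = 0
    commit = (haps_acks == haps_total - 1   # every other HAPS station sent "Block ACK"
              and ghg_acks > ghg_total / 2) # more than half of the GHGs sent "Block ACK"
    return commit, warning_report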
To test the effectiveness of the proposed system, we compared its results with those of the system proposed in <cit.>. The compared system, MOMEC, utilizes the blockchain to facilitate autonomous management and orchestration of virtualized resources in a Mobile Edge Cloud. In addition, MOMEC proposes a custom consensus protocol to secure cloud services that are offered by mobile edge servers. When we simulated MOMEC within the HAPS environment, we made the HAPS stations act as the NFV MANO servers, while the ground gateways acted as the MEC Servers. The two compared systems were evaluated based on the following parameters: 1) Blockchain Throughput (BTh), which is the number of transactions added to the blockchain per second, 2) Transaction Latency (TLa), which is the time from when the transaction is sent by the sensor node until the block that contains the transaction is confirmed by the consensus protocol, 3) Consensus Time (CT), which is the time from the instant a new block is broadcast by the HAPS station until the instant the block is added to the blockchain at all HAPS stations, 4) Attack Detection Rate (ADR), which is the percentage of malicious transactions that are detected by the HAPS stations, 5) Malicious GHG Detection Rate (MGDR), which is the percentage of times a malicious GHG action is detected by the HAPS stations, and 6) Network Traffic (NT), which is the average traffic sent, forwarded, and received by a node (sensor and GHG) per second.
§.§ Simulation Results
§.§.§ Effective HAPS Footprint
In this section, we present the simulations that we performed to determine the effective HAPS footprint when offering cloud service to ground users. Note that previous HAPS papers, such as <cit.>, assumed the HAPS footprint as a ground circle with a radius of 500 km, but this estimate did not consider that the HAPS will be deploying cloud services. Rather, previous systems considered the HAPS as a super macro base station (SMBS) or as an aerial gateway for backhauling isolated BSs. When deploying and offering cloud services, the HAPS will experience a high load from ground users that will limit its capability to cover a wide footprint while satisfying the QoS requirements of the cloud service. For this reason, we tested a set of four simulation scenarios in which the network topology was set to (50 km x 50 km = 2500 km2), (100 km x 100 km = 10,000 km2), (200 km x 200 km = 40,000 km2), and (400 km x 400 km = 160,000 km2). We will refer to these scenarios as Map1, Map2, Map3, and Map4. Next, we obtained the average population in urban areas from <cit.>, the average percentage of people who use the Internet from <cit.>, and the percentage of Internet users who use cloud services from <cit.>. Based on that, we found that the average cloud user density is equal to 394.15 individuals per km2.
From these calculations, the total number of cloud users that were created in the four simulation scenarios was set to (2,500 x 394.15 = 985,375), (10,000 x 394.15 = 3,941,500), (40,000 x 394.15 = 15,766,000), and (160,000 x 394.15 = 63,064,000) users, respectively. In each scenario, the users were scattered uniformly within the network topology. A single HAPS was placed in the center of the topology. Finally, several GHGs were placed in each scenario such that each GHG serves 100 users. Each cloud user sends its requests to the nearest GHG, and the latter forwards them to the HAPS station and vice versa.
The users' data request rate was varied between 100 Kbps and 10 Mbps. For this purpose, we simulated five scenarios in each of the four maps. The data rate in the five scenarios was set to 100Kbps, 500Kbps, 1Mbps, 5Mbps, and 10Mbps. In order to set the user data rate, we made each node send a request to the HAPS every 100 ms, and the HAPS sent back a reply packet. The size of the request packet was set to 128 bits while the size of the reply packet was varied in each scenario to achieve the required data rate. Hence, the size of the reply packet was set to 10, 50, 100, 500, and 1000 Kbits in the five scenarios. In each scenario, we calculated the average End-to-End latency from the instant a cloud user sends a data request packet until it receives the HAPS station reply. The results are shown in Fig. <ref>.
As stated before, different cloud services require different delay thresholds to achieve a good quality of service (QoS). For example, cloud gaming requires a maximum latency of 20ms. In the simulated scenarios, we consider a latency of 100ms as the maximum delay threshold. In other words, when the request delay is greater than 100ms, we consider that the HAPS station failed to provide the cloud service to the user with acceptable QoS. Our aim is to study the maximum number of users that the HAPS station is able to serve with acceptable QoS for each of the five data rates, and hence the HAPS effective footprint. From Fig. <ref>, we deduce that the effective HAPS footprint is equal to 150,000 km2, 78,000 km2, 44,000 km2, 7,000 km2, and 3,000 km2 for the five data rates. If we consider an average data rate of 1 Mbps, then the effective HAPS footprint will be 44,000 km2, which is the value that was used to calculate the total network size in the simulations (i.e., 10 HAPS stations x 44,000 km2 per station).
§.§.§ Varying the Transaction Size
After calculating the HAPS station's effective footprint, we fix the user data rate at 1Mbps while varying the sensor node transaction size between 10 and 1000KB in the next set of simulation scenarios. Fig. <ref> shows the blockchain throughput of the two compared systems. We notice that the throughput generally decreases as the transaction size increases. This is due to the fact that a larger-size transaction is divided into a larger number of packets when being transmitted (due to the MTU limit). In general, this increases the delay for the transaction to reach its destination. When this delay becomes greater than the HAPS station waiting time (100ms in the simulations), the transaction is not added to the current block but remains in the pending list until the next block is generated. Hence, when the size of the transaction increases, the probability that the total delay until the destination receives the last packet of the transaction is greater than 100ms increases. This means that more transactions are delayed to later blocks, which reduces the number of transactions in the block and decreases the throughput. Fig. <ref> shows that QUICO has a higher throughput than MOMEC (by an average of 12%) which is mainly due to the fact that QUICO has a much lower transaction latency, on average, as shown in Fig. <ref>.
Table <ref> illustrates the confidence in the results of Fig. <ref>. The confidence was calculated by taking a sample of 1M readings from each scenario. Table <ref> shows the confidence values for four different significance levels (0.05, 0.1, 0.2, and 0.3), which correspond to 95%, 90%, 80%, and 70% confidence levels, respectively. From the table, the confidence interval becomes smaller as the significance level increases, which is logical. The maximum confidence value is 0.715, which occurs when the average value is equal to 10.453 x 10^6 and the significance level is 0.05. At this value, the confidence interval is equal to [9.74, 11.17] x 10^6 with a 95% confidence level. Such a confidence interval is fairly acceptable, taking into consideration the huge sample size and the variations in the network conditions. In general, the confidence values in the table reflect an overall high accuracy in the simulation results, since the confidence values show that there is no large deviation from the average value in any of the confidence intervals.
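One standard way to compute such intervals — not necessarily the exact estimator used here — is the normal approximation, which is well justified for samples of 10^6 readings. A minimal sketch (function and variable names ours):

import numpy as np
from scipy import stats

def ci_half_width(sample, significance):
    """Half-width of the normal-approximation confidence interval of the mean."""
    sample = np.asarray(sample, dtype=float)
    z = stats.norm.ppf(1.0 - significance / 2.0)
    return z * sample.std(ddof=1) / np.sqrt(len(sample))

# Example: for significance 0.05 (95% confidence), the interval is
# [mean - h, mean + h] with h = ci_half_width(sample, 0.05).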
In Fig. <ref>, the transaction latency is measured from the instant the transaction owner (i.e., sensor node) sends the transaction until the instant the transaction is added to the blockchain. From Fig. <ref>, the average TLa of QUICO varies between 189 and 226ms while that of MOMEC varies between 686ms and 1.17s. This huge difference is due to two main reasons: first, the average consensus delay of QUICO is much smaller, as we will explain soon, and second, the block rejection method in MOMEC (which is adopted from traditional PBFT) is replaced in QUICO with the block check approach (i.e., using the “ERROR Check” and “ERROR Resolve” packets). In MOMEC, if consensus is not reached over a certain block, COMMIT is not made and the block is rejected, which greatly increases the TLa of the block transactions since they will wait for the next block to be added to the blockchain. In QUICO, the HAPS station does not reject a block but resolves the erroneous transactions as explained before. This greatly decreases the average delay of transactions.
The consensus delay is shown in Fig. <ref>. This parameter is one of the major advantages of QUICO as compared to MOMEC and traditional PBFT. Instead of broadcasting PRE-PREPARE and COMMIT messages between the consensus nodes, QUICO relies on the trustworthiness of HAPS stations to reduce the communication and delay overheads. Hence, the new block is broadcast to all consensus nodes which reply to the block creator directly and the consensus decision is made by the latter. The process of avoiding the broadcast storm in the COMMIT phase reduces the consensus delay by approximately 50%, as shown in Fig. <ref>. The CT of MOMEC is higher than that of QUICO by 109% when the transaction size is equal to 10KB and by 121% when the latter is equal to 1000KB.
§.§.§ Varying the Percentage of Malicious Nodes
In the next set of simulations, we vary the percentage of malicious nodes (both sensor nodes and GHGs) between 10 and 80%. Fig. <ref> shows that the throughputs of the two systems decrease in a similar fashion as the percentage of malicious nodes (PMN) increases. This is logical since when the number of malicious sensor nodes increases, the number of malicious transactions increases, and hence, the number of valid transactions decreases. Since the two systems have high attack detection rates, as we will discuss soon, most of the malicious transactions are detected by the legitimate GHGs and the HAPS stations and will not be added to the blockchain. This causes a decrease in the number of transactions that are added to the blockchain and hence a lower throughput.
With respect to the transaction latency, we notice from Fig. <ref> that the TLa of MOMEC is much more affected by the increase in PMN than that of QUICO. We notice that this high increase in the TLa of MOMEC occurs when PMN is higher than 30%. This is due to the consensus property of MOMEC and PBFT, which requires at least two-thirds of the consensus nodes to COMMIT in order to add the block. If more than 1/3 of the nodes do not COMMIT the block, then the latter is rejected by the consensus nodes. This explains the high surge in the TLa of MOMEC when PMN is 40% or higher. In such cases, most of the blocks are rejected and the transactions must wait for the next block to be added to the blockchain, which greatly increases their latency. On the other hand, a smaller surge occurs in the TLa of QUICO when PMN increases above 50%, since in such a case, the HAPS station will receive “BLOCK ACK” from less than half the GHGs and will wait until the malicious GHGs are fixed and they send “BLOCK ACK” before the HAPS station commits the block. This wait causes a small rise in the TLa of QUICO when PMN is greater than 50%, as can be seen in Fig. <ref>.
A similar observation can be noted about the consensus time in Fig. <ref>. The CT of both systems increases with PMN; however, the surge in the CT of MOMEC occurs after PMN = 30%, while that of QUICO happens when PMN is greater than 50%. Note that the CT is also affected by the increase in the waiting time of the HAPS station. When the percentage of malicious nodes is less than the consensus threshold (50% for QUICO and 30% for MOMEC), the HAPS station has a higher probability of receiving the COMMIT messages from the required number of consensus nodes and hence reaching consensus earlier. On the other hand, when the percentage of malicious nodes is greater than the consensus threshold, the HAPS station will wait until it receives replies from most of the consensus nodes to make sure that the block has been accepted or rejected by the majority of nodes (i.e., more than 50% in QUICO and 60% in MOMEC) before it takes the consensus decision. This increases the average consensus delay.
§.§.§ Attack and GHG Detection Rates
While varying PMN between 10 and 80%, we measured both the attack detection rate for malicious sensor transactions (ADR) and the malicious GHG detection rate (MGDR). With respect to the ADR, we added a secret parameter to each transaction that indicates whether it is legitimate or malicious (this parameter is secret because it was not read by network nodes but used when calculating the results only). Next, we calculated the number of malicious transactions that were added to each new block and divided it by the number of malicious transactions that were generated by the sensor nodes. Finally, we took the average over all blocks and subtracted it from 100% to calculate the ADR. Fig. <ref> shows that both QUICO and MOMEC have acceptable ADRs when PMN is small. However, the ADR of MOMEC decreases to 59% when PMN is 80% while that of QUICO decreases to 75% only. The main reason is that QUICO adopts the endorsement strategy, which helps legitimate GHGs and HAPS stations detect malicious transactions that were not endorsed by the sensor nodes that forwarded them. In addition, QUICO makes the GHGs and HAPS stations validate the sensors’ data by comparing them to data that was sent by other nearby sensors at a similar time. A large difference in the data could indicate a malicious transaction. These strategies contribute to increasing the ADR of QUICO. On the other hand, MOMEC implements the default verification model of PBFT. Table <ref> illustrates the confidence in the results of Fig. <ref>. Similar to the results in Table <ref>, the confidence in the ADR results is high, with the largest confidence interval equal to [0.69, 0.75] when PMN is 80%.
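Written compactly (with our own shorthand and our reading of the per-block normalization described above), this ADR estimate is

ADR = 1 - (1/N_B) ∑_b M_b^add/M_b^gen,

where N_B is the number of blocks, M_b^add is the number of malicious transactions that were added to block b, and M_b^gen is the number of malicious transactions generated by the sensor nodes during the corresponding block interval.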
With respect to the malicious GHG detection rate, Fig. <ref> shows that QUICO has a much higher capability of detecting a malicious GHG than MOMEC, especially at a high PMN. QUICO applies a clear strategy to detect a GHG’s malicious behavior. As explained in Section <ref>, a HAPS station checks with the GHGs who sent the transactions that were labeled as erroneous by the malicious GHGs to validate them (via “ERROR Check” packets). In addition, the HAPS station generates a warning report to the network administrators when it detects a possible malicious GHG. The methods applied in QUICO enable HAPS stations to detect malicious GHGs with a high probability (between 86 and 97%, as shown in Fig. <ref>). On the other hand, MOMEC does not adopt any method to detect malicious consensus nodes, which causes its MGDR to drop quickly as the percentage of malicious GHGs increases above 30%. In cases where the majority of GHGs are malicious, consensus will not be reached and the HAPS station will not be able to detect which GHGs are malicious. The authors of <cit.> did not discuss how to detect a malicious consensus node in their paper. Hence, the HAPS stations will consider any GHG that made a decision different from the consensus as malicious, which is a false deduction in many cases.
§.§.§ SN Energy Consumption
One of the main challenges of integrating the blockchain into a WSN is related to dealing with the blockchain overhead on SNs. In this section, we study the effect of the proposed blockchain model on the SN energy consumption. For this purpose, we integrated the NS-3 Energy Framework that was published in <cit.> into our simulations. We modified the parameters of the “Energy Source” and “Device Energy Model” NS-3 classes (we replaced some parameters and added other parameters) to include the factors related to the blockchain operations. Our modifications were based on the Lr-WPAN-MAC model that was proposed in <cit.> and obtained from the lr-wpan code repository[https://code.nsnam.org/vrege/ns-3-gsoc/file/c43c335bc921/src/lr-wpan/model]. In the simulation scenarios, we initialize the Energy Source of each SN with a total energy of 0.2775 Wh (watt-hours) at the beginning of each scenario, which is approximately equivalent to 1000 Joules. This parameter denotes the maximum amount of energy that an SN can consume during the simulation scenario. This value was obtained by assuming that the SN has a battery capacity of 75mAh at 3.7V, which can be considered average battery specifications for a general sensor node.
In this section, we simulated two main types of scenarios: the first is the same as the simulation scenarios that we presented in Section <ref>, while the second scenario does not include the proposed blockchain model. Rather, in the second scenario, SNs send their readings to the CHs who forward them to GHGs/AHGs, and the latter aggregate the readings into data transactions and send them to the HAPS stations which store them in their storage units. In addition, cloud users send their requests to the GHGs/AHGs and the latter forward them to the HAPS stations who reply to the clients via the gateways. Both scenarios were simulated while varying the Transaction size between 10 and 1000KB. For each scenario, we calculate the SN total energy consumption during the simulation time and take the average for all SNs. The results of the two scenarios are shown in Fig. <ref>.
From Fig. <ref>, we can see that the blockchain contributes to a slight increase in the SN total energy consumption. As stated before, the SN does not directly participate in the blockchain operations, such as creating and storing blocks and executing the consensus protocol. These operations are performed by the GHGs/AHGs and HAPS stations only. On the other hand, the SN validates and endorses the data of other SNs and sends/forwards data packets to the CH. In addition, the SN could cache its data for a specific period. In such case, the SN would be contacted by the GHG to resolve questionable transactions, as explained in Section <ref>. Note that these operations are performed by the SN with and without the blockchain integration. Hence, the latter should not have a direct effect on the SN resources. Rather, the slight increase in the SN's energy consumption is due to the effect of the blockchain on the whole network, which indirectly affects the SN. For example, the blockchain (especially the consensus protocol) requires a lot of network communications and broadcasting which congests the network and increases the percentage of resent packets due to intermediate nodes (such as CHs and GHG) dropping some packets.
This observation can be deduced from the figure: as the transaction size increases, a larger number of packets is generated (due to packet segmentation), which increases the congestion. Hence, the number of dropped/resent packets increases, and consequently, the energy consumption increases. However, the increase in the SN energy consumption is limited (15.6% when the transaction size is equal to 1000KB).
§.§.§ Measuring the Network Traffic
The last parameter that we calculated is the network traffic. While varying the PMN, we defined six parameters in each scenario: three for data packets and three for control packets. The three parameters are the total number of packets sent, received, and forwarded, respectively. Each time a sensor node or GHG creates a new data packet and sends it, we increment the Nb_data_sent parameter. Similarly, each time a sensor node or GHG receives a data packet, processes it (for example, endorses the data), and then forwards it, we increment the Nb_data_forward parameter. Finally, each time a GHG receives a data packet from a sensor node, we increment the Nb_data_recv parameter. We did the same thing for control packets, which include the various consensus packets required by each protocol. Note that QUICO and MOMEC use different control packets in the consensus protocol as explained before. At the end of each scenario, we add the three parameters of each type and divide by the total number of nodes (sensors and GHGs) and by the total simulation time to find the NT in packets per node per second. Fig. <ref> illustrates that the two systems produce a similar number of data packets on average. On the other hand, MOMEC generates many more control packets than QUICO. This is mainly due to two reasons: 1) the consensus process in MOMEC contains two broadcast stages (PRE-PREPARE and COMMIT) while that in QUICO contains a single broadcast stage, and 2) when a block is rejected in MOMEC, another block must be created instead and the consensus process is repeated, which doubles the number of consensus packets required to commit the block. Fig. <ref> shows that the number of control packets in MOMEC is higher by about 405% on average than that in QUICO.
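In formula form (using our own shorthand for the six counters), the reported value for each packet type is

NT = (Nb_sent + Nb_recv + Nb_forward) / (N_nodes × T_sim),

in packets per node per second, where N_nodes is the total number of sensor nodes and GHGs and T_sim is the simulation time.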
§ CONCLUSIONS AND FUTURE WORKS
HAPS provide a great opportunity for cloud providers to enhance the quality of their services and expand their outreach. Securing C-HAPS platforms, however, becomes vital, as cloud services are susceptible to a wide range of cyberattacks that target their software, data, and network connections. This paper proposes a blockchain model as a solution to secure the C-HAPS EIM application from major cloud-related attacks. The paper describes the network architecture of the studied application and the blockchain role of each node in the system. The paper presents the details of the system implementation, including the storing and consuming of cloud transactions, the generation of new blocks, and the blockchain consensus protocol specifically designed to account for the EIM requirements. The simulation results, in particular, illustrate the performance of the proposed system in terms of throughput, latency, and resilience to attacks, which highlights the importance of the proposed model in securing future C-HAPS applications.
Future work in the field includes integrating machine learning capabilities into the proposed blockchain operations such as smart contract functions and consensus. The idea herein is that, instead of applying a direct formula to reach consensus, a machine learning algorithm can be applied to evaluate the actions of gateways and select specific gateways to participate in the consensus. Other potential open issues are related to improving the accuracy of the proposed consensus mechanism by assigning validation tasks to sensor nodes in a secure manner. Another possible enhancement is to divide the block validation process among the sensor nodes in the cluster so as to validate very large blocks. Another future research direction is to further investigate the optimal deployment of HAPS stations and gateways to reduce the transaction and consensus delays of the blockchain while keeping the traffic overhead within acceptable limits. Such a direction can also be considered with a joint routing protocol for reliably routing the data packets from the source GHG to the destination HAPS station so as to meet end-to-end latency promises, especially given the ever-growing interest in massive ultra-reliable low latency communications (mURLLC) from the sky.
|
http://arxiv.org/abs/2306.02528v1
|
20230605013103
|
Using machine learning to find exact analytic solutions to analytically posed physics problems
|
[
"Sahel Ashhab"
] |
physics.comp-ph
|
[
"physics.comp-ph",
"quant-ph"
] |
Advanced ICT Institute, National Institute of Information and Communications Technology, 4-2-1, Nukuikitamachi, Koganei, Tokyo 184-8795, Japan
We investigate the use of machine learning for solving analytic problems in theoretical physics. In particular, symbolic regression is making rapid progress in recent years as a tool to fit data using functions whose overall form is not known in advance. Assuming that we have a mathematical problem that is posed analytically, e.g. through equations, but allows easy numerical evaluation of the solution for any given set of input variable values, one can generate data numerically and then use symbolic regression to identify the closed-form function that describes the data, assuming that such a function exists. In addition to providing a concise way to represent the solution of the problem, such an obtained function can play a key role in providing insight and allow us to find an intuitive explanation for the studied phenomenon. We use a state-of-the-art symbolic regression package to demonstrate how an exact solution can be found and make an attempt at solving an unsolved physics problem. We use the Landau-Zener problem and a few of its generalizations as examples to motivate our approach and illustrate how the calculations become increasingly complicated with increasing problem difficulty. Our results highlight the capabilities and limitations of the presently available symbolic regression packages, and they point to possible modifications of these packages to make them better suited for the purpose of finding exact solutions as opposed to good approximations. Our results also demonstrate the potential for machine learning to tackle analytically posed problems in theoretical physics.
Using machine learning to find exact analytic solutions to analytically posed physics problems
Sahel Ashhab
July 31, 2023
==============================================================================================
§ INTRODUCTION
Machine learning has found applications in a large number of areas, especially in the area of science and technology. One of the most remarkable achievements that highlight the power of machine learning is that, given only the basic rules of chess, a machine was able to train itself and subsequently defeat any present-day competitor, human or computer running older methods <cit.>. Machines are also now able to diagnose diseases with a prediction accuracy that is comparable to that of well-trained and experienced physicians <cit.>.
There are now a vast number of proposals to use machine learning for scientific research in such a way that machines perform tasks that were conventionally performed by researchers <cit.>. For example, in experiments that generate large amounts of data and the scientists look for specific features in the data, it is natural to use automated methods to look for these features. In conventional machine learning applications, the overall structure of the data is known, and the algorithms determine the parameters that give the best fit for the data within the known structure. More recent algorithms, such as deep learning, allow one to make predictions even in cases when the rules of inference are unknown, e.g. when it is not known exactly what features one needs to look for. The algorithm is given a training data set, and it finds patterns in this data that it then uses to make inference about new data, e.g. the so-called test data set.
A recently emerging and still developing area of machine learning is symbolic regression <cit.>. In scientific fitting problems, and taking the single-variable case for simplicity, one is typically given a set of N data points of the form (x_j,y_j), with j=1,2,...,N, and one is interested in finding a function f(x) such that one can say that, to some approximation, y=f(x) [see e.g. <cit.>]. In the conventional and common case, the general form of f(x) is known, and only some fitting parameters need to be determined. For example, if one is looking for a linear fit, the function f(x) can be expressed as f(x)=α x + β, where α and β are the fitting parameters. In the study of oscillatory dynamics, one might use the function f(x)=αsin (ω x+ϕ), where α, ω and ϕ are fitting parameters. In symbolic regression, even the form of f(x) is unknown at the beginning of the fitting process. For example, one might not know in advance whether the best fit to the data will be a polynomial function, a trigonometric function, an exponential function or some combination (including sums, products and concatenations) of these. For example, the best fit could be a function of the form
f(x) = α + β x + γ x^2 + δ e^η x^2 + κ x sin (ω x + ϕ),
keeping in mind that even the general form of the function is not known before finding that it gives the best fit. We note here that the generalization to problems with more than one variable, i.e. replacing x by multiple input variables, is conceptually straightforward although it can make the practical implementation of the algorithm to solve a given problem significantly more time consuming.
Previous studies on symbolic regression <cit.>, which have mainly been focused on developing the technique, have generally taken the approach of assuming that the input is given in the form of data and we would like to find the best fit for the data. Here we take a different approach: our aim is to use symbolic regression as a tool to find exact analytic solutions to mathematical problems that are themselves expressed analytically. As such, one does not have any data to begin with. Instead, one could have an equation that one wishes to solve. The data is generated by selecting some values of the input variables, i.e. the variable x in the examples described above, numerically evaluating the corresponding values of y and then using symbolic regression to determine the analytic form of the function being sought. It should be noted here that some studies on complicated physical systems, e.g. density functional theory and molecular dynamics, also generate data and then use general function fitting techniques to obtain models for atom-atom interactions <cit.>. However, in these studies the obtained models are assumed to be approximate empirical models, not exact ones. As we shall discuss below, the difference in our assumptions about the problem formulation and the desired output lead to some constraints that we can impose on the computation and other differences in applying symbolic regression to the problem at hand.
§.§ Motivation: the Landau-Zener problem
We first describe a physical problem that illustrates the possible usefulness of symbolic regression in solving analytically formulated problems. In 1932, four physicists [Landau, Zener, Stückelberg and Majorana (LZSM)] independently addressed the same problem that is commonly known as the Landau-Zener (LZ) problem <cit.>. The problem can be formulated as follows: we have a time-dependent quantity (the quantum state vector) composed of two complex numbers ψ(t) = (ψ_↑(t), ψ_↓(t)) that obeys the Schrödinger equation
\begin{pmatrix} i dψ_↑/dt \\ i dψ_↓/dt \end{pmatrix} = 1/2 \begin{pmatrix} v t & Δ \\ Δ & - v t \end{pmatrix} \begin{pmatrix} ψ_↑ \\ ψ_↓ \end{pmatrix},
where v and Δ are parameters that depend on the exact conditions of the problem (e.g. the rate of change and the minimum value of a magnetic field applied to the magnetic dipole of an electron). We have not explicitly written “(t)” in the above equation, but it should be kept in mind that ψ_↑ and ψ_↓ are functions of the time variable t. We can, with no loss of generality, assume that v and Δ are both positive. It is assumed that at the initial time t→ -∞ the quantum state is given by (ψ_↑, ψ_↓) = (1,0). We now want to know the values of |ψ_↑|^2 and |ψ_↓|^2 at the final time t→∞ <cit.>. These quantities represent the probabilities that the quantum system will stay in its initial state or make a so-called LZ transition. If we make the hypothetical assumption that the solution to this problem is not well known, solving the problem is not at all straightforward. In 1932, LZSM solved the problem using mathematical methods that are not familiar to most physicists. In fact, Landau did not rigorously solve the problem; he only solved it in two opposite limits and then correctly guessed the general solution. Using dimensional analysis, or alternatively by inspection of Eq. (<ref>), it is relatively easy to recognize that the solution, i.e. |ψ_↑(t→∞)|^2 or |ψ_↓(t→∞)|^2, must be a function of the single parameter Δ^2/v. It is not easy to make much progress beyond this point. Although the LZ problem is difficult to solve, its solution is remarkably simple:
|ψ_↑(t→∞)|^2 = e^-πΔ^2/(2v),
which is the solution obtained by LZSM.
This situation leads to the following idea: if (hypothetically) we wanted to solve the LZ problem but we did not know the above simple solution, we could numerically solve the Schrödinger equation and find |ψ_↑(t→∞)|^2 for various values of Δ^2/v. Each one of these data points can be generated with an accuracy of several significant figures in a fraction of a second on a present-day personal computer. Simply by plotting the data of |ψ_↑(t→∞)|^2 as a function of Δ^2/v and looking at the resulting graph, one would immediately recognize that the plot looks like an exponential function. A straightforward fitting procedure would yield the function f(x) = e^-π x/2 and show that the deviation between the data and fitting function is at the level of numerical errors in the computation, indicating that this function is in fact the exact solution and not simply a convenient approximation.
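As an illustration of this procedure, the following Python sketch integrates the two-level Schrödinger equation above numerically (in units where v=1, so that Δ^2 equals the parameter Δ^2/v) and compares the result with e^-π x/2. It is only a minimal example of the kind of computation described here, with scipy chosen for convenience rather than being the tool used in this work, and with a modest final time; larger integration windows further suppress the residual finite-time oscillations.

import numpy as np
from scipy.integrate import solve_ivp

def lz_survival_probability(delta_sq_over_v, t_max=200.0):
    """Integrate the two-level Schrodinger equation and return |psi_up|^2
    at the final time (v = 1, so Delta^2 = delta_sq_over_v)."""
    delta = np.sqrt(delta_sq_over_v)
    def rhs(t, psi):
        h = 0.5 * np.array([[t, delta], [delta, -t]], dtype=complex)
        return -1j * (h @ psi)
    sol = solve_ivp(rhs, [-t_max, t_max],
                    np.array([1.0, 0.0], dtype=complex),
                    rtol=1e-10, atol=1e-12)
    return np.abs(sol.y[0, -1])**2

x = np.logspace(-2, 1, 25)                           # values of Delta^2 / v
p = np.array([lz_survival_probability(xi) for xi in x])
print(np.max(np.abs(p - np.exp(-np.pi * x / 2))))    # deviation from the LZ formula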
For the LZ problem, knowledge about basic mathematical functions is sufficient to determine the function that fits the data. The question now is: if we are dealing with a problem that yields data that is not easily recognizable to a human scientist inspecting the data, can a machine-learning algorithm identify the function? For example, for functions that take more than one input variable [e.g. f(x, y, z) which is a function of three variables with nontrivial functional behaviour], it would in general be difficult for a human to discern all the different features and trends in the function from the data. This function identification procedure is exactly what symbolic regression does.
The important point to note here is that there is a class of theoretical physics (and applied mathematics) problems that have no known analytic solutions but for which the solution can be evaluated straightforwardly on a computer for any value of the input parameter (or parameters). Furthermore, theoretical physics problems, even ones that can be difficult to solve analytically, often have simple-looking solutions that contain only a few terms and only a few of a standard set of mathematical functions. It is for these problems that the our approach would be most relevant and effective. Needless to say, having an analytic closed-form solution is far more attractive and valuable than just knowing that the computer can reliably give us the value of the function for any set of input parameters. In theoretical physics problems, having the analytic solution can also help elucidate the physical mechanisms at play in the phenomenon under study.
§ RESULTS AND DISCUSSION
§.§ Some considerations for implementation
We first discuss a few issues related to the implementation of our approach.
§.§.§ Algorithm structure
The procedure for solving the problem can go as follows:
(1) The user chooses a number of input-variable values to cover a good portion of the variable space. The points could be spaced equally or unequally, e.g. randomly.
(2) The user numerically evaluates the values of the output variable for the selected points.
(3) The user applies the symbolic regression algorithm to identify the mathematical function that describes the relation between the input and output variables.
If the above procedure does not produce the solution, which could happen for hard problems, the following steps can be added:
(4) After starting with the initial set of input-parameter values, and possibly identifying several possible candidates for the sought function, the user can start to actively choose values of x to help distinguish between the different candidate choices for f(x) and eliminate incorrect ones until the correct function is identified with a high degree of confidence. The locations of the new data points can be decided so as to maximize the distinguishability between the different candidate solutions. Needless to say, the choice of the input variable values for the new data points can be automated, and the user does not need to make the decision about where to place these new points. This point shows that our approach can incorporate ideas similar to active learning <cit.>.
(5) The user can focus on specific limits, i.e. input-parameter values close to specific points, to obtain the functional form of the solution in these limits. Knowing the behaviour of the function in different limits can be helpful for the purpose of inferring the general form of the function, just like how Landau was able to solve the LZ problem.
(6) It is common in physics problems that the asymptotic value of the solution can be inferred based on intuitive arguments, without solving the equations in detail. In some cases, the solution might be known for specific point, e.g. when the input variables are equal to zero. There can also be physicality conditions on the allowed range of the solution, e.g. probabilities being bounded between zero and one. Any such prior knowledge about the solution can be incorporated as constraints to be used during the data fitting.
(7) In the case of multiple-variable functions, the user can fix the values of some input variables and try to find the output variable as a function of the remaining input-variables in various special cases. These special-case solutions can then be used as constraints in the step of searching for the full solution.
(8) Zeros, as well as extrema, can be used to assist the search for the fitting function. Different mathematical functions have their own characteristic patterns of where the zeros are located. The zeros in the data can therefore be used as a sort of fingerprint for the sought function.
To our knowledge, points (5-8) are not in the presently available symbolic regression packages, e.g. <cit.>. These features could be incorporated to improve the performance of these packages.
§.§.§ Noise in the data
One point that is worth noting here is that data fitting algorithms usually assume that we are dealing with data that has some noise component, e.g. because of experimental errors or because many unknown factors contribute to determining the data values. As a result, one searches for the best fit the gives the lowest net deviation from the data (among the set of candidate fitting functions). Sometimes, outliers are discarded based on the assumption that they are not representative of the bulk of the data. In the situation that we are considering in this work, the value of the function for any set of input parameters is calculated numerically using a controlled computation. It is therefore possible, at least for some problems, to keep numerical errors at arbitrarily small levels for all data points. In other words, unlike many problems that are commonly encountered in the field of machine learning, one can assume that there is no noise or random component in the data (up to the small numerical errors in the computational evaluation of the function), no outliers that can be ignored, no missing data etc. Therefore one does not need to design the algorithm to accommodate some deviation from the data. One can require that these deviations remain at the error level in the computations, say 10^-10. This fact can be incorporated into the algorithm via the cost function that penalizes deviations between the fitting function and the data. One can use a function that remains negligibly small up to the known error level of the data and then increases dramatically above the error level. In other words, one could disqualify any candidate solution if it gives a deviation cost function that is higher than the allowed value for any data point, even if this candidate solution gives the best fit for most of the data points. This procedure could be helpful in avoiding the common problem in which the search algorithm gets trapped in a local minimum in the landscape of the cost function.
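One possible realization of such a cost function (the functional form and the constants are our own illustrative choices) is:

import numpy as np

def hard_threshold_cost(residuals, error_level=1e-10, penalty=1e12):
    """Ordinary least squares plus a dominating penalty for any data point
    whose residual exceeds the known numerical error level of the data."""
    r = np.abs(np.asarray(residuals, dtype=float))
    violations = np.count_nonzero(r > error_level)
    return np.sum(r**2) + penalty * violations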
§.§.§ Running time
The success of the algorithm obviously relies on the existence of the exact solution in the set of candidate solutions considered by the algorithm. The time needed to perform a full calculation will scale with the number of functions in the full set of candidate functions. One can then start with a relatively small set, e.g. with a limited set of building blocks and a limited number of the building blocks in the candidate solutions. If the first attempt with the small set of candidate solution fails, one can gradually increase the size of the set. Eventually the algorithm can fail for one of two reasons: (1) the nonexistence of an exact solution within the set of standard functions considered by the algorithm, or (2) the high complexity of the exact solution that would lead to a prohibitively long computation time.
§.§ Tests using AI Feynman
In this section we report on attempts that we made using the state-of-the-art symbolic regression package AI Feynman (AIF) to solve physics problems following the approach described above. In all of the tests we used the settings BF_try_time=60, BF_ops_file_type= “14ops.txt”, polyfit_deg=3 and NN_epochs=500. We tried different settings in some of the calculations, especially those in which we expected that AIF should be able to find the correct solutions. However, the different settings that we tried did not lead to improved solutions. It should also be noted that AIF does not necessarily generate the same suggested solutions on repeated runs of the same problem. In cases where we list all the suggested solutions, we report the results that were obtained on the first run for each problem. In these cases, no clear improvement in the best suggested solution was obtained in the 10-20 subsequent runs that we performed. In cases where we give a single solution obtained by AIF, we take the best solution obtained from the 10-20 runs.
§.§.§ Solved problems
As a first reference point, we took the LZ probability function P=e^-π/(2x) with x=v/Δ^2, and we used this function to generate 121 data points with x distributed uniformly on a logarithmic scale in the range [10^-3,10^3]. The data was generated using the software Wolfram Mathematica as well as using the numpy package in python. The maximum difference between the P values in the two data sets was about 1.1 × 10^-16. We can therefore estimate the error in the data to be at or below the level of 10^-16. We ran AIF to find the best fit for the data. The solution file created by AIF contained the following six suggested solutions, ordered from the solution with the smallest error at the top to the solution with the largest error (but smallest complexity) at the bottom
f_1(x) = ( exp (-3.141592653589793/x) )^0.5
f_2(x) = 0.000000000000 + √(exp ( (π+sinπ) / (-x) ))
f_3(x) = (exp ( -3 / x) )^0.5
f_4(x) = tan ( 2 * exp ( - exp (1/x) ) )
f_5(x) = 3 * exp ( - exp (1/x) )
f_6(x) = 0.
The first two of these solutions, i.e. f_1(x) and f_2(x), are clearly the correct solutions (noting here that the long constant in f_1(x) is π to all shown significant figures). It is interesting that f_2(x) contains two clearly superfluous terms: the zero term at the beginning and the term sinπ. It is also not clear why f_1(x) is assigned the largest complexity value by AIF, although it looks simpler than some of the other solutions, e.g. f_2(x).
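For completeness, the data generation for this first test, and the way it is handed to AIF, can be sketched as follows in Python (the file name is arbitrary; the commented-out call shows schematically how AIF would be invoked with the settings quoted above, and the exact function signature may differ between package versions).

import numpy as np

x = np.logspace(-3, 3, 121)        # 121 points, uniform on a logarithmic scale
P = np.exp(-np.pi / (2.0 * x))     # e^{-pi/(2x)} with x = v / Delta^2

# AIF reads whitespace-separated columns with the output in the last column
# (our understanding of its input convention).
np.savetxt("LZ_data.txt", np.column_stack([x, P]))

# import aifeynman
# aifeynman.run_aifeynman("./", "LZ_data.txt", BF_try_time=60,
#                         BF_ops_file_type="14ops.txt",
#                         polyfit_deg=3, NN_epochs=500)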
Now we move somewhat closer to the scenario described in previous sections. We generate the data using numerical simulations of the time-dependent Schrödinger equation. The problem that we seek to solve is similar to the LZSM problem, but with a two-level system (TLS) coupled to a harmonic oscillator <cit.>. The Schrödinger equation is expressed as
id|ψ⟩/dt = Ĥ|ψ⟩
with an infinite-dimensional complex vector |ψ⟩, the Hamiltonian matrix
Ĥ = - (Δ/2) σ̂_x ⊗1̂_∞ - (vt/2) σ̂_z ⊗1̂_∞ + ω1̂_2 ⊗â^†â + g σ̂_z ⊗ (â + â^†),
σ̂_α (with α=x,y or z) are the two-dimensional Pauli matrices, â and â^† are, respectively, the harmonic oscillator annihilation and creation operators, 1̂_n is the n-dimensional identity matrix, and ⊗ stands for the tensor product. We assume that the initial state at t→ -∞ is the ground state of the Hamiltonian |↓,0⟩, where the symbol ↓ indicates the TLS eigenvector with σ̂_z value -1, and the index 0 indicates the harmonic oscillator eigenvector with 0 photons [keeping in mind that the interaction with the TLS modifies the photon number operator from the usual â^†â to (â^†-g/ω) (â-g/ω) for the TLS state ↓ and to (â^†+g/ω) (â+g/ω) for the TLS state ↑]. For the initial state |↓,0⟩, no states of the form |↓,n⟩ with n≥ 1 will be occupied at the final time t→∞. There is also a very good approximation for the final-time occupation probabilities of the states |↑,n⟩, as discussed recently in Ref. <cit.>. Furthermore, if we consider a theoretical model where the states |↓,n⟩ with n≥ 1 in the TLS-oscillator system are ignored, there exists an exact solution for the final-time occupation probabilities <cit.>. The probabilities are given by
P(↓, 0) = e^-πΔ^2/(2v)
P(↓, n) = 0 for all n≥ 1
P(↑, 0) = 1 - e^-πΔ_0^2/(2v)
P(↑, 1) = e^-πΔ_0^2/(2v)( 1 - e^-πΔ_1^2/(2v))
P(↑, 2) = e^-πΔ_0^2/(2v) e^-πΔ_1^2/(2v)( 1 - e^-πΔ_2^2/(2v))
⋮
with
Δ_n = 1/√(n!)( -2g/ω)^n e^-2(g/ω)^2Δ.
This problem provides a good test case, as the complexity of the solutions increases gradually with increasing n, and the exact solution contains only simple mathematical functions.
To generate the data points, we solved the Schrödinger equation numerically following the approach used in Ref. <cit.>: we first fix the value of g/Δ. As a specific example, we took g/Δ=0.1. A polaron transformation was applied, such that we only need to keep the (polaron-transformed) states |↓, 0⟩, |↑, 0⟩, |↑, 1⟩, |↑, 2⟩, ... in the simulation. We kept 101 states, going up to the state |↑, 99⟩. Although the initial and final times are infinite in the theoretical formulation of the problem, we set finite time boundary conditions. To determine suitable initial and final time values, we note that the dynamics is characterized by a series of events at which sudden transitions and probability rearrangement occur. We determined the t values at which these events occur, which is easy to do based on the energy-level crossing points for g=0. We then set the initial value of t to be 100Δ/v before the first event and the final value of t to be 100Δ/v after the last event. The total time duration was divided into 10^5 time steps. The Hamiltonian was approximated as being constant during each time step, with its value calculated based on the t value at the middle of the time step. We used 101 values of v/Δ^2 ranging from 10^-2 to 10^3 and distributed uniformly on a logarithmic scale. Generating a single data point takes a few minutes on a single core of a present-day computer, and generating many data points does not add much to the computation time if the simulation of multiple values of v/Δ^2 is optimized, such that some intermediate steps in the computation are performed once and do not need to be repeated for each value of v/Δ^2. It is almost certain that this computation can be optimized further to generate the data faster. However, since the simulation timescale is not a limiting factor to our overall calculation, we do not attempt to find the most optimal implementation of the dynamics simulation. The P(↓, 0), P(↑, 0), P(↑, 1) and P(↑, 2) values obtained from these simulations had respective deviations of up to 2× 10^-4, 6× 10^-6, 2× 10^-5 and 3× 10^-5 from the exact formulae given in Eq. (<ref>). We emphasize that these numerical errors can be reduced by orders of magnitude if we use a larger time range and smaller time steps, albeit at the cost of a longer computation time.
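The time-stepping scheme described here (Hamiltonian held constant within each step and evaluated at the midpoint of the step) can be summarized generically by the sketch below; it is not the code used to produce the data, and hamiltonian_at stands for a user-supplied function that builds the (polaron-transformed) Hamiltonian matrix at a given time.

import numpy as np
from scipy.linalg import expm

def propagate_piecewise_constant(hamiltonian_at, psi0, t_initial, t_final, n_steps):
    """Propagate psi0 from t_initial to t_final, treating the Hamiltonian as
    constant within each time step (evaluated at the step midpoint)."""
    psi = np.array(psi0, dtype=complex)
    dt = (t_final - t_initial) / n_steps
    for k in range(n_steps):
        t_mid = t_initial + (k + 0.5) * dt
        psi = expm(-1j * hamiltonian_at(t_mid) * dt) @ psi
    return psi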
We ran AIF to find the best fit for the four data sets. For P(↓, 0), and setting x=v/Δ^2, AIF gave the following suggested solutions
f_1(x) = exp (-1.570796326794897/x)
f_2(x) = 0.000000002179 + √(exp (π / (-x)))
f_3(x) = 0.000000002179 + exp (π / (x × (cosπ -1)))
f_4(x) = exp (-1.5 / x)
f_5(x) = exp (-2 / x)
f_6(x) = 0.
The constant in f_1(x) is π/2 to all shown significant figures. Apart from the first term in f_2(x) and f_3(x), i.e. 2.179× 10^-9, the first three functions are all correct, only expressed in different ways.
For P(↑, 0) the suggested solutions were
f_1(x) = exp(cos(exp(-1/x))) - 1.718281953032
f_2(x) = ( cos(1.33333333333333 ×exp(-1/x)) )^2
f_3(x) = ( cos(exp(-1/x)) )^2
f_4(x) = ( cos(x ×exp(-1/x)) )^2
f_5(x) = ( cos(x^3) )^2
f_6(x) = 1
f_7(x) = 0.
The first of these solutions looks rather complicated, but it is in fact a rather good approximation for the data, as shown in Fig. <ref>. Nevertheless, there is a clear deviation between the data and the fitting function around v/Δ^2=1. This result illustrates the point made in Sec. <ref> about the fact that we can set the accepted error level to the numerical error level in the computation. If we had done that, f_1(x) would have been disqualified, because its small deviation from the data (which reaches a maximum value of 4.5× 10^-2 at v/Δ^2=1) is clearly larger than the computational error level. The functions f_2(x) and f_3(x) are less good approximations, with clear deviations from the data for large values of v/Δ^2. The functions f_4(x) and f_5(x) are close to 1 for small v/Δ^2 values. When P(↑, 0) makes a turn and deviates away from 1, both f_4(x) and f_5(x) start to oscillate fast. To summarize, only the first suggested solution is a good approximation, and it is a surprisingly good approximation but clearly not the solution that we are seeking. It is also somewhat surprising that AIF did not find the exact solution given in Eq. (<ref>), even though the exact solution is less complex than some of the solutions suggested by AIF.
For P(↑, 1) and P(↑, 2), after ten runs for each data set, AIF gave only one suggested solution, namely 0 (expressed in different ways). Interestingly, on some of the runs on the P(↑, 2) data, the suggested solutions included variations on the expression
f(x) = √(- exp (-1/x^2)),
which is clearly imaginary and cannot be the correct solution. We suspected that the reason for the inability of AIF to generate any good fitting functions in these cases is the fact that the probabilities are small, with maximum values of 0.0144 and 0.000283 for P(↑, 1) and P(↑, 2), respectively. We multiplied the probability data by 10^2 and 10^4, such that the peak value in each data set was amplified to the order of one. After this change, the best two suggested solutions for P(↑, 1) were
f_1(x) = 1.69777143001556 ×sin{sin[ 3.12565946578979 ×exp( -1.06572902202606/x^1.03745067119598) ] }
f_2(x) = 1.48647665977478 ×sin{ 3.11723256111145 ×exp[ -1.05347275733948/x] }.
For P(↑, 2), the best suggested solution was
f(x) = 3.33333333333333 ×sin{sin[ 3.12629866600037 ×exp( -1.10573124885559/x^1.04231333732605) ] }.
As can be seen in Fig. <ref>, Eqs.(<ref>) and (<ref>) give very good fits to the data. However, although the deviation is barely discernible, it is at the level of 10^-2, which is larger than the computational error level and hence disqualifies both Eqs.(<ref>) and (<ref>) from being exact solutions.
We performed the same calculations for g/ω=1. The results for P(↓, 0) were similar to those shown in Eq. (<ref>): several equivalent expressions describing the correct solution (including zero or small extra terms) as well as a few low-complexity but incorrect solutions [specifically f_4(x), f_5(x) and f_6(x)]. For P(↑, 0) the suggested solutions were
f_1(x) = 0.028778633465 ×( x+(exp(π+1)+1)^-1)^-1
≈ 0.028779/(x+0.015649)
f_2(x) = 0.090288630445 ×sin(1/(π x))
f_3(x) = 0.057509404466 ×sin(0.5/x)
f_4(x) = 0.028444255608/x,
as well as a few additional solutions that have the same forms as some of those in Eq. (<ref>) but with different constants. The function f_1(x) is a good approximation for the data except for a small deviation at the small-x end, as shown in Fig. <ref>. It should be emphasized that this good agreement is somewhat misleading, as the data saturates at 1 in the limit x→ 0 while f_1(x) continues to increase with decreasing x until it reaches f_1(0)=1.839, a clearly unphysical and therefore disqualifying feature. This case illustrates two points that we mentioned in Sec. <ref>: (1) the usefulness of imposing physicality conditions on the allowed solutions and (2) the possibility of using active learning and adding more data points based on the initial results of fitting the first batch of data points. The functions f_2(x) and f_3(x) are good approximations for large values of x (partly because both they and P(↑, 0) approach 0 in the limit x→∞) but become fast oscillating functions at small values of x.
The data and best suggested solutions for P(↑, 1) and P(↑, 2) after 10 runs of AIF are shown in Fig. <ref>. The best suggested solution to fit the P(↑, 1) data was
f(x) = sin( -0.000147037180 + 1/x ×exp(π-1)+1).
The best suggested solution for P(↑, 2) was
f(x) = 0.230159951539 ×exp(1/x)^-0.25/x.
The function in Eq. (<ref>) is a poor fit to the data. The function in Eq. (<ref>) is a generally good fit to the data everywhere. However, the deviation between the data and the fitting function is sufficiently large for us to conclude that the best suggested solution generated by AIF is not the exact solution that we are seeking.
§.§.§ Unsolved problem – Multi-level LZ problem
One example of a problem that has no known analytic solution is the multi-level LZ problem, i.e. the generalization of the LZ problem (described in Sec. <ref>) to a quantum system with more than two quantum states. For example, in the three-level case, the Schrödinger equation can be expressed as:
( [ i dψ_1/dt; i dψ_2/dt; i dψ_3/dt ]) =
( [ v_1 t + E_1, Δ_12/2, Δ_13/2; Δ_12/2, v_2 t + E_2, Δ_23/2; Δ_13/2, Δ_23/2, v_3 t + E_3 ]) ·( [ ψ_1; ψ_2; ψ_3 ]).
This equation has nine parameters (v_1, v_2, v_3, E_1, E_2, E_3, Δ_12, Δ_13, and Δ_23), but one can use simple arguments to reduce the number to five parameters that determine the solution <cit.>. The five parameters on which the unknown functions (i.e. |ψ_j(t→∞)|^2) will depend are E_3-(E_1+E_2)/2, v_3/(v_1-v_2), Δ_12^2/(v_1-v_2), Δ_13^2/(v_1-v_2), and Δ_23^2/(v_1-v_2). We can solve Eq. (<ref>) numerically for any combination of parameters and obtain results that are accurate to several significant figures in a fraction of a second (although extreme parameter values that approach zero or infinity can pose problems for the numerical solution of the equation) <cit.>.
It should be noted here that although the multi-level LZ problem is not fully solved, there is an exact analytic solution for the probability to remain in the same state if the system starts in the lowest or highest state. In other words, assuming that at t→ -∞ the state is (ψ_1, ψ_2, ψ_3) = (1,0,0) and v_1 is larger than both v_2 and v_3,
|ψ_1(t→∞)|^2 = exp{-∑_j≠ 1πΔ_1j^2/2(v_1 - v_j)},
where the sum over j is a sum over all the states except the initial state <cit.>.
Since we expect that this problem in its full five-variable form is too hard for the current version of AIF, we consider just one special case: we set Δ_12=Δ_13=Δ_23=Δ, v_1=-v_3=v, v_2=0, E_1=E_3, E_2-E_1=-Δ/2. As explained in Sec. <ref>, the final occupation probabilities will now be functions of the single parameter v/Δ^2. These are the probabilities that we would like to evaluate and subsequently fit. We emphasize that even this special case of the three-level LZ problem has no known analytical solution.
Similarly to what we did with the TLS-oscillator problem treated in Sec. <ref>, we generate the data points by numerical integration of the Schrödinger equation, which is Eq. (<ref>) in this case. For the numerical calculations, we set the initial and final times to -vt_ initial=vt_ final=10^5. We divide the total time into 10^7 time steps and use the average Hamiltonian in each step. We assume that the initial state is (ψ_1, ψ_2, ψ_3) = (1,0,0) and calculate the probabilities P_1, P_2 and P_3 (P_n=|ψ_n|^2) at the final time. Generating a single data point takes a few minutes, which does not create a computational bottleneck for our overall calculation. We note here that we repeated the calculations by setting -vt_ initial=vt_ final=10^4 and using 10^5 time steps. With these computation settings, generating one data point takes a few seconds. By comparing the different simulation results with the formula in Eq. (<ref>), we can estimate that our final probability data obtained with the finer simulation parameters deviate from the exact values by up to 4× 10^-6 (mainly in the region where |ψ_1(t→∞)|^2 rises from 0 to 1) while those with the coarser simulation parameters deviate from the exact values by up to 4× 10^-4 (mainly at the small-v end of the data).
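As a concrete illustration of this procedure, the following Python sketch (our addition, not part of the original text) implements the piecewise-constant time stepping described above for the special case defined earlier in this subsection; the default grid matches the coarser settings quoted in the text, and all numerical choices below are illustrative.

import numpy as np
from scipy.linalg import expm

def three_level_lz_populations(v, delta, vt_max=1.0e4, n_steps=100_000):
    """Evolve (psi_1, psi_2, psi_3) = (1, 0, 0) from t = -T to t = +T for the
    special case Delta_12 = Delta_13 = Delta_23 = delta, v_1 = v, v_2 = 0,
    v_3 = -v, E_1 = E_3 = 0 and E_2 = -delta/2.  In each step the Hamiltonian
    is frozen at its midpoint value, which equals its average over the step
    because H(t) is linear in t."""
    T = vt_max / v
    t_edges = np.linspace(-T, T, n_steps + 1)
    dt = t_edges[1] - t_edges[0]
    half = delta / 2.0
    psi = np.array([1.0, 0.0, 0.0], dtype=complex)
    for t_mid in 0.5 * (t_edges[:-1] + t_edges[1:]):
        H = np.array([[v * t_mid,        half,        half],
                      [half,      -delta / 2.0,       half],
                      [half,             half, -v * t_mid]])
        psi = expm(-1j * H * dt) @ psi   # exact propagator for the frozen H
    return np.abs(psi) ** 2              # final populations (P_1, P_2, P_3)

# One data point of the curves P_n as functions of v / delta^2:
P1, P2, P3 = three_level_lz_populations(v=1.0, delta=1.0)
print(P1, P2, P3, P1 + P2 + P3)          # populations sum to 1 up to numerical error

Sweeping v/Δ^2 over a grid of values then produces the data sets that are fed to the symbolic-regression step.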
The data and best suggested solutions are shown in Fig. <ref>. For the P_1 data set, the best suggested solution by AIF was
f(x) = exp{ -4.71008510610149 × x^-0.999879062175751}.
We can recognize that the constants in this expression are close to simple constants (to within 5× 10^-4), and the above expression can be simplified by rounding the constants to
f(x) = exp{ -3π/2x}.
This function coincides with the known solution for P_1. For P_2 the best solution suggested by AIF was
f(x) = sin3.340054886514/x + exp((π+1)/x).
For P_3 the three top suggestions by AIF were all variants of the function
f(x) = 1 - exp{1/-x + sin x}
with some negligibly small differences in the constants. The functions in Eqs. (<ref>) and (<ref>) are good fits for the data for most values of v/Δ^2. However, these functions completely miss the peak-dip pair in the data in the region 0.3<v/Δ^2<1 and also exhibit a noticeable deviation from the data in the region 2<v/Δ^2<10. Hence they cannot be the exact solutions that we are seeking.
To conclude this section, we emphasize that the examples that we have presented here are just a few specific examples that are simple generalizations of the LZ problem. There will undoubtedly be more important and interesting problems in theoretical physics to tackle using symbolic regression as the algorithms become more powerful and able to identify more complex functions. Any unsolved problem for which we can use computational methods to find the solution for any set of input parameters will be well suited for treatment with this approach.
§ CONCLUSION
We have explored the use of recently emerging symbolic regression tools to find closed-form solutions for analytically posed problems in theoretical physics and applied mathematics. Assuming that a problem can be solved numerically for any set of input parameters, the approach is to first generate a numerical data set from the analytically posed problem and then use symbolic regression, possibly combined with active learning, to identify the functional representation of the data, hence obtaining the analytic solution to the problem. We gave a few examples based on the LZ problem to illustrate the computational procedure and demonstrate the abilities and limitations of state-of-the-art symbolic regression tools. Our results suggest that modifying symbolic regression packages to enforce stricter constraints on the allowed error and to impose physicality constraints will be crucial to making these packages more capable and efficient in finding exact solutions. We believe that the approach discussed here will help in the development of powerful computational tools to help answer important questions in theoretical physics and applied mathematics.
§ ACKNOWLEDGMENT
We would like to thank Sanjay Chawla, Raka Jovanovic, Stefano Rizzo and Kouichi Semba for useful discussions and Takashi Nakayama for helping set up the computing environment that was used for some of our calculations. This work was partially supported by the Q-LEAP Quantum AI Flagship Project of the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan (Grant Number JPMXS0120319794).
Silver D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis, A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play, Science 362, 1140 (2018).
NatMatEditorial Ascent of machine learning in medicine, Nat. Mater. 18, 407 (2019).
Radovic A. Radovic, M. Williams, D. Rousseau, M. Kagan, D. Bonacorsi, A. Himmel, A. Aurisano, K. Terao, and T. Wongjirad, Machine learning at the energy and intensity frontiers of particle physics, Nature 560, 41 (2018).
Carleo G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto, and L. Zdeborová, Machine learning and the physical sciences, Rev. Mod. Phys. 91, 045002 (2019).
Koza J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, (MIT Press, Cambridge, 1992).
Schmidt M. Schmidt and H. Lipson, Distilling free-form natural laws from experimental data, Science 324, 81 (2009).
Eureqa R. Dubčáková, Genetic programming and evolvable machines 12, 173 (2011).
Wu T. Wu and M. Tegmark, Toward an artificial intelligence physicist for unsupervised learning, Phys. Rev. E 100, 033311 (2019).
Jovanovic R. Jovanovic and S. Ashhab, A GRASP approach for symbolic regression, Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence (SSCI), 1723 (2019).
Udrescu S.-M. Udrescu and M. Tegmark, AI Feynman: A physics-inspired method for symbolic regression, Sci. Adv. 6, eaay2631 (2020).
Udrescu2 S.-M. Udrescu, A. Tan, J. Feng, O. Neto, T. Wu, and M. Tegmark, AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity, arXiv:2006.10782 [34th Conference on Neural Information Processing Systems (Neurips 2020), Vancouver, Canada].
Iten R. Iten, T. Metger, H. Wilming, L. del Rio, and R. Renner, Discovering physical concepts with neural networks, Phys. Rev. Lett. 124, 010508 (2020).
Qiu J. Qiu, G. Zhong, Y. Lu, K. Xin, H. Qian, X. Zhu, The Newton scheme for deep learning, arXiv:1810.07550.
Alhousseini I. Alhousseini, W. Chemissany, F. Kleit, and A. Nasrallah, Physicist's journeys through the AI world - a topical review. There is no royal road to unsupervised learning, arXiv:1905.01023.
Unke O. T. Unke, S. Chmiela, H. E. Sauceda, M. Gastegger, I. Poltavsky, K. T. Schütt, A. Tkatchenko, and K.-R. Müller, Machine Learning force fields, Chem. Rev. 121, 10142 (2021).
Mishin Y. Mishin, Machine-learning interatomic potentials for materials science, Acta Mater. 214, 116980 (2021).
Behler J. Behler, Four generations of high-dimensional neural network potentials, Chem. Rev. 121, 10037 (2021).
LZReviews The LZ problem is reviewed, for example, in F. Di Giacomo and E. Nikitin, The Majorana formula and the Landau–Zener–Stückelberg treatment of the avoided crossing problem, Sov. Phys. Uspekhi 48, 515 (2005); S. N. Shevchenko, S. Ashhab, and F. Nori, Landau-Zener-Stückelberg interferometry, Phys. Rep. 492, 1 (2010).
Settles B. Settles, Computer Sciences Technical Report 1648 (2010); http://burrsettles.com/pub/settles.activelearning.pdf
Wubs M. Wubs, K. Saito, S. Kohler, P. Hänggi, and Y. Kayanuma, Gauging a quantum heat bath with dissipative Landau-Zener transitions, Phys. Rev. Lett. 97, 200404 (2006).
Ashhab2023 S. Ashhab, T. Fuse, F. Yoshihara, S. Kim, and K. Semba, Controlling qubit-oscillator systems using linear parameter sweeps, arXiv:2303.09834.
Demkov Yu. N. Demkov and V. I. Osherov, Stationary and nonstationary problems in quantum mechanics that can be solved by means of contour integration, Sov. Phys. JETP 53, 1589 (1968).
Ashhab2016 S. Ashhab, Landau-Zener transitions in an open multilevel quantum system, Phys. Rev. A 94, 042109 (2016).
Shytov A. V. Shytov, Landau-Zener transitions in a multilevel system: An exact result, Phys. Rev. A 70, 052708 (2004).
SchroedingerEqFootnote The Schrödinger equation has the property that it conserves |ψ_↑|^2+|ψ_↓|^2, so we only need to calculate either |ψ_↑|^2 or |ψ_↓|^2.
MultiLevelLZFootnote If we do not consider the phases in the quantum superpositions at the initial and final times, we now have four combinations of initial and final states. We can for example take the initial state at t→ -∞ as (ψ_1, ψ_2, ψ_3) = (1,0,0) or (ψ_1, ψ_2, ψ_3) = (0,1,0). In each case we have two unknown functions, e.g. the values of |ψ_2|^2 and |ψ_3|^2 at t→∞. The remaining five probabilities can then be determined from unitarity and probability conservation, e.g. the constraint that |ψ_1|^2+|ψ_2|^2+|ψ_3|^2=1.
|
http://arxiv.org/abs/2306.02833v1
|
20230605122913
|
The $L^\infty$ Learnability of Reproducing Kernel Hilbert Spaces
|
[
"Hongrui Chen",
"Jihao Long",
"Lei Wu"
] |
stat.ML
|
[
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
] |
The L^∞ Learnability of Reproducing Kernel Hilbert Spaces
Hongrui Chen, Jihao Long, Lei Wu
==========================================================================
In this work, we analyze the learnability of reproducing kernel Hilbert spaces (RKHS) under the L^∞ norm, which is critical for understanding the performance of kernel methods and random feature models in safety- and security-critical applications. Specifically, we relate the L^∞ learnability of an RKHS to the spectrum decay of the associated kernel, and we establish both lower and upper bounds on the sample complexity. In particular, for dot-product kernels on the sphere, we identify conditions under which L^∞ learning can be achieved with polynomially many samples.
Let d denote the input dimension and assume the kernel spectrum roughly decays as λ_k∼ k^-1-β with β>0. We prove that if β is independent of the input dimension d, then functions in the RKHS can be learned efficiently under the L^∞ norm, i.e., the sample complexity depends polynomially on d. In contrast, if β=1/poly(d), then the L^∞ learning requires exponentially many samples.
§ INTRODUCTION
Traditional machine learning has primarily focused on average-case scenarios, seeking to optimize performance measures that capture average behavior. For instance, in regression problems the L^2 error is the most popular metric.
However, in many safety- and security-critical applications, it is also crucial to consider worst-case scenarios, where the performance of the learning algorithm needs to be guaranteed regardless of the data distribution. For instance, worst-case guarantees are required in ensuring adversarial robustness <cit.>, solving PDEs <cit.>, and understanding reinforcement learning <cit.>.
This requires us to move beyond the L^2 metric and to consider the performance of machine learning algorithms in the L^∞ metric.
In this paper, our goal is to study the L^∞ learning of functions in a reproducing kernel Hilbert space (RKHS) <cit.>. In algorithm design, RKHSs have emerged as a powerful framework, providing a flexible and expressive class of models that can capture complex patterns in data <cit.>. In theoretical analysis, because of the reproducing property, RKHSs have been widely adopted in analyzing the performance of kernel-based methods, random feature models <cit.>, and neural networks <cit.>. However, existing analyses of learning in an RKHS mostly focus on the L^2 metric. Understanding the L^∞ learnability of an RKHS is still largely open.
As particular examples, we will mostly focus on dot-product kernels on the unit sphere <cit.>. A kernel k: ^d-1×^d-1↦ is said to be dot-product if there exists κ: [-1,1]↦ such that
k(x,x') = κ(x^Tx').
Dot-product kernels are among the most popular kernels used in kernel-based methods <cit.>. In particular, the neural tangent kernels <cit.> arising from the study of neural networks are all dot-product kernels <cit.>.
Our contribution.
In this paper, we provide a spectral-based analysis of the L^∞ learnability of functions in an RKHS. Specifically, we establish both lower bounds and upper bounds (the latter only for dot-product kernels) on the sample complexity by using the eigenvalue decay of the associated kernel. Let {μ_j}_j≥ 1 be the eigenvalues in decreasing order, Λ(m) = ∑_j=m+1^+∞μ_j, and n the number of samples. Our results
can be summarized as follows.
* We first establish an upper bound on the L^∞ excess risk for dot-product kernels in terms of the spectrum decay Λ(n). In particular, if there exists an absolute constant β>0 such that the eigenvalues decay like μ_j∝ j^-1-β, then the L^∞ error is given by O(d^β/2β+1n^-β/2(2β+1)), where the dependence on d is only polynomial. This implies that L^∞ learning is feasible as long as the eigenvalues decay reasonably fast.
It is important to note that the learning algorithm we employ is standard kernel ridge regression (KRR).
* We next establish a lower bound on the minimax L^∞ estimation error. In particular, the lower bound is proportional to the quantity Λ(n), indicating that the L^∞ excess risk decays like O(n^-2β_d) when the eigenvalues satisfy μ_j∼ j^-1-β_d. Consequently, L^∞ learning suffers from the curse of dimensionality if β_d=1/poly(d).
* We also apply our bounds to RKHSs associated with dot-product kernels of the form: k(x,x')=_v∼τ_d-1[σ(v^Tx)σ(v^Tx')], where τ_d-1 is the uniform distribution over ^d-1 and σ(·) is an activation function. Specifically, when
σ(·) is a smooth function (such as sigmoid, softplus, and SiLU), L^∞ learning in the corresponding RKHSs needs only polynomially many samples. In contrast, when σ(·) is nonsmooth (e.g., ReLU), L^∞ learning inevitably suffers from the curse of dimensionality. These conclusions follow from analyzing how the eigenvalue decay of k(·,·) depends on the smoothness of σ(·).
§.§ Related Work
To the best of our knowledge, the first study of L^∞ learnability in an RKHS was conducted by <cit.>, establishing both upper and lower bounds in terms of the kernel spectrum decay Λ(n) = ∑_i=n+1^+∞μ_i. The lower bound, specifically, shows that the worst-case error of any L^∞ learning algorithm within an RKHS can be bounded below by the kernel spectrum decay. Subsequent research <cit.> has predominantly employed the same methodology to investigate the lower bound. Note that <cit.> considers a setup where the training samples {x_i}_i=1^n are deterministic. In their framework, the hard functions are data-dependent and, consequently, for different training samples {x_i}_i=1^n the hard target functions might be different. In contrast, we consider the standard statistical learning setup, where our training data {x_i}_i=1^n are samples drawn from a specific distribution. Our lower bound is established for the standard minimax errors <cit.>, where the hard functions are independent of the training data.
Let e_i be the eigenfunction associated with the eigenvalue λ_i for i∈. The upper bound in <cit.> requires a uniform L^∞ bound on the eigenfunctions: sup_i ∈ℕ^+e_i_∞ < +∞. This assumption is also found in <cit.> for the upper bound. Moreover, in <cit.>, this condition is relaxed to requiring sup_i ∈ℕ^+μ_i^ϵe_i_∞ < +∞, where ϵ∈ [0,1/2) is a universal constant. The upper bound in this case depends on the modified eigenvalue decay Λ^ϵ(n) = ∑_i=n+1^+∞μ_i^1-2ϵ. However, these assumptions exclude dot-product kernels on the sphere, since the L^∞ norm of spherical harmonics, the eigenfunctions of dot-product kernels, grows rapidly <cit.>.
Therefore, existing results for the upper bound are only applicable to the dot-product kernel with extremely fast spectrum decay when d is large.
Concurrent work <cit.> also explores L^∞ learnability for dot-product kernels. However, they focus on Gaussian random fields, and their results are not applicable to an entire RKHS. In this work, we establish an upper bound on the L^∞ learnability of the entire RKHS of a dot-product kernel based on the kernel spectrum decay, despite the rapid growth of the L^∞ norm of spherical harmonics. Furthermore, it is important to note that the L^∞ learnability in <cit.> is established for an ad-hoc algorithm, whereas ours is for the standard KRR algorithm.
§ PRELIMINARIES
*Notations.
For ⊂ℝ^d, we denote by () the set of probability measures on and () the space of signed Radon measures equipped with the total variation norm μ_()=μ_TV. For a probability measure γ∈(), we use ⟨·,·⟩_γ and ·_γ to denote the L^2(γ) inner product and norm, respectively. For any vector v in Euclidean space, denote by v = (∑_i |v_i|^2 )^1/2 the Euclidean norm. Let ^d-1 be the unit sphere on ^d and τ_d-1 denote the uniform measure on ^d-1.
We use a ≲ b to mean a ≤ Cb for an absolute constant C > 0 and a ≳ b is defined analogously. We use a ∼ b if there exist absolute constants C_1, C_2 > 0 such that C_1b ≤ a ≤ C_2b. We also use a≲_γ b to denote that a≤ C_γ b for a constant C_γ that depends only on γ and a≳_γ b is defined analogously.
*Mercer decomposition and RKHSs.
We recall some facts about the eigen decomposition of a kernel.
For any kernel k:×↦ and a probability measure γ∈(), the associated integral operator _k^γ: L^2(γ)↦ L^2(γ) is given by
_k^γ f = ∫ k(·,x) f(x) γ(x).
When k is continuous and is compact, Mercer's theorem guarantees the existence of an eigendecomposition of k:
k(x,x')=∑_i=1^∞μ_i e_i(x)e_i(x').
Here {μ_i}_i=1^∞ are the eigenvalues in a decreasing order and {e_i}_i=1^∞ are the orthonormal eigenfunctions satisfying ∫ e_i(x)e_j(x)γ(x)=δ_i,j. The trace of k satisfies ∑_i=1^∞μ_i = ∫_ k(x,x) γ̣(x)
Note that the decomposition depends on the input distribution γ and when needed, we will denote by (μ_i^k,γ,e_i^k,γ) the i-th eigenvalue and eigenfunction to explicitly emphasize the influence of k and γ. By using the Mercer decomposition, the RKHS can be defined as
_k={f: f__k<∞},
where the RKHS norm is given by
f__k^2 = ∑_i=1^∞⟨ f,e_i⟩^2_γ/μ_i.
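As a quick numerical illustration (our addition), the Mercer eigenvalues μ_i with respect to γ can be approximated by the eigenvalues of the scaled Gram matrix K/n built from n i.i.d. samples of γ; the kernel, dimension, and sample size in the Python sketch below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 5

# Samples x_i ~ gamma; here gamma is taken to be the uniform distribution on S^{d-1}.
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# An arbitrary continuous kernel, k(x, x') = exp(x^T x' - 1), for illustration.
K = np.exp(X @ X.T - 1.0)

# The eigenvalues of K / n approximate the Mercer eigenvalues mu_1 >= mu_2 >= ...
mu_hat = np.sort(np.linalg.eigvalsh(K / n))[::-1]
print(mu_hat[:8])
# Sanity check of the trace identity: sum_i mu_i = integral of k(x, x) d gamma(x) = 1 here.
print(mu_hat.sum())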
*Legendre polynomials and spherical harmonics.
Legendre polynomials and spherical harmonics are critical for the analysis of dot-product kernel on the sphere. The Legendre polynomials in d-dimension is recursively defined by:
P_0,d(t)=0, P_1,d(t)=t,
P_k,d(t)=2 k+d-4/k+d-3 t P_k-1,d(t)-k-1/k+d-3 P_k-2,d(t), k ≥ 2 .
Note that {P_k,d}_k=0^∞ forms a complete orthogonal basis of L^2(τ̃_d-1), where τ̃_d-1 is the marginal distribution of the uniform distribution on the sphere, i.e., the distribution of x_1 for x ∼τ_d-1.
Let 𝒴_k^d be the space of all homogeneous harmonic polynomials of degree k in d dimensions restricted to 𝕊^d-1; the dimension of this space is N(d, k) := (2k+d-2)/k·(k+d-3 choose d-2). Let {Y_k, j}_1 ≤ j ≤ N(d, k) be an orthonormal basis of 𝒴_k^d in L^2(τ_d-1). Then Y_k, j: 𝕊^d-1↦ℝ is the j-th spherical harmonic of degree k, and {Y_k, j}_k ∈ℕ, 1 ≤ j ≤ N(d, k) forms an orthonormal basis of L^2(τ_d-1). Moreover, the degree-k harmonics {Y_k, j}_1 ≤ j ≤ N(d, k) satisfy
1/N(d,K)∑_j=1^N(d,k) Y_k,j(x)Y_k,j(x') = P_k,d(x^⊤ y), ∀ x,x' ∈^d-1.
*Dot-product kernels on the unit sphere.
Sphere harmonics are the common eigenfunctions for all dot-product kernels on the sphere. Specifically, consider a dot-product kernel k(x,x') = κ(x^⊤ x') on ^d-1. Then, the spectral decomposition of k corresponding to the uniform measure τ_d-1 is
κ(x^⊤ x')= ∑_k=0^∞∑_j=1^N(d,k)λ_k Y_k,j(x) Y_k,j(x'),
where {λ_k}_k≥ 0 are the eigenvalues of the kernel operator _k^τ_d-1 counted without multiplicity, in decreasing order. Note that N(d,k) is the multiplicity of the k-th eigenvalue λ_k, and {μ_j}_j≥ 1 are the corresponding eigenvalues counted with multiplicity.
We refer to <cit.> and <cit.> for more details about the harmonic analysis on ^d-1.
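To make these objects concrete, the short Python sketch below (our addition) implements the recursion for P_k,d, the multiplicity N(d,k), and a quadrature computation of the eigenvalues λ_k; it uses the fact, which follows from the addition formula and the eigenfunction relation above, that λ_k equals the average of κ(t)P_k,d(t) under the marginal distribution τ̃_d-1, whose density on [-1,1] is proportional to (1-t^2)^(d-3)/2. The test kernel κ(t) = exp(t-1) and the dimension are arbitrary choices.

import numpy as np
from scipy.integrate import quad
from scipy.special import comb

def legendre_d(k, d, t):
    """Legendre polynomial P_{k,d}(t) via the three-term recursion above."""
    if k == 0:
        return np.ones_like(t)
    p_prev, p_curr = np.ones_like(t), t
    for j in range(2, k + 1):
        p_next = ((2 * j + d - 4) * t * p_curr - (j - 1) * p_prev) / (j + d - 3)
        p_prev, p_curr = p_curr, p_next
    return p_curr

def n_dk(d, k):
    """Multiplicity N(d, k), the dimension of the degree-k spherical harmonics."""
    if k == 0:
        return 1
    return int(round((2 * k + d - 2) / k * comb(k + d - 3, d - 2)))

def lambda_k(kappa, d, k):
    """Eigenvalue lambda_k of the dot-product kernel kappa(x^T x') on S^{d-1},
    computed as the average of kappa(t) * P_{k,d}(t) under the marginal
    distribution of t = x^T x' for independent uniform x, x' (density
    proportional to (1 - t^2)^((d-3)/2) on [-1, 1]); assumes d >= 3."""
    weight = lambda t: (1.0 - t * t) ** ((d - 3) / 2.0)
    normalisation, _ = quad(weight, -1.0, 1.0)
    value, _ = quad(lambda t: kappa(t) * legendre_d(k, d, t) * weight(t), -1.0, 1.0)
    return value / normalisation

# Illustration with the arbitrary smooth kernel kappa(t) = exp(t - 1) in d = 5.
kappa = lambda t: np.exp(t - 1.0)
d = 5
lams = [lambda_k(kappa, d, k) for k in range(8)]
print(lams)
# Trace identity: sum_k N(d, k) * lambda_k should be close to kappa(1) = 1.
print(sum(n_dk(d, k) * lam for k, lam in zip(range(8), lams)))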
§.§ Random Feature Kernels
We are particularly interested in kernels with an integral representation, because of their connection with random feature models (RFMs). Let the weight space be equipped with a weight distribution π, and let ϕ: ×→ be a feature function.
Consider the RFM:
f_m(x;θ) = ∑_j=1^m c_jϕ(x,v_j),
where θ = {c_j,v_j }_j=1^m and the inner weights v_1,⋯,v_m ∈ are sampled i.i.d. from a weight distribution π∈(). The output weights (c_1,⋯,c_m) ∈^m are the learnable parameters. Given the random weights v_1,⋯,v_m, functions of the form (<ref>) induce an empirical kernel k̂: ×→:
k̂(x,x') = 1/m∑_j=1^m ϕ(x,v_j)ϕ(x',v_j).
As the number of features m tends to infinity, the empirical kernel converges to
k(x,x') = ∫_ϕ(x,v)ϕ(x',v)π̣(v).
For this kernel, the corresponding RKHS norm _k admits following representation:
f__k :=inf_f=∫_ a(v)ϕ(·,v) π̣(v)a_π.
We refer to the kernel (<ref>) as the random feature kernel associated with the feature function ϕ.
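As a small numerical check (our addition), the empirical kernel k̂ above can be compared with a known limit: for ReLU features ϕ(x,v)=max(0, v^⊤x) with v uniform on the sphere, the limiting kernel k has the α=1 closed form quoted later in the lower-bound examples. The dimension and the number of features below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)
d, m = 10, 200_000

def sample_sphere(n, d, rng):
    g = rng.standard_normal((n, d))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

# Two fixed inputs on S^{d-1} and m random features v_1, ..., v_m ~ tau_{d-1}.
x, x_prime = sample_sphere(2, d, rng)
V = sample_sphere(m, d, rng)

# Empirical random-feature kernel (1/m) * sum_j phi(x, v_j) phi(x', v_j) with ReLU phi.
k_hat = float(np.mean(np.maximum(V @ x, 0.0) * np.maximum(V @ x_prime, 0.0)))

# Closed form of the limiting kernel for the ReLU (alpha = 1) case.
t = float(x @ x_prime)
k_limit = ((np.pi - np.arccos(t)) * t + np.sqrt(1.0 - t * t)) / (2.0 * np.pi * d)

print(k_hat, k_limit)   # agree up to Monte Carlo error of order 1/sqrt(m)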
Conversely, the following proposition establishes that any kernel can be expressed as a random feature kernel associated with some feature function:
For any k: ×→ and any π∈(), there exists a symmetric feature function ϕ: ×→ such that k(·,·) takes the form of random feature kernel (<ref>).
Moreover, for dot-product kernel k(x,x') = κ(x^⊤ x'), if = ^d-1 and π = τ_d-1, there exists an activation function σ: → such that ϕ(x,v) = σ(v^⊤ x) satisfies (<ref>).
Consider the spectral decomposition of k corresponding to the distribution π:
k(x,x') = ∑_i=1^∞μ_ie_i(x)e_i(x').
Choosing ϕ(x,v) = ∑_i=1^∞√(μ_i)e_i(x)e_i(v), we have
∫_ϕ(x,v)ϕ(x',v) π̣(v) = ∑_i=1^∞∑_j=1^∞√(μ_iμ_j)∫_e_i(x)e_i(v)e_j(x')e_j(v) π̣(v)
= ∑_i=1^∞μ_ie_i(x)e_i(x') = k(x,x').
Thus we complete the proof of the first part. Similarly, for dot product kernel k(x,x') = κ(x^⊤ x'), consider the spectral decomposition of κ:
κ(x^⊤ x')= ∑_k=0^∞∑_j=1^N(d,k)λ_k Y_k,j(x) Y_k,j(x') = ∑_k=0^∞λ_k N(d,k)P_k,d(x^⊤ x'),
where the last step uses (<ref>).
We complete the proof of the second part by choosing
σ(t) = ∑_k=0^∞√(λ_k)N(d,k)P_k,d(t).
Proposition <ref> implies that: 1) any kernel admits a random feature representation whose weight space coincides with the input domain and whose feature function is symmetric; 2) dot-product kernels further exhibit a dot-product structure in their feature functions. Consequently, we can focus on random feature kernels induced by symmetric features without loss of generality.
§ MAIN RESULTS
Let _k be the RKHS associated kernel k on the input domain . Suppose that the training data S_n={(x_i,y_i)}_i=1^n are generated by y_i = f(x_i) + ξ_i, where input data {x_i}_i are independently sampled from the input distribution ρ∈(), the target function f lies in the unit ball of the RKHS, i.e., f__k≤ 1, and {ξ_i}_i are sub-gaussian noise. We consider the problem of learning f from the training data S_n under the L^∞ metric.
In this section, we will show that the L^∞ error is closely related to the eigenvalue decay of k(·,·), which can be quantified by
Λ_k,π(n) = √(∑_i=n+1^∞μ_i^k,π).
In particular,
* First, for dot-product kernels on the unit sphere, the quantity Λ_k,τ_d-1(n) controls the L^∞ generalization error of the standard KRR estimator.
* Second, sup_π∈()Λ_k,π(n) provides a lower bound on the minimax error for learning functions in _k under the L^∞ metric.
§.§ Upper Bounds
Suppose k:^d-1×^d-1↦ is a dot-product kernel taking the form of (<ref>) and the input distribution is ρ = τ_d-1. For any decreasing function L : ^+ →^+ that satisfies Λ_k,τ_d-1(m) ≤ L(m), let q(d,L) = sup_k ≥ 1L(k)/L((d+1)k). Assume that the noise ξ_i's are mean-zero and ς-subgaussian. Consider the KRR estimator
f̂_n = _f̂__k≤ 1∑_i=1^n(f̂(x_i)-y_i)^2.
Then with probability at least 1-δ over the sampling of {(x_i,y_i)}_i=1^n, we have
f̂_n - f _∞≲inf_m ≥ 1[√(q(d,L)L(m)) + √(m)(ϵ(n,ς,δ) + e(n,δ))],
where ϵ(n,ς,δ) = (ς^2κ(1)(1+log(1/δ))/n)^1/4, e(n,δ) = √(κ(1)(log n + log(1/δ))/n).
To obtain the tightest bound, one can choose L(m) = Λ_k,τ_d-1(m). However, since the exact value of Λ_k,τ_d-1(m) is often unknown, the introduction of L(m) is mainly for the convenience of calculating the constant q(d,L).
The rotational invariance assumption plays a critical role in the analysis presented above, and the result may potentially be extended to settings where the densities of π and ρ are strictly positive. In addition, using localization techniques (see, e.g., <cit.>) to address noise-induced errors might yield tighter bounds.
However, our focus here is understanding how the error rate depends on the kernel spectrum and if the rate suffers from the curse of dimensionality, not obtaining optimal rates.
Take L(m)=Λ_k,τ_d-1(m)=∑_j=m+1^∞μ_j. If μ_j ∼ j^-(1+2β),[Here, just as an illustration, we assume μ_j ∼ j^-(1+2β), ignoring the effect of multiplicity.] then we roughly have that L(m)∼ m^-2β and q(d,L)∼ d^2β. Plugging them into (<ref>) yields
f̂_n - f _∞≲_β,ς,δinf_m≥ 1(d^β m^-β+ m^1/2n^-1/4)≲_β,ς,δ d^β/2β+1n^-β/2(2β+1),
where we hide constants that depend on β,ς and δ.
It is evident that if β does not depend on d, then the L^∞ error does not exhibit the curse of dimensionality, i.e., polynomially many samples are sufficient to recover the target function in the L^∞ metric.
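To make the norm-constrained estimator in the theorem above concrete, here is a minimal numerical sketch (ours, not from the original text). By the representer theorem, the constrained minimiser has the form f̂_n=∑_iα_i k(x_i,·), and the constraint f̂_n__k≤ 1 can be enforced by bisecting over the Lagrange multiplier of the equivalent ridge problem; the kernel, data-generating function, and tolerances below are arbitrary illustration choices.

import numpy as np

def constrained_krr(K, y, radius=1.0, tol=1e-8):
    """Solve min_alpha ||K alpha - y||^2 subject to alpha^T K alpha <= radius^2.
    The KKT conditions give alpha(lmbda) = (K + lmbda I)^{-1} y for some
    lmbda >= 0, and the RKHS norm sqrt(alpha^T K alpha) decreases in lmbda,
    so the multiplier can be found by bisection."""
    n = K.shape[0]
    solve = lambda lmbda: np.linalg.solve(K + lmbda * np.eye(n), y)
    norm = lambda alpha: float(np.sqrt(alpha @ K @ alpha))
    alpha = np.linalg.lstsq(K, y, rcond=None)[0]       # unconstrained minimiser
    if norm(alpha) <= radius:
        return alpha
    lo, hi = 0.0, 1.0
    while norm(solve(hi)) > radius:                    # bracket the multiplier
        hi *= 2.0
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm(solve(mid)) > radius else (lo, mid)
    return solve(hi)

# Toy usage on S^{d-1} with the arbitrary dot-product kernel kappa(t) = exp(t - 1).
rng = np.random.default_rng(0)
n, d = 300, 5
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
K = np.exp(X @ X.T - 1.0)
y = np.exp(X @ X[0] - 1.0) + 0.05 * rng.standard_normal(n)   # target f = k(x_1, .), which has unit RKHS norm
alpha_hat = constrained_krr(K, y)
print("RKHS norm of the estimate:", float(np.sqrt(alpha_hat @ K @ alpha_hat)))   # <= 1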
*Examples.
We now instantiate the upper bound established above for concrete examples. Specifically, we discuss how the smoothness of σ(·) affects the L^∞ learnability.
<cit.>
Suppose that for any m ∈_+, σ(·) satisfies
sup_t ∈ [-1,1] |σ^(m)(t)| ≲Γ(m+1).
Then, we have Λ_k,τ_d-1(m) ≲ 1/m.
Taking L(m) = C/m in Proposition <ref>, we immediately obtain that the L^∞ error scales as d^1/4n^-1/8, breaking the curse of dimensionality.
The condition in Proposition <ref> covers random feature kernels associated with smooth activation functions such as sigmoid, softplus, arctan, GELU <cit.>, and Swish/SiLU <cit.>. We refer to <cit.> for detailed verification that these activations satisfy the condition in Proposition <ref>.
§.§ Lower Bounds
Let s = ∫_ k(x,x) ρ̣(x) be the trace of the kernel k. Suppose that ξ_i ∼(0,ς). For any estimator f̂_n that maps the training data {x_i,y_i}_i=1^n to a function on the input domain, we have
inf_f̂_nsup_f__k≤ 1f̂_n - f _∞≳min(1, ς/√(s)) sup_π∈()Λ_k,π(n),
where the expectation is taken over the random sampling of {(x_i, y_i)} = {(x_i,f(x_i)+ξ_i)}_i=1^n.
The above theorem shows that, up to a multiplicative constant, the L^∞ minimax error is lower bounded by Λ_k,π(n). If μ_j∼ j^-1-β_d, then Λ_k,π(n)∼β_d^-1n^-β_d and the trace satisfies s∼β_d^-1. In this case, the lower bound becomes O(β_d^-1n^-β_d). If β_d=1/poly(d), then the lower bound exhibits a curse of dimensionality.
*Comparison with <cit.>. <cit.> considers the scenario without noise, i.e., y_i=f(x_i) for i=1,2,…, n and the inputs are deterministic.
The L^∞ lower bound in <cit.> is given by: for any x_1,x_2,…,x_n∈, it holds that
inf_f̂_nsup_f__k≤ 1f̂_n-f_∞≳sup_π∈()Λ_k,π(n),
where the infimum is taken over all possible estimators that use only information {(x_i, f(x_i))}_i=1^n.
Note that (<ref>) holds for any samples {x_i}_i=1^n. Hence, it implies that the same lower bound holds even if we can adaptively query input samples. However, in (<ref>), the worst-case target functions may depend on the training samples {x_i}_i=1^n. Thus, for different training samples, the hard target functions can be different. In contrast, Theorem <ref> holds for the standard setup of statistical learning and
the worst-case target functions in (<ref>) only depend on the input distribution instead of the specific training samples.
*Examples.
For dot-product kernels, when σ(·) is non-smooth, it is often the case that β_d=1/poly(d) (see spectral analyses of dot-product kernels in <cit.>). As concrete examples, consider the ReLU^α activation σ(t)=max(0,t)^α, where α∈_≥ 0. The Heaviside step and ReLU functions correspond to α=0 and α=1, respectively. The case α>1 also has many applications <cit.>. In particular, for α=0,1, <cit.> shows
κ(t)= 1/2 π(π-arccos (t)) if α=0
1/2 π d((π-arccos (t)) t+√(1-t^2)) if α=1.
Specifically,
<cit.> proves that there exists a constant C_α,d depending on 1/d polynomially such that
Λ_k,τ_d-1(n)≥ C_α, d n^-(2α+1)/d.
Combining with Theorem <ref>, it is evident that the L^∞ learning in the corresponding RKHS suffers from the curse of dimensionality.
We also note that the smoothness of κ does not necessarily imply the smoothness of the corresponding activation function σ(·). Indeed, it is possible for κ to be smooth while still suffering from the curse of dimensionality in L^∞ learning. For example, consider the Gaussian kernel k(x,x') = exp(-x-x'^2/2), where κ(t) = exp(t-1) satisfying |κ^(m)(t)|≲ 1 for any m ∈_+ and t ∈ [-1,1]. However, the following proposition shows that the spectral decay Λ_k,τ_d-1 does not admit a polynomial rate:
Consider the Gaussian kernel k(x,x') = exp(-x-x'^2/h^2). For any h>0,
there do not exist absolute constants α∈ and β > 0 such that
Λ_k,τ_d-1(m)≲d^α/m^β.
The proof is deferred to Appendix <ref>.
§ PROOF SKETCH
In this section, we present an overview of the proofs of Theorem <ref> and Theorem <ref>. At a high level, our proofs consist of three main components:
* We introduce a quantity called the L^∞-L^2 gap to measure the difference between the L^∞ norm and L^2 norm of functions in _k. We show that this quantity controls both the upper and lower bound for the error of L^∞ learning.
* We bridge the connection between the L^∞-L^2 gap and the approximation of parametric feature functions {ϕ(x,·)}_x ∈. This connection allows us to address the L^∞ learning problem by investigating the corresponding approximation problem.
* Building upon <cit.>, we relate the approximation of the function class {ϕ(x,·)}_x ∈ to the spectral decay of the kernel k.
Combining the above three steps, we obtain spectral-based upper and lower bound for the L^∞ learning of RKHS.
*Relating the L^∞ learnability to the L^∞-L^2 gap.
We define the following quantity to measure the difference between the L^∞ and L^2 norm for functions in _k: for any ν∈() and ϵ >0, let
Δ_ν,ϵ := sup_f__k≤ 1, f_ν≤ϵf_∞.
The following proposition demonstrates that the quantity defined in (<ref>) serves as both an upper and lower bound for the error of L^∞ learning:
(1). Suppose that k is a product kernel with the form
(<ref>) on ^d-1 and ρ = τ_d-1. Assume the noise ξ_i's are mean-zero and ς-subgaussian. Consider the KRR estimator
f̂_n = _f̂__k≤ 1∑_i=1^n(f̂(x_i)-y_i)^2.
Then with probability at least 1-δ over the sampling of {(x_i,y_i)}_i=1^n, we have
f̂_n - f _∞≲Δ_ρ̂_n, ϵ(n,ς,δ),
where ρ̂_n = 1/n∑_i=1^n δ_x_i is the empirical measure and ϵ(n,ς,δ) = (ς^2κ(1)(1+log(1/δ))/n)^1/4.
(2). Suppose that ξ_i ∼(0,ς). Then we have
inf_f̂_nsup_f__k≤ 1f̂_n - f _∞≳Δ_ρ,ς n^-1/2
where the infimum is taken over all possible estimators and the expectation is taken over the sampling of {(x_i, y_i)}_i=1^n.
The proof of Proposition <ref> is deferred to Appendix <ref>.
*Relating the L^∞-L^2 gap to the approximation of parametric feature functions.
Given a feature function ϕ: ×→, consider a class of parametric functions Φ := {ϕ(x,·): x ∈}. The closure of the convex, symmetric hull of Φ is:
𝒢_ϕ ={∑_i=1^m a_j ϕ(x_i,·): ∑_i=1^m | a_j | ≤ 1, x_j ∈, m ∈ℕ^+}
The following proposition shows that the L^∞-L^2 gap is closely related to the linear approximability of functions in _ϕ:
Recall that, by Proposition <ref>, for any π∈() there exists a ϕ:×↦ such that the kernel k can be written in the form
k(x,x') = ∫_ϕ(x,v)ϕ(x',v) π̣(v).
For any ν∈() and ϵ >0, we have
sup_f__k≤ 1, f_ν≤ϵf_∞ = sup_g ∈_ϕinf_b ∈ L^2(ν)g - ∫_ b(x)ϕ(x,·) ν̣(x) _π + ϵb_ν.
The proof is deferred to Appendix <ref>. By this proposition, we have the following observations.
* Regarding the upper bound, we are interested in the case where ν = ρ̂_n = 1/n∑_i=1^nδ_x_i. Then, the right hand side of (<ref>) becomes
sup_g ∈_ϕinf_c∈^ng-1/n∑_i=1^n c_iϕ(x_i,·) _π + ϵ/√(n)c,
which could be viewed as the error of approximating _ϕ with random features.
* As for the lower bound, we are interested in the case where ν = ρ. Let k̃(v,v') = ∫_ϕ(x,v)ϕ(x,v') ρ̣(x) and f_b=∫_ b(x)ϕ(x,·) ρ̣(x). By (<ref>), f_b__k̃≤b_ρ. Then, the right hand side of (<ref>) is equivalent to
sup_g ∈_ϕinf_h ∈_k̃g-h_π + ϵh__k̃.
Hence, the right hand side of (<ref>) becomes the error of approximating functions in _ϕ with the RKHS _k̃.
*Relating the approximation of parametric feature functions to the spectral decay of kernel.
The linear approximation of the function class _ϕ with optimal features is extensively studied in <cit.>, where both lower and upper bounds are established by leveraging the spectral decay. Specifically, <cit.> focuses on linear approximation with optimal features, which are the spherical harmonics for dot-product kernels. However, the problem we are concerned with (the right hand side of (<ref>)) is the linear approximation with random features or with an RKHS, rather than with the optimal features. To fill this gap, we provide the following proposition.
(1). Suppose k:^d-1×^d-1↦ is a dot-product kernel taking the form
k(x,x') =κ(x^⊤ x') = ∫_^d-1σ(v^⊤ x)σ(v^⊤ x') τ̣_d-1(v).
For any decreasing function L : ^+ →^+ that satisfies Λ_k,τ_d-1(m) ≤ L(m), let q(d,L) = sup_k ≥ 1L(k)/L((d+1)k). For any ϵ > 0, suppose that x_i iid∼τ_d-1, with probability at least 1-δ, we have
sup_g ∈_ϕinf_c∈^n g- ∑_i=1^n c_iσ(x_i^⊤·) _τ_d-1 + ϵ/√(n)c≲inf_m ≥ 1[√(q(d,L)L(m)) + √(m)(ϵ+e(n,δ)) ],
where e(n,δ) = √(κ(1)(log n + log(1/δ))/n).
(2). Recall that, by Proposition <ref>, for any π∈() there exists a ϕ:×↦ such that the kernel k can be written in the form
k(x,x') = ∫_ϕ(x,v)ϕ(x',v) π̣(v).
Let s = ∫_k(x,x)ρ̣(x). Define k̃(v,v') = ∫_ϕ(x,v)ϕ(x,v') ρ̣(x) and let _k̃ be the corresponding RKHS. For any ϵ > 0 and any positive integer n, it holds that
sup_g ∈_ϕinf_h ∈_k̃g-h_π + ϵh__k̃≥min(1, ϵ√(n/s)) Λ_k,π(n).
The proof is deferred to Appendix <ref>.
Finally, the proofs of Theorem <ref> and Theorem <ref> can be completed by combining Proposition <ref>, Proposition <ref>, and Proposition <ref>. It is worth noting that the choice of π in Proposition <ref> and Proposition <ref> is arbitrary. Therefore, for any π∈(), we obtain a lower bound, and we can take the supremum over π to obtain the final lower bound.
§ CONCLUSION
In conclusion, we present a spectral-based analysis of the L^∞ learnability of RKHS. The upper bound result demonstrates that standard KRR algorithms can effectively avoid the curse of dimensionality when dealing with kernels with smooth activation functions. This means that L^∞ learning in the RKHS remains feasible with polynomial sample complexity in these cases. On the other hand, the lower bound analysis considers the standard statistical learning setting, where the hardness of functions depends on the input distribution rather than on the specific training samples. The results reveal that L^∞ learning in the RKHS suffers from the curse of dimensionality when the kernel spectrum decays slowly, specifically as λ_k ∼ k^-1-β with β = 1/poly(d). This phenomenon is observed for the dot-product kernel associated with the ReLU activation function. These findings contribute to a better understanding of the capabilities and limitations of kernel-based learning algorithms in safety- and security-critical applications.
§ PROOF OF PROPOSITION <REF>
*Part I: The upper bound.
We firstly bound the empirical loss 1/n∑_i=1^n(f̂(x_i)-f(x_i))^2. By the optimality of the estimator f̂, we have
∑_i=1^n(f̂(x_i) - f(x_i) + ξ_i)^2 ≤∑_i=1^n ξ_i^2.
Hence,
1/n∑_i=1^n (f̂(x_i) - f(x_i))^2 ≤2/n∑_i=1^n ξ_i(f(x_i) - f̂(x_i))
≤2/nsup_f__k≤ 2 ∑_i=1^n ξ_if(x_i)
≤2/nsup_f__k≤ 2 ⟨ξ_ik(x_i,·) , f⟩__k (use f(x)=⟨ f, k(x,·)⟩__k)
≤4/n∑_i=1^n ξ_i k(x_i,·) __k
= 4/n√(∑_i=1^n∑_j=1^n ξ_iξ_j k(x_i,x_j))
≤4/n√(z^⊤ K z),
where K ∈^n × n is the kernel matrix given by K_ij = k(x_i,x_j) and z=(ξ_1,ξ_2,…,ξ_n). Note that
[z^⊤ K z] =∑_i=1^n [ξ_i^2]k(x_i,x_i)≤ nς^2κ(1).
Note that |K_ij| ≤κ(1). By the Hanson-Wright inequality <cit.>, with probability at least 1-δ over the noise {ξ_i}_i=1^n, we have
z^⊤ K z ≲ nς^2κ(1)( 1 + log(1/δ))
and thus
√(1/n∑_i=1^n (f̂(x_i) - f(x_i) )^2)≲ϵ(n,ς,δ):= (ς^2κ(1)(1+log(1/δ))/n)^1/4.
Then, by the definition of the L^∞-L^2 gap, the L^∞ estimation error is bounded by
f̂ - f_∞≲sup_f__k≤ 1, f_ρ̂_n≤ϵ(n,ς,δ) f_∞ = Δ_ρ̂_n,ϵ(n,ς,δ) .
We complete the proof.
*Part II: The lower bound.
For any function f ∈_k, let _f^n denote the law of {(x_i,f(x_i)+ξ_i)}_i=1^n.
For a given estimator f̂_n: (×)^n →_k and a target function f ∈_k, the performance of f̂_n is measured by
d(f̂_n, f) := f̂_n({ (x_i,f(x_i)+ ξ_i)}_i=1^n) - f _∞.
We apply the two-point Le Cam's method <cit.> to derive the minimax lower bound. For any f_1,f_2 ∈{f∈_k: f__k≤ 1}, we have
inf_f̂_nsup_f__k≤ 1 [d(f̂_n,f)] ≥inf_f̂_nmax{ [d(f̂_n,f_1)], [d(f̂_n,f_2)] }
≥f_1-f_2_∞/2(1- TV(_f_1^n,_f_2^n) ).
Applying the Pinsker's inequality [For two probability distributions P,Q defined on the same domain, P-Q_TV≤√(KL(P||Q)/2).] <cit.>, we obtain
inf_f̂_nsup_f__k≤ 1 [d(f̂_n,f)] ≥f_1-f_2_∞/2(1- √(KL(_f_1^n_f_2^n)/2)) .
The KL divergence between _f_1^n and _f_2^n can be computed by
KL(_f_1^n _f_2^n ) = __f_1^n[log_f_1^n /_f_2^n]
= __f_1^n[log( ∏_i=1^n ρ(x_i)exp(-ξ_i^2/2ς^2) /∏_i=1^n ρ(x_i)exp(-f_2(x_i) +ξ_i - f_1(x_i)^2/2ς^2)) ]
= _x_i∼ρ, ξ_i∼(0,ς^2I_d)[∑_i=1^n (f_2(x_i)-f_1(x_i)+ξ_i^2 - ξ_i^2/2ς^2)]
=n/2ς^2f_2 - f_1_ρ^2.
Combining (<ref>) and (<ref>), we arrive at
inf_f̂_nsup_f__k≤ 1[d(f̂_n,f)] ≥f_1-f_2_∞/2(1 - √(n)f_2-f_1_ρ/2ς)
There must exist a function f̃∈_k with f̃__k≤ 1 such that
f̃_ρ≤ς/√(n), f̃_∞≥2/3Δ_ρ,ς n^-1/2.
Substituting f_1 = 0 and f_2 = f̃ into (<ref>) yields
inf_f̂_nsup_f__k≤ 1 [d(f̂_n,f)] ≥1/6Δ_ρ,ς n^-1/2.
We complete the proof.
§ PROOF OF PROPOSITION <REF>
We firstly need the following lemma.
Let ϕ:×↦ and ν,π∈() and ϵ > 0. Define
A = {a ∈ L^2(π): a_π≤ 1, ∫_ a(v)ϕ(·,v) π̣(v) _ν≤ϵ}.
For any function g ∈ L^2(π), we have
sup_a ∈ A∫_ a(v)g(v) π̣(v) = inf_h ∈ L^2(ν)(g - ∫_ h(x) ϕ(x,·) ν̣(x) _π + ϵh_ν).
Step I:
On the one hand, we have for any h ∈ L^2(ν) that
g - ∫_ h(x) ϕ(x,·) ν̣(x) _π + ϵh_ν
= sup_a_π≤ 1⟨ a, g - ∫_ h(x) ϕ(x,·) ν̣(x)⟩_π + ϵh_ν
= sup_a_π≤ 1[ ∫_ a(v)g(v) π(v) - ∫_ h(x) ( ∫_ a(v)ϕ(x,v) π̣(v) )ν̣(x) ] + ϵh_r',ν
≥sup_a∈ A[ ∫_ a(v)g(v) π(v) -
h_ν∫_ a(v)ϕ(·,v) π̣(v)_ν] + ϵh_ν
≥sup_a∈ A∫_ a(v)g(v) π(v),
where the last step uses the property that ∫_ a(v)ϕ(·,v) π̣(v)_ν≤ϵ for a∈ A.
Hence,
sup_a ∈ A∫_ a(v)g(v) π̣(v) ≤inf_h ∈ L^2(ν)( g - ∫_ h(x) ϕ(x,·) ν̣(x) _π + ϵh_ν).
Step II: On the other hand, consider the functional p: L^2(π)↦ given by
p(l) := inf_h ∈ L^2(ν)(l - ∫_ h(x) ϕ(x,·) ν̣(x) _π + ϵh_ν).
It is not hard to verify that
* p is sublinear on L^2(π), i.e., p(l_1+l_2)≤ p(l_1)+p(l_2),
* p(λ l)=λ p(l) for any λ≥ 0 and l∈ L^2(π).
For a given g∈ L^2(π), let G=span{g}. By the Hahn-Banach Theorem <cit.>, there exists a linear functional T on L^2(π) such that
* T(g) = p(g), i.e.,
T(g) = inf_h ∈ L^2(ν)g - ∫_ h(x) ϕ(x,·) ν̣(x) _π + ϵh_ν;
* T(l) ≤ p(l) for any l ∈ L^2(π), i.e.,
T(l) ≤inf_h ∈ L^2(ν)l - ∫_ h(x) ϕ(x,·) ν̣(x) _π + ϵh_ν, for any l ∈ L^2(π).
Taking h=0 in (<ref>) gives
T(l) ≤l_π for any l ∈ L^2(π).
Hence, T_op≤ 1. By the Riesz representation theorem, there exists a_g ∈ L^2(π) with a_g_π≤ 1 such that
T(l) = ∫_ a_g(v)l(v) π̣(v), for any l ∈ L^2(π),
where a_g(·) depends on g because T does.
Noticing that for any h∈ L^2(ν),
∫_ h(x)(∫_ a_g(v)ϕ(x,v)π̣(v))ν̣(x) = T(∫_ h(x)ϕ(x,·) ν̣(x) ) ≤ p(∫_ h(x)ϕ(x,·) ν̣(x) ) ≤ϵh_ν,
we can conclude that a_g(·) satisfies
∫_ a_g(v)ϕ(·,v)π̣(v)_ν≤ϵ,
implying
a_g∈ A.
Thus,
sup_a ∈ A∫_ a(v)g(v) π̣(v) ≥∫_ a_g(v)g(v) π̣(v)=T(g)=p(g)
=inf_h ∈ L^2(ν)(g - ∫_ h(x) ϕ(x,·) ν̣(x) _π + ϵh_ν)
By combining (<ref>) and (<ref>), we complete the proof.
Proof of Proposition <ref>.
We note that the function class _ϕ could be written as
_ϕ = {∫_a(x)ϕ(x,·)μ̣(x): μ∈(), μ_TV≤ 1 }
and the RKHS ball _k^1: = {f ∈_k: f__k≤ 1} could be written as
_k^1 = {∫_ a(v)ϕ(·,v) π̣(v): a_π≤ 1 }.
Using the duality form f_∞ = sup_μ∈(), μ_TV≤ 1∫_ f(x) μ̣(x) and combining Lemma <ref>, we have
sup_f__k≤ 1,f_ν≤ϵf_∞ = sup_f__k≤ 1,f_ν≤ϵsup_μ∈(), μ_TV≤ 1∫_ f(x)μ̣(x)
= sup_μ∈(), μ_TV≤ 1sup_a ∈ A∫_(∫_ a(v)ϕ(x,v) π̣(v) ) μ̣(x)
=sup_μ∈(), μ_TV≤ 1sup_a ∈ A∫_ a(v) (∫_ϕ(x,v) μ̣(x) )π̣(v)
(i)=sup_μ∈(), μ_TV≤ 1inf_h ∈ L^2(ν)∫_ϕ(x,·)μ̣(x) - ∫_ h(x)ϕ(x,·)ν̣(x)_π + ϵh_ν
= sup_g ∈_ϕinf_h ∈ L^2(ν)g - ∫_ h(x)ϕ(x,·)ν̣(x)_π + ϵh_ν.
where (i) comes from Lemma <ref>. We complete the proof.
§ PROOF OF PROPOSITION <REF>
§.§ The Upper Bound
Our proof needs the following two lemmas related to the linear approximation of parametric function and the random feature approximation of RKHS.
For any x ∈^d-1, there exists a function h ∈_k such that h__k≤√(m) and
σ_x - h _τ_d-1≲√(q(d,L)L(m)),
where σ_x :^d-1→ is the single neuron v ↦σ(v^⊤ x).
Let m_k = ∑_i=0^k N(d,i) and assume that m ∈ [m_k,m_k+1-1]. We choose h̅ as the optimal approximation of σ(x^⊤·) in the span of {Y_i,j}_1≤ i ≤ k, 1≤ j ≤ N(d,i), that is, h̅ = ∑_i=1^k ∑_j=1^N(d,i) c_i,jY_i,j with
{c_i,j} = _{α_i,j}_1≤ i ≤ k, 1≤ j ≤ N(d,i)σ_x - ∑_i=1^k ∑_j=1^N(d,i)α_i,jY_i,j_τ_d-1^2
Now we verify that h̅(·) satisfies the conditions in Lemma <ref>. Recall that the spherical harmonics are related to the Legendre polynomials:
∑_j=1^N(d,k)Y_k,j(v)Y_k,j(v') = N(d,k)P_k(v^⊤ v'), ∀ v,v' ∈^d-1.
Since {Y_i, j}_0 ≤ i ≤ k, 1 ≤ j ≤ N(d, i) is orthonormal in L^2(τ_d-1), we have
inf _{c_i, j}_0 ≤ i ≤ k, 1 ≤ j ≤ N(d, i)σ_x-∑_i=0^k ∑_j=1^N(d, i) c_i, j Y_i, j_τ_d-1^2=σ_x_τ_d-1^2-∑_i=0^k ∑_j=1^N(d, i)⟨ Y_i, j, σ_x⟩_τ_d-1^2
Hence,
σ_x_τ_d-1^2-∑_i=0^k ∑_j=1^N(d, i)⟨ Y_i, j, σ_x⟩_τ_d-1^2
=∫_𝕊^d-1|σ(x^⊤ v)|^2 dτ_d-1(v)-∑_i=0^k ∫_𝕊^d-1∫_𝕊^d-1σ(x^⊤ v) σ(x^⊤ v^') ∑_j=1^N(d, i) Y_i, j(v) Y_i, j(v^') dτ_d-1(v) dτ_d-1(v^')
=∫_𝕊^d-1|σ(x^⊤ v)|^2 dτ_d-1(v)-∑_i=0^k N(d, i) ∫_𝕊^d-1∫_𝕊^d-1σ(x^⊤ v) σ(x^⊤ v^') P_i(v^⊤ v^') dτ_d-1(v) dτ_d-1(v^')
By rotation invariance, we have
∫_^d-1 |σ(x^⊤ v) |^2τ̣_d-1(x) = ∫_^d-1∫_^d-1 |σ(x^⊤ v) |^2τ̣_d-1(x) τ̣_d-1(v)
= ∫_^d-1κ(v^⊤ v) τ̣_d-1(v) = ∑_i=0^∞ N(d,i)λ_i,
and
∑_i=0^k N(d, i) ∫_𝕊^d-1∫_𝕊^d-1σ(x^⊤ v) σ(x^⊤ v^') P_i(v^⊤ v^') dτ_d-1(v) dτ_d-1(v^')
= ∑_i=0^k N(d, i) ∫_𝕊^d-1∫_𝕊^d-1∫_𝕊^d-1σ(x^⊤ v) σ(x^⊤ v^')dτ_d-1(x) P_i(v^⊤ v^') dτ_d-1(v) dτ_d-1(v^')
= ∑_i=0^k N(d, i) ∫_𝕊^d-1∫_𝕊^d-1κ(v^⊤ v')P_i(v^⊤ v') dτ_d-1(v) dτ_d-1(v^')
= ∑_i=0^k∑_j=1^N(d,i)∫_^d-1∫_^d-1κ(v^⊤ v') Y_i,j(v') τ̣_d-1(v')
= ∑_i=0^k N(d,i)λ_i,
where the last equation is because {Y_i,j}_1≤ i,j≤ n are the eigenfunctions of the kernel operator:
∫_^d-1κ(v^⊤ v')Y_i,j(v') τ̣_d-1(v') = λ_iY_i,j(v).
Combining (<ref>) and (<ref>), we arrive at
σ_x - h _τ_d-1^2 = ∑_i=k+1^∞ N(d,i)λ_i ≤ L(m_k).
Note that m_k+1/m_k≤ d+1 (see <cit.> for details), so we have
L(m_k) ≤L(m_k+1)/L(m_k)L(m) ≤ q(d,L)L(m).
Thus we complete the proof of (<ref>). To bound the RKHS norm of h̅, we note that the optimal approximation is given by
c_i,j = ⟨σ_x, Y_i,j⟩_τ_d-1.
Similar to the proof of (<ref>), we have
∑_i=1^k ∑_j=1^N(d,i)c_i,jY_i,j__k^2 = ∑_i=0^k 1/λ_i∑_j=1^N(d,i)⟨ Y_i,j, σ_x ⟩_τ_d-1^2
= ∑_i=0^k 1/λ_iN(d, i) ∫_𝕊^d-1∫_𝕊^d-1κ(x^⊤ x') P_i(x^T x^') dτ_d-1(x) dτ_d-1(x^')
= ∑_i=1^k N(d,i) = m_k ≤ m.
We complete the proof.
Suppose that x_i iid∼τ_d-1. Then, with probability at least 1-δ, we have
sup_h__k≤ 1inf_c_1,⋯,c_nh-1/n∑_i=1^n c_iσ_x_i_τ_d-1 +ϵ/√(n)c≤ϵ + e(n,δ),
where e(n,δ) = √(κ(1)(log n + log(1/δ))/n).
We directly apply <cit.>. With probability at least 1-δ over samples {x_i}_i=1^n, we have
sup _h__k≤ 1inf_c^2 ≤ 4nh -1/n∑_i=1^n c_i σ_x_i_τ_d-1≤ 4 t
where t satisfies that
5 d(t) log(16 d(t)/δ)≤ n
and d(t)=sup _x ∈^d-1⟨σ_x,(Σ+t I)^-1σ_x ⟩_τ_d-1, in which Σ is a self-adjoint, positive semi-definite operator on L^2(τ_d-1) (see the detailed definition in <cit.>).
Notice that
d(t) ≤ t^-1sup _x ∈^d-1⟨σ_x, σ_x ⟩_τ_d-1≤ t^-1κ(1).
From (<ref>),(<ref>), we have
sup _h__k≤ 1inf_c^2 ≤ 4nh-1/n∑_i=1^n c_iσ_x_i_τ_d-1^2 ≲κ(1)/n[1+log(n/δ)].
The above implies
sup_h__k≤ 1inf_c_1,⋯,c_nh - ∑_i=1^n c_iσ_x_i_τ_d-1 + ϵ/√(n)c≲(ϵ + √(κ(1)(log n + log(1/δ))/n)).
We complete the proof.
*Proof of the first part(upper bound) of Proposition <ref>.
By triangle inequality, for any m ∈_+ we could decompose the left hand side of (<ref>) as
sup_g ∈_ϕinf_c∈^ng - ∑_i=1^n c_iσ_x_i_τ_d-1 + ϵ/√(n)c
≤sup_g ∈_ϕinf_h__k≤√(m)g - h _τ_d-1
+ sup_h__k≤√(m)inf_c_1,⋯,c_nh-1/n∑_i=1^n c_iσ_x_i_τ_d-1 +ϵ/√(n)c
First, we consider the term (<ref>). By Lemma <ref>, for any x ∈^d-1, there exists a function h_x ∈_k such that h_x__k≤√(m) and σ_x - h_x_τ_d-1≤√(q(d,L)L(m)). For any g ∈_ϕ, there exists a signed measure μ∈(), μ_TV≤ 1, such that g= ∫_σ_x μ̣(x). Let h = ∫_ h_x μ̣(x); by Jensen's inequality, h satisfies
h__k≤∫_^d-1h_x__kμ̣(x) ≤√(m) and
∫_^d-1σ_x μ̣(x) - h _τ_d-1^2 ≤∫_^d-1σ_x - h_x _τ_d-1^2 μ̣(x) ≤ q(d,L)L(m).
Thus (<ref>) is bounded by √(q(d,L)L(m)). Next, we apply Lemma <ref> to bound the second term (<ref>)
sup_h__k≤√(m)inf_c_1,⋯,c_nh - 1/n∑_i=1^n c_iσ_x_i_τ_d-1 + ϵ/√(n)c≲√(m)(ϵ + √(κ(1)(log n + log(1/δ))/n)).
Combining (<ref>) and (<ref>) and using the arbitrariness of m, we complete the proof.
§.§ The Lower Bound
Our proof needs the following two lemmas related to the linear approximation of RKHS and parametric feature functions
<cit.>
For any basis function φ_1,⋯,φ_n ∈ L^2(π), we have
sup_g ∈_ϕinf_c_1,⋯,c_ng - ∑_j=1^n c_j φ_j _π≥Λ_k,π(n).
For a kernel k̃ on , let _k̃ be the corresponding RKHS, and denote by s̃ = ∫_k̃(x,x) π̣(x) the trace of k̃ with respect to π.
Let {e_j^k̃,π} be the eigenfunctions of the kernel k̃ with respect to π. We have
sup_h__k̃≤ 1inf_c_1,⋯,c_n h -∑_j=1^n c_j e_j^k̃,π_π≤√(s̃/n)
Since {e_j^k̃,π} are orthonormal in L^2(π), for any h ∈_k̃ with h__k̃≤ 1, the optimal approximation is given by c_j = ⟨ h,e_j^k̃,π⟩_π. Thus
inf_c_1,⋯,c_nh - ∑_j=1^n c_je_j^k̃,π_π^2 = h - ∑_j=1^n ⟨ h,e_j^k̃,π⟩_π e_j^k̃,π_π^2
= ∑_j=n+1^∞⟨ h, e_j^k̃,π⟩_π^2.
By the definition of RKHS norm, we have
h__k̃^2=∑_j=1^∞1/μ_j^k̃,π⟨ h,e_j^k̃,π⟩_π^2 ≤ 1.
The right hand side of (<ref>) is bounded by
∑_j=n+1^∞⟨ h, e_j^k̃,π⟩_π^2 ≤μ_n^k̃,π∑_j=n+1^∞1/μ_j^k̃,π⟨ h, e_j^k̃,π⟩_π^2 ≤μ_n^k̃,π∑_j=1^∞1/μ_j^k̃,π⟨ h, e_j^k̃,π⟩_π^2 ≤μ_n^k̃,π,
where the last step is due to (<ref>). Thus, we complete the proof by noticing that μ_n^k̃,π≤s̃/n, which is due to
s̃ =∑_j=1^∞μ_j^k̃,π≥∑_j=1^n μ_j^k̃,π≥ n μ_n^k̃,π.
*Proof of the second part(lower bound) of Proposition <ref>.
By the triangle inequality, we decompose (<ref>) as
sup_g ∈_ϕinf_h ∈_k̃g- h _π + ϵh__k̃
≥sup_g ∈_ϕinf_c∈^ng - ∑_j=1^n c_j e_j^k̃,π_π
-( sup_h ∈_k̃inf_c∈^n h-∑_j=1^n c_j e_j^k̃,π_π - ϵh__k̃)
Applying Lemma <ref>, the first term (<ref>) can be bounded by
sup_h ∈_ϕinf_c∈^nh - ∑_j=1^n c_j e_j^k̃,π_π≥Λ_k,π(n).
To tackle the second term (<ref>), we use Lemma <ref> to obtain
sup_h ∈_k̃inf_c_1,⋯,c_n h -∑_j=1^n c_j e_j^k̃,π_π - ϵh__k̃≤ 0, ∀ϵ≥√(s̃/n),
where s̃ = ∫_k̃(v,v)π̣(v) be the trace of k̃. Combining the bound for (<ref>) and (<ref>), we obtain
sup_g ∈_ϕinf_h ∈_k̃g- h _π + ϵh__k̃≥Λ_k,π(n), ∀ϵ≥√(s̃/n).
Therefore, for any ϵ > 0,
sup_g ∈_ϕinf_h ∈_k̃g- h _π + ϵh__k̃ ≥min( 1, ϵ√(n/s̃))( sup_g ∈_ϕinf_h ∈_k̃g- h _π + √(s̃/n)h__k̃)
≥min( 1, ϵ√(n/s̃))Λ_k,π(n)
Finally, we complete the proof by noticing that
s =∫_ k(x,x) ρ̣(x) =∫_∫_ϕ(x,v)ϕ(x,v) π̣(v) ρ̣(x) = s̃.
§ PROOF OF PROPOSITION <REF>
We first choose a fixed j ∈ℕ^+ such that β j > α -1. By Theorem 2 in <cit.> and Stirling's formula, we know that
λ_j ∼ (2e/h^2)^j d^d/2+1/2/(2j+d-2)^j+d/2-1/2∼ d^1-j.
Then, again using Stirling's formula, we have
N(d,j)λ_j = 2j+d-2/jj+d-3d-2λ_j
∼j+d/j(j+d)^j+d-5/2/d^d-3/2j^j-3/2 d^1-j
∼ d.
Let
m_j = ∑_i=0^jN(d,i)= ∑_i=0^j[Γ(i+d)/Γ(d)Γ(i+1) - Γ(i+d-2)/Γ(d)Γ(i-1)]
= Γ(j+d)/Γ(d)Γ(j+1) + Γ(j+d-1)/Γ(d)Γ(j)∼ d^j.
If inequality (<ref>) holds, then
1 ≳ d^-αm_j^βΛ_k,τ_d-1(m_j) ≳ d^β j - α N(d,j+1)λ_j+1∼ d^β j - α + 1,
which is a contradiction.
|
http://arxiv.org/abs/2306.10774v1
|
20230619084231
|
The Illusive Slump of Disruptive Patents
|
[
"Jeffrey T. Macher",
"Christian Rutzer",
"Rolf Weder"
] |
econ.GN
|
[
"econ.GN",
"q-fin.EC"
] |
The Illusive Slump of Disruptive Patents
Jeffrey T. Macher, Christian Rutzer, Rolf Weder
========================================================================================================================================================================================================================================================================================
§ HIGHLIGHTS
* We challenge the identified slowdown in disruptive patents by Park et al. (Nature, 2023) and extend the analysis to 2016.
* The slowdown is mostly a result of truncation bias.
* The number of highly disruptive patents has increased.
* Our analysis suggests caution against premature research policy changes.
Despite tremendous growth in the volume of new scientific and technological knowledge, the popular press has recently raised concerns that disruptive innovative activity is slowing. These dire prognoses were mainly driven by <cit.>, a Nature publication that uses decades of data and millions of observations coupled with a novel quantitative metric (the CD index) that characterizes innovation in science and technology as either consolidating or disruptive. We challenge the <cit.> methodology and findings, principally around concerns of truncation bias and exclusion bias. We show that 88 percent of the decrease in disruptive patents over 1980-2010 reported by the authors can be explained by their truncation of all backward citations before 1976. We also show that this truncation bias varies by technology class. We update the analysis to 2016 and account for a change in U.S. patent law that allows for citations to patent applications in addition to patent grants, which is ignored by the authors in their analysis. We show that the number of highly disruptive patents has increased since 1980—particularly in IT technologies. Our results suggest caution in using the <cit.> methodology as a basis for research and decision making in public policy, industry restructuring or firm reorganization aimed at altering the current innovation landscape.
Keywords: Disruptive Innovation, Truncation Bias, Exclusion Bias, U.S. Patent Law Change
JEL: O30, O32, O33
§ INTRODUCTION
Continued innovation in science and technology is considered a bedrock for and driving force of growth and prosperity in most economies. A paper co-authored by Park, Leahey and Funk entitled, “Papers and Patents are Becoming Less Disruptive Over Time” was recently published in Nature (2023). Given its provocative title and findings, as well as the topic examined, the paper attracted significant and global media attention. <cit.> has been featured in hundreds of international newspapers and magazines <cit.>: for example, <cit.> emphasized in a report on the changing nature of science that "Papers and Patents are Becoming Less Disruptive", indicating in a subtitle that "why that is, is a mystery"; <cit.> further stated: "What Happened to All of Science's Big Breakthroughs?"; and the <cit.> noted: "Science is Losing its Ability to Disrupt".
One reason for this attention is that the <cit.> results show marked declines in disruptive innovation over time. The authors state in the abstract that their "results suggest that slowing rates of disruption may reflect a fundamental shift in the nature of science and technology" (p. 138), and note in the conclusion that "this trend is unlikely to be driven by changes in citation practices or the quality of published work. Rather, the decline represents a substantive shift in science and technology, one that reinforces concerns about slowing innovative activity. We attribute this trend in part to scientists' and inventors' reliance on a narrower set of existing knowledge" (p. 142).
Another reason for this attention is that the <cit.> results have implications regarding the organization of the entire science and technology innovation process—from government research labs and universities to private and public enterprises. The authors' methodology and findings unsurprisingly piqued the interests of researchers, commentators, and journalists on a global scale who offered myriad explanations, such as increased pressures on scientists to apply for large (interdisciplinary) projects, rising administrative burdens, declining basic research funding, increasing risk-aversion among scientists, and pressures to publish rapidly <cit.>. Still other researchers raised concerns around the <cit.> methodology as a measure of disruptive innovation <cit.>.
An important question arises, however, as to whether the argument made by <cit.> is accurate. We have major concerns around their methodological approach regarding truncation and exclusion bias related to measurement. We show using patent data that the worrisome result suggested by the authors is instead mainly a consequence of the omission of citations to older innovations: i.e., the authors artificially truncate all backward citations of patents before 1976. We also show using patent data that substantive differences in the results occur from not considering patent law changes: i.e., the authors neglect backward citations of patents published after 2000 to patent applications. Both of these biases have large measurement effects on their proposed CD index and direct implications on the accuracy of their findings and conclusions.
In what follows, we first compare the <cit.> methodology with truncation to our own methodology with no truncation. We explain why truncation bias can have such a significant impact, and then assess the effects of truncation bias using patent data. We then update the patent data over 2011-2016 and take into account a change in patent law that allows for citations to patent applications as well as to patent grants. We show that the number of highly disruptive patents has increased since 2005. We conclude with a discussion of our main findings and the dangers of measurement bias in influencing public policy, industry organization, and firm organization decisions and interventions.
§ MEASURING PATENT DISRUPTIVENESS
The CD index was developed by <cit.> and used by <cit.> to characterize whether a patent or scientific publication is considered more consolidating (i.e., building upon previous research and reinforcing the status quo) or more disruptive (i.e., obsolescing previous research and pushing into new directions).[Since its introduction by <cit.> the index has been used in a wide variety of analyses to capture the disruptive content of patents <cit.> and of scientific papers <cit.>.] It is based on the idea that a patent or scientific publication is disruptive if "the subsequent work that cites it is less likely to also cite its predecessors" <cit.>. The CD index ranges from -1 (consolidating) to 1 (disruptive). The measure uses five-year post-publication windows in its construction, referred to as CD_5. For scientific papers, it starts in 1945; for patents, it starts in 1980.
Fig. <ref> plots two average CD_5 indices for approximately 3.66 million patents over 1980-2010 from the United States Patent and Trademark Office (USPTO): one based on the <cit.> methodology; the other based on our methodology.[As in <cit.>, we consider only utility patents and draw our data from PatentsView (version February 21, 2023). The database contains all US patents that have been published between 1976 and September 29, 2022. While <cit.> focus on patents of the aggregate NBER technology fields "Chemical", "Computer and Communications", "Pharmaceutical and Medical", "Electrical and Electronic", and "Mechanical", which encompasses a total of 3,046,672 granted utility patents over 1980-2010, our main analysis utilizes all granted utility patents published by the USPTO during the same period, resulting in a total of 3,662,051 patents. For more details, see Appendix <ref>.] As is readily apparent, the evolution of each index differs markedly: the <cit.> methodology produces an average annual CD index that starts at 0.39 in 1980 (the first observation year) but declines rapidly over time; our methodology instead indicates an average annual CD index that starts at 0.09 in 1980 and declines more gradually over time. Fig. <ref> nonetheless shows marked convergence in the two indices in the later years of the sample—particularly since 2000.
The difference in the average CD indices arises from <cit.> truncating all backward citations before 1976. To the best of our understanding, the authors do not explain or acknowledge this truncation explicitly despite it having important consequences. The <cit.> truncated CD index has much larger values compared to our untruncated CD index for those patents published closer to the truncation year. The reason is simple: patents published closer to the truncation year (e.g., 1980) frequently cite patents published in proximate years (e.g., pre-1976), while patents published well-past the truncation (e.g., 2010) infrequently cite those patents. In the earlier years, many backward citations are truncated; in the later years, only a few are. As a result, the CD indices diverge in the early years and converge in the latter years.
This implies that the <cit.> results are significantly biased due to the truncation of backward patent citations, with the bias strongest for those patents published closer to the truncation year. Hence, their findings of a sharp decline in the average patent disruptiveness can mostly be attributed to a reduction in measurement bias; not to a decline in disruptive innovations. After correcting for truncation bias, our methodology shows that the CD index has a much lower starting point and more modest decline over time. Specifically, 88 percent of the total decline in the average disruptiveness of patents found by <cit.> can be explained by the authors artificially truncating all backward citations to patents published before 1976.
§ WHY TRUNCATION MATTERS
An example helps illustrate our argument: We compare two patents that both appear in the <cit.> dataset:[Fig. <ref> in the Appendix shows the first page from each patent document listing the backward citations.] US 4181011 was issued in 1980; US 6511791 was issued in 2003. US 4181011 makes 13 citations to patents published between 1958 and 1974; US 6511791 makes 11 citations to patents published between 1987 and 2001. Using the <cit.> methodology and replication data (patentsview_analytical_df.csv), the number of backward citations for patent US 4181011 is zero—given the exclusion of backward citations to patents published before 1976. In contrast, the number of backward citations for patent US 6511791 is 11—the same number as listed in the patent document.
Fig. <ref> provides intuition as to how the truncation of backward citations at a given point in time can bias the CD index: without truncation, the focal patent has a CD index value of -0.5 and is considered consolidating; with truncation (i.e., excluding pre-1976 backward citations), the focal patent has a CD index value of 1 and is considered disruptive.
Formally, this can be shown as follows. <cit.> define the CD index for a focal patent as:
CD_t = 1/N∑_i=1^N(-2f_itb_it+f_it), with
f_it=
1, if i cites the focal patent within t years of post-publication of the focal patent
0, otherwise
and
b_it=
1, if i cites parts of the predecessors of the focal patent within t years of post-publication of the focal patent
0, otherwise
where N is the sum of the number of patents citing: the focal patent only (N_F), the focal patent and parts of its predecessors (N_B), and parts of its predecessors only (N_R) within t years after the focal patent publication. A forward citation increases the consolidating (disruptive) nature of the focal patent if the forward cited patent does (does not) cite the predecessor patents (i.e., those patents also cited by the focal patent).
We extend the CD index by directly considering truncation. Truncation affects the CD index in two ways. First, left-truncation (i.e., with respect to the earlier years) reduces the number of counted forward citations N if the truncated backward citations are cited within the t post-publication years of the focal patent:
N^tr<N^non-tr,
where tr stands for truncated and non-tr for non-truncated backward citations.
Second, left-truncation reduces the b_it value if a forward citation of the truncated predecessor is also a forward citation of the focal patent within t years of publication. In this case:
-f_itb_it^tr=0 and -f_itb_it^non-tr=-1 → -f_itb_it^tr>-f_itb_it^non-tr.
Considering equations (<ref>) and (<ref>), we distinguish among four distinct cases: (i) no backward citations are truncated; (ii) backward citations are truncated but none are cited within the t post-publication years of the focal patent; (iii) truncated backward citations are cited within the t post-publication years of the focal patent but none cite the focal patent; and (iv) the truncated backward citations are cited within the t post-publication years of the focal patent and at least one of them cite the focal patent. In case (i) and case (ii), equations (<ref>) and (<ref>) hold as equalities and truncation has no effect on the CD index:
CD_t^tr = CD_t^non-tr.
In case (iii), the inequality of equation (<ref>) holds but equation (<ref>) is strictly an equality. In case (iv), the inequalities of equations (<ref>) and (<ref>) hold. In case (iii) and case (iv), the truncated CD index is biased upward as long as the summation part of equation (<ref>) of the truncated CD index is positive, because each of the two effects artificially increases its value. Moreover, ceteris partibus, the bias is stronger in case (iv). Both results are easily seen by adding the inequalities of equations (<ref>) and (<ref>) to the CD index provided in equation (<ref>). As a result:
CD_t^tr > CD_t^non-tr.
Finally, truncation may lead to an upward or downward biased CD index if the summation of equation (<ref>) is negative and the inequalities of either equation (<ref>) or equations (<ref>) and (<ref>) hold. In this case:
CD_t^tr≷ CD_t^non-tr.
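To make these mechanics concrete, the following Python sketch computes the CD index for a single focal patent from raw citation sets and then recomputes it after dropping a pre-1976 predecessor, mirroring the intuition of the example above. The toy citation network and all names in it are ours for illustration; this is not the authors' replication code.

```python
def cd_index(focal_id, focal_backward, citations, window_citers):
    """CD index of one focal patent: (1/N) * sum(-2*f*b + f) over citing patents.

    focal_id       : identifier of the focal patent
    focal_backward : set of predecessor patents cited by the focal patent
    citations      : dict mapping each citing patent to the set of patents it cites
    window_citers  : patents published within t years after the focal patent
    """
    total, n = 0, 0
    for j in window_citers:
        cited = citations.get(j, set())
        f = 1 if focal_id in cited else 0          # j cites the focal patent
        b = 1 if cited & focal_backward else 0     # j cites a predecessor of the focal patent
        if f == 0 and b == 0:
            continue                               # j is unrelated and not counted in N
        n += 1
        total += -2 * f * b + f
    return total / n if n else None

# Toy network: the focal patent cites one pre-1976 predecessor, P_old.
citations = {"A": {"FOCAL", "P_old"}, "B": {"FOCAL", "P_old"}, "C": {"FOCAL"}}
citers = ["A", "B", "C"]

full = cd_index("FOCAL", {"P_old"}, citations, citers)   # -0.33: consolidating
trunc = cd_index("FOCAL", set(), citations, citers)      # 1.00: looks disruptive once P_old is truncated
print(full, trunc)
```

In this toy case the same focal patent flips from consolidating (CD = -1/3) to maximally disruptive (CD = 1) purely because its pre-1976 backward citation is dropped, which is the channel driving the bias discussed above.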
§ ASSESSING TRUNCATION EFFECTS
Fig. <ref> categorizes focal patents by the number of backward citations to demonstrate the impact of truncation. The left panel uses the <cit.> methodology; the right panel uses our methodology.
Both panels show that the majority of patents have backward citations. However, notable differences are found depending upon whether truncation is or is not present. The left (truncation) panel indicates that 38 percent of patents had no backward citations in 1980 and 99 percent of patents had no backward citations in 1976 (the truncation year), given that the only backward citations included are those patents also published in 1976. The right (no truncation) panel indicates that less than two percent of patents had no backward citations in these years. Fig. <ref> also indicates that patents published further from the truncation year have lower proportions of truncated backward citations. The reason is again straightforward: More recent patents tend to cite more recent patents.
It is readily apparent that the number of years considered for backward citations matters. The analysis of <cit.> begins in 1980 and includes backward citations to 1976 (the first year that USPTO patents are available). Hence, patents published in 1980 include four years of backward citations. Fig. <ref> shows that had their analysis started in 1976 (i.e., the year of truncation), the bias would be much larger: nearly all patents from 1976 would be classified as disruptive. Fig. <ref> also shows that the starting year decision does not alleviate truncation bias and measurement concerns: e.g., had their analysis started in 1990 or in 2000, a four-year window for backward citations would produce similarly sharp declines in the CD index in the earliest years after the truncation year which then stabilizes and converges to the unbiased index over time.
Fig. <ref> indicates that this bias is not identical across the technologies examined by <cit.>. The bias is most pronounced in the aggregate technology "Mechanical" and least pronounced in "IT". The difference is again most likely explained by truncation: IT patents have developed more recently and rapidly in comparison to Mechanical patents, suggesting older patents are relatively more often cited in the latter category than in the former category (see Appendix Fig. <ref>). As a result, fewer backward citations are truncated in IT patents and more backward citations are truncated in Mechanical patents.
§ UPDATING YEARS AND PATENT LAW CHANGE
We examine two factors that could affect the CD index post-2010. First, we update the analysis window using patent data to 2016. Second, we consider a major change to U.S. patent law: in particular, the Inventor Protection Act of 1999 requires that U.S. patent applications be published 18 months after the initial application filing, effectively shifting citation patterns from strictly patent grants to both patent grants and patent applications <cit.>.[See Appendix <ref> for more details on the additional data used, and Appendix <ref> for a more detailed discussion of how excluding citations to patent applications affects the CD index of patents.] Fig. <ref> shows these post-2010 changes in three average CD_5 indices: the <cit.> methodology (I); our methodology including all backward citations to strictly patent grants (II); and our methodology including all backward citations to patent grants and patent applications (III). Methodology (III) considers only those citations to patent applications that receive patent grants by the end of 2021. This approach best achieves consistency between the pre- and post-Inventor Protection Act periods, as only patent grants were allowable citations.[Fig. <ref> in the Appendix shows similar results when citations to patent applications that have not (yet) received patent grants are included.]
Fig. <ref> is identical to Fig. <ref> up to 2001 because—prior to the Inventor Protection Act's passage—patents only cited patent grants. The inclusion of citations before 1976 eliminates truncation bias and subsequently decreases the CD index over 1980-2005—as seen by comparing (I) to (II) and (III). The inclusion of citations to patent applications eliminates exclusion bias and subsequently decreases the CD index over 2005-2016—as seen by comparing (I) and (II) to (III). Correcting for both biases indicates that the average CD index decreases slightly over 1980-2016 and is relatively stable since 2005—as shown in methodology (III).
§ INCREASING NUMBER OF HIGHLY DISRUPTIVE PATENTS
With both biases considered and accounted for, our analysis documents a notable increase in the number of highly disruptive patents (i.e., patents with CD_5∈(0.75, 1]) over 1980-2016 and particularly since 2010: Panel (a) of Fig. <ref> shows that the number of highly disruptive patents has more than doubled since 1980 (Methodology III). It also reveals the major distortions in the CD index if truncation bias (via pre-1976 backward citations) and exclusion bias (via backward citations to patent applications) are not considered (Methodology I and II).
Using 1980 as a benchmark, Panel (b) of Fig. <ref> shows that disruption is heterogeneous across technologies: the number of highly-disruptive IT patents increases nearly twenty-fold; the number of highly-disruptive Pharma patents increases more than four-fold; and the number of highly-disruptive Mechanical patents is stable. These results challenge the <cit.> argument that "despite large increases in scientific productivity, the number of papers and patents with CD_5 values in the far right tail of the distribution remains nearly constant over time" (p. 140). Our findings are, however, consistent with a recent scientific publication that indicates "a paper published in Nature <cit.> showed that research is becoming less disruptive. This does not seem to be the case for the next generation of cancer chemotherapy based on the methionine dependence of cancer" <cit.>.
Thus far, we have taken an aggregated methodological approach, but this masks two disparities inherent at the individual patent level. First, the truncation of backward citations misclassifies a large proportion of patents as highly disruptive. Appendix Table <ref> illustrates how CD_5 categories for patents published in 1980 calculated using the <cit.> (truncation) methodology are distributed across CD_5 categories using our (no truncation) methodology. As an example, 25 percent of the USPTO patents in 1980 that are considered highly disruptive under the truncation methodology are actually consolidating under the no truncation methodology.
Second, the exclusion of backward citations to patent applications misclassifies a large proportion of recent patents as highly disruptive. Appendix Table <ref> shows how patents in 2016 that belong to a particular CD_5 category using the <cit.> (truncation) methodology are distributed across CD_5 categories using our (no truncation) methodology and accounting for backward citations to patent applications. As an example, 19 percent of the USPTO patents in 2016 that are considered highly disruptive using the <cit.> methodology are actually consolidating when accounting for both truncation and backward citations to patent applications. Determining the disruptive potential of individual patents thus requires careful consideration to ensure against bias and misclassification. Such bias and misclassification is the case when the data provided by <cit.> or <cit.> are used for analysis and decision-making.
§ DISCUSSION
Our examination of the <cit.> CD index methodology was prompted by the stark decrease in the share of disruptive innovation in science and technology the authors find in patents over 1980-2010 and in scientific publications over 1945-2010. At least for patents, our analysis instead suggests that the decrease in the CD index is mainly due to the truncation of backward citations, while a modest increase in the CD index post-2005 is mainly driven by the exclusion of backward citations to patent applications brought on by a change in patent law.
Variations of the index proposed in <cit.> and based upon other research <cit.> are also affected by truncation bias and exclusion bias when backward citations to granted patents and to patent applications are not appropriately accounted for. Our analysis is important for the research community that may overlook these fundamental issues and work with biased data – either via the CD index or some other measure.
Our analysis highlights, in particular, the dangers of truncation bias from backward citations and exclusion bias from citations to patent applications. Our correction and methodology shows—in comparison to <cit.>—that: (1) the CD index starting level is significantly lower, modestly declines up to 2005, and is mostly stable since; (2) the starting truncation "date" does not matter—any backward citation "cutoff" in any chosen year can produce similar CD index patterns; (3) truncation bias differences are heterogeneous across technology classes; and (4) the number of highly disruptive patents has increased—particularly since 2010 and in IT.
Our analysis suggests, in general, caution against overt government policy interventions, research policy changes, or substantive industry and firm reorganizations that seek to improve or disrupt the innovation status-quo. If it was accurate that breakthroughs in science and technology have substantially decreased over time, the current organization of the entire science and technology innovation process could be called into question and should be reevaluated. In particular, reforms that seek to improve the creation of disruptive innovation or restart innovation within R&D centers and research labs—e.g., in government agencies, universities and firms—would become paramount.
But our results suggest—at least in the patent data—that the so-called recent slump in disruptive innovation is illusory. An unbiased CD index is instead markedly stable over the past several decades, and highly disruptive patents have increased overall–especially in IT and Pharma. Thus, an interesting question for future research could investigate why these industries have seen increases in disruptive innovation, relative to others. Given our findings, we should finally emphasize that premature changes in the current innovation processes of science and technology might bring with them unintended consequences.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Jeffrey T. Macher: Conceptualization, Writing – original draft, Writing – review & editing. Christian Rutzer: Calculation, Conceptualization, Investigation, Visualization, Writing – original draft, Writing – review & editing. Rolf Weder: Conceptualization, Writing – original draft, Writing – review & editing.
§ DECLARATION OF COMPETING INTEREST
The authors have received no external funding and declare no competing interests.
§ DATA AVAILABILITY
The code and data needed to reproduce this study are available at Zenodo <https://zenodo.org/deposit/8020004>.
The raw data used in this research was obtained from PatentsView, a platform that provides unrestricted access to detailed information on all granted USPTO patents published since 1976. All raw data can be downloaded at <https://patentsview.org/download/data-download-tables>.
We used R version 4.1.2 to perform the computations, mainly using the tidyverse and data.table libraries for data processing and ggplot2 for visualization. The computations were performed using SLURM on the sciCORE cluster at the University of Basel, Switzerland.
§ DATA
The study uses data from the February 21, 2023 version of PatentsView, which includes all U.S. patents published between 1976 and September 29, 2022. As in previous research <cit.>, we focus only on USPTO patents. We include PatentsView data that assigns each USPTO patent to technology categories as defined by the World Intellectual Property Organization (WIPO) <cit.>. The specific WIPO technologies used to create our aggregated technology groups are provided in the Table <ref>. It is important to note that our technology classification differs from that of <cit.>, who categorize patents based on NBER technology fields that are no longer available.
In Sections 2-4, we use all granted utility patents over 1980-2010. <cit.> examine granted utility patents over this timeframe and in the NBER technology fields of "Chemical," "Computers and Communications," "Drugs and Medical," "Electrical and Electronic," and "Mechanical." Their data includes 3,046,672 patents and 29,777,375 backward citations (see replication data file patentsview_analytical_df.csv of <cit.>). We examine granted utility patents over the same timeframe, but cover all technology fields. Our data thus includes 3,662,051 patents and 42,617,016 backward citations in the case of no truncation (i.e., as they appear in the original patent documents) and 35,192,186 backward citations in the case of truncation.
In Sections 5-6, we expand the dataset over 1980-2016, which results in 5,324,224 granted utility patents and 74,943,850 backward citations. With the passage of the Inventor Protection Act of 1999, patents beginning in November 2000 include backward citations to patent applications and to patent grants. Thus, in our methodology (III), we include all backward citations to patent grants and to patent applications that were eventually granted using supplemental PatentsView data. We replace all citations to patent applications with the corresponding patent grant information: i.e., the grant number and publication date. This replacement can result in some backward citations with publication dates later than the citing patent, but the backward citation publication date is not relevant to the CD_5 index methodology. What matters are publication dates of forward citations. We do not include patent applications published within five years after the publication year of a focal patent but with corresponding patent grants published later in the analysis. Finally, we do not include citations to patent applications that have not been granted by the end of 2021 (the latest date available) in the analysis. This approach results in 87,880,452 backward citations.
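As a rough sketch of the replacement step just described, the pandas snippet below maps citations to patent applications onto their eventual grants and filters on the grant date. The file names and column names (citing_patent_id, cited_application_id, application_id, patent_id, grant_date) are placeholders and do not necessarily match the PatentsView schema.

```python
import pandas as pd

# Placeholder inputs: citations made to patent applications, and a crosswalk
# from application numbers to eventually granted patents.
app_citations = pd.read_csv("citations_to_applications.csv")   # citing_patent_id, cited_application_id
app_to_grant = pd.read_csv("application_to_grant.csv")         # application_id, patent_id, grant_date

# Keep only citations whose application was granted by the end of 2021.
merged = app_citations.merge(
    app_to_grant, left_on="cited_application_id", right_on="application_id", how="inner"
)
merged = merged[merged["grant_date"] <= "2021-12-31"]

# Re-express each application citation as a citation to the corresponding grant,
# so it can be pooled with ordinary grant-to-grant citations.
resolved = merged.rename(columns={"patent_id": "cited_patent_id"})[
    ["citing_patent_id", "cited_patent_id"]
]
```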
§ CD INDEX AND PATENT LAW CHANGE
The Inventor Protection Act of 1999 might bias the CD index if citations to patent applications are not considered—especially since patent applications are now increasingly cited vis-a-vis patent grants. As <cit.> note, citations to patent applications accounted for 25 percent of all citations made by USPTO patents in 2015.
First, as Fig. <ref> illustrates, ignoring patent applications can create bias by falsely declaring a patent as completely disruptive. The channels are the same as when backwards citations are truncated in time. Formally, it affects the CD index via b_it and N in equation (<ref>), but the driving force is now missing citations to patent applications. As shown by equations (<ref>)-(<ref>), this can create a bias in the CD index similar to left-truncated backward citations.
Second, ignoring patent applications can create bias if a patent application is cited in one instance and the subsequent patent grant is cited in another instance. This can occur from citations to predecessor patents or to the focal patent—i.e., applications and grants. Fig. <ref> illustrates this possibility in more detail. Formally, it affects the CD index in equation (<ref>) as before in b_it or N but now also in f_it. This may result in an upward or downward bias of the CD index. An upward bias occurs, for example, when the focal patent cites a patent application and a successor patent cites the focal patent and the patent grant (instead of the patent application). In such a case, the citation to the patent application is missed. A downward bias occurs, for example, if a successor patent only cites the patent application of the focal patent and no predecessor patents (applications or grants) of the focal patent. As this would not be considered a forward citation to the focal patent, it thereby loses some of the disruptive value of the focal patent. The Inventor Protection Act of 1999 may therefore bias the CD index if citations to patent applications for patents published after November 29, 2000, are not properly taken into account.
|
http://arxiv.org/abs/2306.07370v1
|
20230612185147
|
Nanomechanical behavior of pentagraphyne-based single-layer and nanotubes through reactive classical molecular dynamics
|
[
"J. M. De Sousa",
"W. H. S. Brandão",
"W. L. A. P. Silva",
"L. A. Ribeiro Junior",
"D. S. Galvão",
"M. L. Pereira Júnior"
] |
physics.comp-ph
|
[
"physics.comp-ph",
"cond-mat.mes-hall",
"00-XX",
"J.2.0; I.6.0"
] |
Authors:
- J. M. De Sousa (corresponding author, [email protected]) [IFPI, UNICAMP1]
- W. H. S. Brandão [UFPI]
- W. L. A. P. Silva [CiMa]
- L. A. Ribeiro Junior ([email protected]) [UnB]
- D. S. Galvão ([email protected]) [UNICAMP1, UNICAMP2]
- M. L. Pereira Júnior ([email protected]) [UnB2, CiMa]

Affiliations:
- [IFPI] Instituto Federal de Educação, Ciência e Tecnologia do Piauí – IFPI, Primavera, São Raimundo Nonato, 64770-000, Piauí, Brazil
- [UNICAMP1] Applied Physics Department, “Gleb Wataghin” Institute of Physics - IFGW, University of Campinas – UNICAMP, Rua Sérgio Buarque de Holanda, Cidade Universitária, Campinas, 13083-859, São Paulo, Brazil
- [UFPI] Department of Physics, Federal University of Piauí – UFPI, Ininga, Teresina, 64049-550, Piauí, Brazil
- [CiMa] University of Brasília, Faculty UnB Planaltina, PPGCIMA, Brasília, 73345-010, Brazil
- [UnB] University of Brasília, Institute of Physics, Brasília, 70910-900, Brazil
- [UNICAMP2] Center for Computing in Engineering and Sciences, University of Campinas, Rua Sérgio Buarque de Holanda, 777 - Cidade Universitária, Campinas, 13083-859, São Paulo, Brazil
- [UnB2] University of Brasília, Faculty of Technology, Department of Electrical Engineering, Brasília, 70910-900, Brazil
In a recent theoretical study, a new 2D carbon allotrope called pentagraphyne (PG-yne) was proposed. This allotrope is derived from pentagraphene by introducing acetylenic linkages between sp^3 and sp^2 hybridized carbon atoms. Due to its interesting electronic and structural properties, it is of interest to investigate the mechanical behavior of PG-yne in both monolayer and nanotube topologies. To achieve this, we performed fully atomistic reactive (ReaxFF) molecular dynamics simulations. Our results show that the average Young's modulus of PG-yne monolayers is approximately 913 GPa at room temperature, while it ranges from 497-789 GPa for the nanotubes studied. Furthermore, we observed that PG-yne monolayers transition directly from the elastic regime to complete fracture at a critical strain, without a plastic regime. In contrast, some PG-yne nanotubes exhibit an extended flat plastic regime before total fracture.
Keywords: Molecular Dynamics; Mechanical Properties; Pentagraphynes; ReaxFF; Nanofracture
§ INTRODUCTION
Carbon-based two-dimensional (2D) materials have gained significant attention due to their unique physical and chemical properties, which are useful in flat electronics <cit.>. Among them, graphene stands out as one of the most popular 2D materials <cit.>. It consists of a single layer of carbon atoms arranged in a hexagonal lattice, showing exceptional electrical conductivity, mechanical strength, and thermal properties <cit.>. These traits combined have made it a promising candidate for a wide range of applications, including electronics <cit.>, energy storage <cit.>, and sensors <cit.>.
Other 2D carbon allotropes, including γ-graphyne <cit.>, monolayers of biphenylene <cit.>, amorphous carbon <cit.>, and fullerene networks <cit.>, have also been synthesized. Despite these successes, there has been an ongoing effort to develop new materials that can address some of the limitations of graphene, such as its lack of an electronic bandgap, which restricts its use in digital electronic devices.
The discovery of new 2D carbon allotropes has spurred a surge of theoretical investigations <cit.>. One such investigation proposed pentagraphene (PG), a material composed entirely of pentagonal carbon rings arranged in a pattern resembling the Cairo pentagonal tiling <cit.>. This unique material combines high stability, negative Poisson's ratio, and anisotropic conductivity. It also features a semiconducting and indirect band gap of 3.25 eV, and Young's modulus and Poisson's ratio values of approximately 263.8 GPa·nm and -0.068, respectively. Although the synthesis of PG has not yet been realized, PG has inspired further studies aimed at developing new materials with similar topology <cit.>, using fused pentagonal rings that tend to preserve its attractive properties.
A novel 2D carbon allotrope, namely pentagraphyne (PG-yne), was proposed in a theoretical study recently <cit.>. PG-yne is more energetically favorable than other graphyne members, including experimentally synthesized graphyne <cit.> and graphdiyne <cit.> monolayers. It is derived from PG by inserting acetylenic linkages between sp^3 and sp^2 hybridized carbon atoms, in the same way that graphene can generate graphynes. State-of-the-art calculations have shown that PG-yne is dynamically, thermally, and mechanically stable, able to withstand temperatures up to 1000 K. It exhibits intrinsic semiconducting properties, with an electronic bandgap of around 1.0 eV that can be tuned by applying strain. These remarkable properties motivate the comprehensive study of PG-yne's mechanical properties and fracture patterns performed here, which may expand the understanding of the range of applications for structures of similar topology and can stimulate further studies into its synthesis.
In the present study, we have carried out fully atomistic reactive (ReaxFF) classical molecular dynamics (MD) simulations to investigate the mechanical properties and fracture patterns of PG-yne monolayers and nanotubes (PG-yneNTs), as depicted in Figure <ref>. Our simulations considered a range of tube diameters and temperature values. Young's modulus of PG-yne monolayers was determined to be around 913 GPa at room temperature, whereas the values varied between 497-789 GPa for the nanotubes considered in this study. When a critical strain was reached, both PG-yne and (n,0)PG-yneNTs underwent complete fracture without exhibiting a plastic deformation stage. Conversely, (n,n)PG-yneNTs exhibited a flat plastic region between the elastic and completely fractured regimes.
§ METHODOLOGY
To investigate the mechanical properties and fracture patterns of PG-yne monolayers and PG-yneNTs, we performed fully atomistic reactive molecular dynamics (MD) simulations using the LAMMPS <cit.> code with the ReaxFF <cit.> potential. ReaxFF is a widely used interatomic force field for studying the mechanical properties of nanostructures since it can handle bond formation and breaking at the atomic level <cit.>. We model a PG-yne supercell and nanotubes of the (n,n) and (n,0) chirality types, shown in Figure <ref>. Structural details of the PG-yne and PG-yneNTs are presented in Table <ref>.
When creating the PG-yneNTs models computationally, we define the chiral vector C_h as na+mb, where a and b are the lattice vectors and n and m are integers that determine the chirality. The diameter of the nanotube is calculated using d_t=|C_h|/π. The translational vector T, perpendicular to C_h, is determined by T=t_1a+t_2b, where t_1 and t_2 are integers obtained through the inner product C_h·T=0. The length of the nanotube is L=|T|, and the chiral and translational vectors define the nanotube unit cell. Additionally, since PG-yne's unit cell is a square (a=b), configurations (m,0) and (0,m) for PG-yneNTs are the same, except for a rotation.
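The geometric construction above reduces to a few lines of code. The sketch below computes the tube diameter and unit-cell length for a square lattice (a = b); the lattice constant used in the example is a placeholder, not the actual PG-yne value.

```python
import numpy as np
from math import gcd, pi

def nanotube_geometry(n, m, a_len):
    """Chiral vector, diameter, and translational vector for a square 2D lattice."""
    a = np.array([a_len, 0.0])
    b = np.array([0.0, a_len])           # square lattice: |a| = |b|
    c_h = n * a + m * b                  # chiral vector C_h = n*a + m*b
    d_t = np.linalg.norm(c_h) / pi       # tube diameter d_t = |C_h| / pi

    # T = t1*a + t2*b with C_h . T = 0, so (t1, t2) is proportional to (-m, n);
    # dividing by gcd(n, m) gives the shortest such lattice vector.
    g = gcd(n, m) if (n or m) else 1
    t1, t2 = -m // g, n // g
    T = t1 * a + t2 * b
    return d_t, np.linalg.norm(T)

# Example: an (8,0) tube with a placeholder lattice constant of 5.0 Å
d, L = nanotube_geometry(8, 0, 5.0)
print(f"diameter = {d:.2f} Å, unit-cell length = {L:.2f} Å")
```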
In our computational approach, we first minimized the energy of the PG-yne and PG-yneNTs systems. Subsequently, we coupled them to a thermostat chain for thermodynamic equilibrium. We performed constant NPT ensemble integration at null pressure and at 300 K to ensure no remaining stress, using a Nose/Hoover <cit.> pressure barostat for 25 ps. Following this, we coupled all the systems in the canonical NVT ensemble for an additional 25 ps. This enabled the generation of sampled positions and velocities for a range of temperatures from 10 K up to 1200 K.
To investigate the overall mechanical behavior of PG-yne and PG-yneNTs, we conducted tensile tests by stretching the systems until rupture. The tests were performed using a constant engineering tensile strain rate of 10^-6/fs within an NVT ensemble. Stress was applied to PG-yne along the x-direction (layers) and to PG-yneNTs along the z-direction (tubes), while the other directions were allowed to relax freely. We analyzed the stress-strain curves to extract the elastic properties of each structure at different temperatures ranging from 10 K up to 1200 K. We also described the bond breaks and defined the fracture patterns of PG-yne and PG-yneNTs structures by analyzing the MD snapshots. To visualize the MD snapshots, we used VMD <cit.> software and the nanotube structures were generated using a Fortran code<cit.>.
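To illustrate how elastic properties are extracted from these tensile tests, the sketch below fits the low-strain portion of a stress-strain curve to estimate Young's modulus and reads off the critical stress and fracture strain. The file name, the two-column layout, and the 2% linear-fit window are assumptions for illustration, not the actual LAMMPS output used here.

```python
import numpy as np

# Assumed two-column output: engineering strain (dimensionless), stress (GPa)
strain, stress = np.loadtxt("stress_strain.dat", unpack=True)

# Young's modulus from a linear fit over the small-strain elastic window
elastic = strain <= 0.02                      # assume the first 2% strain is linear
young_modulus = np.polyfit(strain[elastic], stress[elastic], 1)[0]

# Critical (ultimate) stress and the corresponding fracture strain
i_max = np.argmax(stress)
sigma_c, eps_c = stress[i_max], strain[i_max]

print(f"Y_M ~ {young_modulus:.1f} GPa, sigma_C ~ {sigma_c:.1f} GPa at strain {eps_c:.3f}")
```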
§ RESULTS
We start the discussion by analyzing the bond and angle distributions of PG-yne and PG-yneNTs after lattice thermalization at different temperatures. The corresponding profiles are shown in Figure <ref>, where black, red, green, blue, and yellow lines represent the 10, 300, 600, 900, and 1200 K cases, respectively. In Figure <ref>(a), the bond distribution profile displays a prominent peak centered at 1.2 Å (sp^2-hybridized C_2-C_3 and C_5-C'_18 bonds), and four similar peaks within the range of 1.4-1.6 Å (sp^2-hybridized C_1-C_2, C_3-C_4 and C_4-C_5 bonds) at 10 K. On the other hand, in Figure <ref>(b), the angle distribution profile exhibits two peaks between 100^∘-120^∘ (C_3-C_4-C_5, C_3-C_4-C_6, C_7-C_8-C_9 and C_7-C_8-C_15 angles) and 160^∘-180^∘ (C_1-C_2-C_3, C_2-C_3-C_4, C_4-C_5-C'_18 and C_4-C_6-C_7 angles) at 10 K. In turn, the two types of nanotubes, (n,n) and (n,0), exhibit practically the same curves, overlapping each other, for the distributions of bonds and angles (Fig. <ref>(c)-(d)). Furthermore, these distributions show peaks around intervals practically equal to those seen for membranes at 300 K.
The C-C bond-length values increase with temperature due to larger amplitude thermal vibrations of the carbon atoms. In this sense, the bond distribution profile changes, with the most prominent peak holding at the same position and the four peaks merging into a single one centered at 1.5 Å for temperatures between 100-1200 K. These results are consistent with the original work and indicate the structural stability of PG-yne with the coexistence of sp^2 and sp^3-like bonds. PG-yne retains its planar morphology, with two well-defined angle distributions centered at 110^∘ and 170^∘ for temperatures between 100-1200 K, as shown in Figure <ref>(b).
Figure <ref> displays stress-strain curves for PG-yne at temperatures ranging from 10 up to 1200 K. The left and right panels show the x- and y-direction stretching profiles. Black, red, green, blue, and yellow lines indicate the 10, 300, 600, 900, and 1200 K cases. These curves indicate that the PG-yne monolayers transition directly from elastic to completely fractured regimes at a critical strain (ϵ_C), which is also known as the fracture strain. The stress-strain curves show a similar trend for tensile loading applied along the x- or y-direction, in line with the topology of PG-yne. Higher temperatures result in lower values of σ_C, the critical stress at the first fracture, indicating that PG-yne becomes easier to fracture as temperature increases. This can be attributed to the greater amplitude and probability of bond breaking due to more pronounced thermal vibrations of carbon atoms in the nanostructure at higher temperatures.
PG-yneNTs can exhibit a plastic stage before fracturing, illustrated in Figure <ref>. Their topology significantly influences the mechanical properties. The left and right panels of the figure show the stress-strain interplay for the (n,n) and (n,0) cases, respectively, when subjected to stretching along their longitudinal (z-direction). In the left panel, the (3,3), (6,6), and (8,8) cases are represented by black, red, and green lines, respectively. In contrast, the right panel shows the (4,0), (8,0), and (12,0) cases at 300 K. The distinct curve profiles indicate that PG-yneNTs can withstand more strain than PG-yne layers before fracturing. This behavior can be attributed to the tubular topology of PG-yneNTs, which provides many pathways for stress dissipation, thus preventing early and brittle fractures compared to PG-yne monolayers.
The mechanical properties of (n,n)-type and (n,0)-type PG-yneNTs differ due to the different arrangements of carbon atoms, leading to variations in bond angles and bond lengths. The (n,0)-type PG-yneNTs possess sp^3-hybridized C-C bonds aligned along the direction of the applied strain, resulting in a higher resistance to deformation than the (n,n)-type PG-yneNTs. The (n,n)-type PG-yneNTs contain sp^2-like C-C bonds aligned with the tube axis, which resist deformation and fracture by forming linear atomic chains (LACs) at about 25% within the plastic stage. However, the (n,0)-type PG-yneNTs have these bonds oriented at an angle to the tube axis, with average fracture strain also close to 25%, but with slightly higher σ_C (see Table <ref>). The fracture strain in (n,0)-type PG-yneNTs is not influenced by the tube diameter, and their stress-strain interplay is similar to the ones for PG-yne monolayers. Figure <ref>(c) and (d) provide visual representations of the bond configurations in PG-yneNTs along their longitudinal axis.
The PG-yneNTs have been found to withstand higher strain values before breaking compared to PG-yne monolayers, but they can withstand lower critical stress values, as shown in Table <ref>. Averaged over all simulated temperatures, the Young's modulus (Y_M) of PG-yne monolayers is approximately 823.92 GPa along the x-direction and 897.81 GPa along the y-direction, while PG-yneNTs have averages of 731.69 GPa and 642.72 GPa for the (n,n) and (n,0) types, respectively. Moreover, smaller-diameter nanotubes tend to be stiffer, with higher Young's modulus but smaller critical strains. At room temperature, Y_M is approximately 913 GPa for PG-yne monolayers, while for PG-yneNTs it ranges from 497-789 GPa. It is worth noting that these values are comparable to those reported for other carbon-based 2D materials <cit.> and are lower than those for graphene <cit.>, mainly because of the porous structure of PG-yne.
We also analyzed the fracture patterns of PG-ynes using MD simulations. Figures <ref> and <ref> illustrate typical MD snapshots of the fracture process for PG-yne under x- and y-directional stress at 10 K, respectively. The color-coding scheme in these figures shows the von Mises (VM) stress per-atom values <cit.>, where red and blue colors indicate high- and low-stress accumulation, respectively. These VM values help identify the points or regions where the fracture starts or propagates. More details on VM calculations can be found in references <cit.>.
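For reference, the per-atom VM value used in this color scheme follows the standard von Mises formula applied to the six independent components of the per-atom stress tensor, as in the sketch below; the example values are arbitrary and only illustrate the calculation.

```python
import numpy as np

def von_mises(sxx, syy, szz, sxy, sxz, syz):
    """Von Mises stress from the six independent stress-tensor components."""
    return np.sqrt(
        0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
        + 3.0 * (sxy ** 2 + sxz ** 2 + syz ** 2)
    )

# Example with arbitrary per-atom values (one row per atom)
s = np.array([[1.0, 0.2, 0.0, 0.1, 0.0, 0.0],
              [0.5, 0.5, 0.5, 0.0, 0.0, 0.0]])
print(von_mises(*s.T))   # second atom is purely hydrostatic, so its von Mises stress is 0
```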
Figure <ref>(a) shows the unstressed PG-yne monolayer, dominated by a blue color indicating low stress. At 21.00% of strain (Figure <ref>(b)), the monolayer exhibits a moderate and uniform stress accumulation, represented by a green color, without any visible signs of fracturing. Once the strain surpasses a critical threshold, the fracture occurs, resulting in the breaking of carbon-carbon bonds in the sheet and rapid propagation of cracks that separate the sheet into smaller fragments, as shown in Figure <ref>(c) at 22.5% of strain. In carbon-based 2D materials, it is typical for cracks to propagate along the opposite direction of the applied strain <cit.>. The formation of LACs, shown in Figure <ref>(c), is insufficient to establish a distinct plastic stage in the fracture process of PG-yne sheets. Figures <ref>(d) and <ref>(e) display the lattice arrangement in the region where the first bond breaking occurred for the critical fracture strain (22.00%) and a higher strain value (22.01%), respectively. These snapshots reveal that the bonds oriented close to the strain direction are the first ones to break.
In Figure <ref>, the fracture patterns of PG-yne under y-directional stress at 10 K are shown, and they follow the same trend observed along the x-directional stress. The unstressed PG-yne monolayer is presented in Figure <ref>(a). At 21.01% of strain (Figure <ref>(b)), the monolayer experiences a moderate and uniform stress accumulation, indicated by green colors, without any visible signs of fracturing. However, at 22.15% of strain, fast crack propagation leads to fracture, as shown in Figure <ref>(c), without any LAC formation. The lattice arrangement in the region where the first bond breaking occurred for the critical fracture strain (21.85%) and a higher strain value (21.95%) is presented in Figures <ref>(d) and <ref>(e), respectively, revealing that the bonds oriented close to the strain direction are the first ones to break.
Finally, we present some MD snapshots of the fracture process of the PG-yneNTs studied here. Again, we use a color scheme representing the per-atom VM stress values to help visualize the fracture process. Figures <ref> and <ref> show the representative cases (8,8)PG-yneNT and (8,0)PG-yneNT, respectively. As a general trend, the fracture patterns obtained in the simulations depend on the chirality of the tubes, as expected.
Figures <ref>(a) and <ref>(b) show the unstressed and partially stressed (8,8)PG-yneNT at 24.46% strain, respectively. The stress is primarily accumulated in the sp^2-like C-C bonds, which are also the first to break. Long LACs form as a result, as seen in Figure <ref>(c) at 50.59% of strain. The stretching dynamics cause the tube wall to almost collapse, as illustrated in Figure <ref>(d). The bonds in (n,n)PG-yne are oriented at an angle to the tube axis, leading to uneven load distribution and more complex tube fracture patterns with multiple cracks and LACs. Fragmentation occurs due to the formation of new free edges, which can nucleate new cracks and cause the structure to break into smaller pieces. This mechanism defines the plastic stage seen in the stress-strain curves of (n,n)PG-yneNTs (refer to Figure <ref>). Snapshots of the lattice arrangement in the region where the first bond breaking occurred for the critical fracture strain (19.26%) and a higher strain value (31.25%) are presented in Figures <ref>(d) and <ref>(e), respectively. These snapshots demonstrate that the bonds oriented at a certain angle to the strain direction are the first ones to break.
The fracture pattern of (n,0)PG-yneNTs is shown in Figure <ref>. Figure <ref>(a) and <ref>(b) display the unstressed and partially stressed (8,0)PG-yneNT at 22.30% strain, respectively. The stress tends to be distributed almost equally throughout the structure, and no LACs are formed (refer to Figure <ref>(c) at 30.29% of strain). The tube wall does not collapse during the stretching dynamics, as shown in Figure <ref>(d). Snapshots of the lattice rearrangement in the region where the first bond breaking occurred for the critical fracture strain (25.49%) and a higher strain value (25.65%) are presented in Figures <ref>(e) and <ref>(f), respectively. These snapshots reveal that the bonds perpendicular to the strain direction are the first ones to break (Figure <ref>(e)), followed by the bonds aligned to the strain direction that break almost simultaneously, as illustrated in Figure <ref>(f).
§ CONCLUSIONS
In summary, we have used fully atomistic reactive (ReaxFF) MD simulations to investigate the mechanical properties and fracture patterns of PG-yne and PG-yneNTs systems. Our simulations encompassed a range of tube diameters and temperature values. As a general trend, our findings revealed that topology significantly influences the mechanical properties of PG-yne 1D (nanotubes) and 2D (monolayers) structures. PG-yne monolayers transition directly from elastic to completely fractured regimes at a critical strain. The stress-strain interplay shows a similar trend for tensile loading applied along the x- or y-direction of the PG-yne monolayer, in line with its topology. Higher temperatures result in lower critical strain values, indicating that PG-yne becomes easier to fracture as temperature increases.
PG-yneNTs, in turn, can exhibit a plastic stage before fracturing. Their topology significantly influences the mechanical properties. PG-yneNTs can withstand more strain than PG-yne layers before fracturing. This behavior can be attributed to the tubular topology of PG-yneNTs, which provides many pathways for stress dissipation, thus preventing early and brittle fractures compared to PG-yne monolayers. The mechanical properties of (n,n)-type and (n,0)-type PG-yneNTs differ due to the different arrangements of carbon atoms, leading to variations in bond angles and bond lengths.
The (n,n)-type PG-yneNTs contain sp^2-like C-C bonds aligned with the tube axis, which resist deformation and fracture by forming linear atomic chains at about 45%, within the plastic stage. However, the (n,0)-type PG-yneNTs have these bonds oriented at an angle to the tube axis, making them easier to fracture at about 25%, but with slightly higher critical strain. The fracture strain in (n,0)-type PG-yneNTs is not influenced by the tube diameter, and their stress-strain interplay is similar to the ones for PG-yne monolayers.
Our simulations indicate that the critical stress for PG-yne monolayers is approximately 198.33 GPa, while for PG-yneNTs it ranges from 108.81 GPa up to 143.44 GPa. Moreover, smaller-diameter nanotubes tend to be stiffer, with higher Young's modulus but smaller critical strain values. The Young's modulus Y_M for PG-yne monolayers is approximately 913 GPa at room temperature, while for PG-yneNTs it ranges from 497-789 GPa.
§ ACKNOWLEDGEMENTS
This work received partial support from Brazilian agencies CAPES, CNPq, FAPDF, FAPESP, and FAPEPI. J.M.S and D.S.G thank the Center for Computational Engineering and Sciences at Unicamp for financial support through the FAPESP/CEPID Grant #2013/08293-7. CENAPAD-SP (Centro Nacional de Alto Desenpenho em São Paulo - Universidade Estadual de Campinas - UNICAMP) provided computational support for L.A.R.J and J.M.S (proj634 and proj842). W.H.S.B. and J.M.S were supported by Laboratório de Simulação Computacional Cajuína (LSCC) at Universidade Federal do Piauí. L.A.R.J thanks the financial support from Brazilian Research Council FAP-DF grants 00193-00000857/2021-14, 00193-00000853/2021-28, and 00193-00000811/2021-97, CNPq grants 302236/2018-0 and 350176/2022-1, and FAPDF-PRONEM grant 00193.00001247/2021-20. L.A.R.J also thanks ABIN grant 08/2019 and Núcleo de Computação de Alto Desempenho (NACAD) for computational facilities through the Lobo Carneiro supercomputer. Additionally, Fundação de Apoio à Pesquisa (FUNAPE) provided financial support through Edital 02/2022 - Formulário de Inscrição N.4. This research also used computing resources and assistance from the John David Rogers Computing Center (CCJDR) in the Institute of Physics “Gleb Wataghin”, at State University of Campinas.
|
http://arxiv.org/abs/2306.02177v1
|
20230603191134
|
Towards Coding Social Science Datasets with Language Models
|
[
"Christopher Michael Rytting",
"Taylor Sorensen",
"Lisa Argyle",
"Ethan Busby",
"Nancy Fulda",
"Joshua Gubler",
"David Wingate"
] |
cs.AI
|
[
"cs.AI"
] |
Researchers often rely on humans to code (label, annotate, etc.) large sets of texts. This kind of human coding forms an important part of social science research, yet the coding process is both resource intensive and highly variable from application to application. In some cases, efforts to automate this process have achieved human-level accuracies, but to achieve this, these attempts frequently rely on thousands of hand-labeled training examples, which makes them inapplicable to small-scale research studies and costly for large ones. Recent advances in a specific kind of artificial intelligence tool - language models (LMs) - provide a solution to this problem. Work in computer science makes it clear that LMs are able to classify text, without the cost (in financial terms and human effort) of alternative methods. To demonstrate the possibilities of LMs in this area of political science, we use GPT-3, one of the most advanced LMs, as a synthetic coder and compare it to human coders. We find that GPT-3 can match the performance of typical human coders and offers benefits over other machine learning methods of coding text. We find this across a variety of domains using very different coding procedures. This provides exciting evidence that language models can serve as a critical advance in the coding of open-ended texts in a variety of applications.
§ INTRODUCTION
The analysis of textual data–from sources like open-ended survey responses, social media posts, and legislative transcripts–has become increasingly important across many disciplines. Traditionally, researchers quantitatively analyzing these texts have trained research assistants (mostly undergraduate students) to code the material by assigning numbers and/or categories to text segments. However, such human coding is slow and expensive. Given variability in experience and perception among coders, researchers hire multiple people to evaluate the same texts when possible and then calculate intercoder agreement as a measure of confidence in the coding process. At times, even this repeated coding is not feasible, and researchers rely on a single human coder.
While this approach works for small amounts of text, it becomes impractical as a means to analyze the texts available in an increasingly information-rich world. As a result, many scholars seek automated alternatives. Dictionary-based methods <cit.> work in cases where clearly defined sets of words indicate the presence of particular content but struggle with nuance and generalization <cit.>.
One solution to this problem uses supervised machine learning (SML) models to code text in the place of humans, such as naive bayes, random forests, and SVMs <cit.>. Unfortunately, all of these require large datasets for training, which typically must be hand-generated by human coders, failing to eliminate the time and expense of using human coders <cit.>. SML methods also require large datasets with a sufficient sample size to train, test, and validate a SML procedure. Unsupervised methods exist - such as structural topic modeling <cit.> - but these still require significant amounts of data and extensive modeling and validation steps. Most importantly, they do not allow researchers to intentionally code specific themes and topics.
We propose that state-of-the-art artificial intelligence tools, known as language models (LMs), provide a powerful alternative to current techniques for coding texts in the social sciences, as has been done in labeling in other domains and methodologies including stance detection, psychology, and synthetic dataset generation <cit.>. We describe these tools and the application of one - GPT-3 <cit.> - to various coding tasks in political science. We show that GPT-3 performs coding tasks at or exceeding the level of human coders, even when it is given three or fewer labeled examples. We also find that GPT-3 performs comparably to SML procedures, with a fraction of the time and cost of those approaches.
§ LANGUAGE MODELS
In the most basic sense, LMs are a conditional probability distribution p(x_n|x_1,⋯,x_n-1) over tokens or words. LMs generate novel sequences of text by repeatedly sampling from this distribution. Crucially, LMs can be given initial inputs that reduce the probability of some output statements and increase the probability of others. Given the initial input of “Will you please”, a LM might assign high probability to “go” as the next term, and low probability to “fruit”. Changing the context to “Will you eat” switches those probabilities.
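As a toy illustration of this conditional structure (not of GPT-3 itself), the sketch below defines a hand-built next-word distribution and shows how changing the context shifts the probabilities, exactly as in the "Will you please" versus "Will you eat" example. The probabilities are invented for illustration.

```python
import random

# Hand-specified toy conditional distributions p(next_word | context)
toy_lm = {
    ("will", "you", "please"): {"go": 0.7, "stop": 0.25, "fruit": 0.05},
    ("will", "you", "eat"):    {"fruit": 0.6, "lunch": 0.35, "go": 0.05},
}

def sample_next(context):
    dist = toy_lm[tuple(context)]
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs, k=1)[0]

print(sample_next(["will", "you", "please"]))  # most often "go"
print(sample_next(["will", "you", "eat"]))     # most often "fruit"
```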
The use of LMs in social science has recently seen much progress and promise <cit.>. LMs can serve as useful tools in coding texts for at least two reasons. First, LMs are created and trained on massive amounts of human created statements. This means the models come already set up with an extensive understanding of human texts. Second (and relatedly), LMs have few-shot capabilities or the capacity to learn complicated tasks with only a handful of examples. This can almost entirely eliminate the need for hand-coded training data, providing advantages even over SML methods.
For our application here, we use GPT-3, one of the largest existing LMs. This language model was released by OpenAI in 2020, has 175 billion parameters, and was trained on more than 45 terabytes of text. In automated content analysis, others have considered different, custom-modified LMs such as BERT <cit.>, BART <cit.>, RoBERTa <cit.>, XLNet <cit.>, and ELMo <cit.>. However, these all require extensive fine-tuning and a similar number of labeled examples as SML methods. As such, we explore GPT-3 as a coding tool with only few-shot learning methods (and no fine tuning) to determine if it provides a more efficient automated coding tool that is more accessible to most social science researchers.
§ METHODOLOGY
To use GPT-3 to code texts, we provide it with a specific prompt designed to teach GPT-3 the coding process. This prompt varies from application to application, as the coding method depends on the specific concepts being coded. Throughout these applications, our goal is to give GPT-3 as little guidance as possible to demonstrate its flexibility and efficiency in learning how to act as a coder. In providing GPT-3 with these prompts, we discovered that the LM responded quite similarly across various versions of our guidance, and that it required only two or three coded examples to perform well on these tasks. For additional information on the process of engineering these prompts, see the Online Appendix.
After giving GPT-3 these prompts and observing how it codes a set of data, we compare that coding to a corresponding set of codes generated by humans. This allows us to directly compare the performance of GPT-3 to human coders. In the case of our last application, we also compare our results to a SML procedure. We make these comparisons based on coding agreement as well as efficiency (in terms of time and cost to code with other techniques).
We construct our prompts by providing instructions, categories (if necessary), exemplars (labeled examples of the task), and then the text to classify. We then compute GPT-3's probabilities for the next token over its vocabulary and select the token with the highest probability as the model's coding choice. For color-coded examples of prompts, see Figure <ref>.
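The sketch below mirrors this procedure: it assembles an instructions-plus-exemplars prompt and picks the label whose token receives the highest next-token probability. The query_next_token_logprobs function is a hypothetical stand-in for whatever LM API is used; it is not OpenAI's actual client code, and the instructions, exemplars, and label set are illustrative only.

```python
INSTRUCTIONS = "Code each text for positivity on a 1-5 scale."
EXEMPLARS = [
    ("Hard-working people who care about their neighbors", "5"),
    ("Corrupt liars who only want power", "1"),
]
LABELS = ["1", "2", "3", "4", "5"]

def build_prompt(text):
    parts = [INSTRUCTIONS, ""]
    for example, code in EXEMPLARS:          # two or three labeled exemplars
        parts.append(f"Text: {example}\nCode: {code}\n")
    parts.append(f"Text: {text}\nCode:")     # the text to classify
    return "\n".join(parts)

def query_next_token_logprobs(prompt):
    """Hypothetical LM call: returns {token: log-probability} for the next token."""
    raise NotImplementedError("replace with a real language-model API call")

def code_text(text):
    logprobs = query_next_token_logprobs(build_prompt(text))
    # Restrict attention to the admissible label tokens and take the argmax.
    return max(LABELS, key=lambda lab: logprobs.get(lab, float("-inf")))
```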
We evaluate GPT-3's coding performance using various intercoder agreement measures between GPT-3's codes and the codes generated by humans we hired to code the same texts. These are as follows:
§.§ Intraclass correlation (ICC)
Intraclass correlation measures inter-coder agreement among human coders using numerically ordered, (quasi-) continuous values in their coding (e.g., rating a text by some characteristic on a 1-5 scale). ICC scores are between -1 and 1 and are typically interpreted as follows: <0.5 = poor inter-coder agreement, 0.5-.75 = moderate agreement, 0.75-0.9 = good, and >0.9 = excellent <cit.>.
§.§ Joint probability of agreement
For tasks with un-ordered, categorical codes, we use two different measures. The first, joint-probability of agreement, measures the probability of any two coders agreeing. In the 2-coder case, where one of the coders is ground truth, this reduces to raw accuracy. Joint probability agreement ranges from 0 to 1. Between two coders, it is calculated as follows: 1/N∑_i=1^N1(y_1,i = y_2,i), where N is the number of instances being coded, and y_1,i, y_2,i are the first coder's and the second coder's respective codings of instance i. In the case of K coders, the joint probability agreement is the mean of the pairwise agreements.
§.§ Fleiss' kappa
Fleiss' kappa measures the degree to which the proportion of agreement among coders exceeds the agreement of fully random coders <cit.>. Used specifically to quantify intercoder agreement for categorical data, this measure ranges from -1 to 1. When κ = 0, it means that the two raters agree at a rate not better than chance. κ < 0 means increasing agreement worse than chance, and κ > 0 means increasing agreement greater than chance.
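To make the two categorical measures concrete, the following sketch computes the pairwise joint probability of agreement by hand and Fleiss' kappa via statsmodels (assuming statsmodels is installed); the small ratings matrix is invented for illustration.

```python
import numpy as np
from itertools import combinations
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Invented example: rows are coded texts, columns are coders, values are category codes
ratings = np.array([
    [0, 0, 1],
    [1, 1, 1],
    [2, 2, 0],
    [1, 1, 1],
])

# Joint probability of agreement: mean pairwise agreement across coder pairs
pairs = list(combinations(range(ratings.shape[1]), 2))
joint_agreement = np.mean([np.mean(ratings[:, i] == ratings[:, j]) for i, j in pairs])

# Fleiss' kappa: convert to a (texts x categories) count table first
counts, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(counts)

print(f"joint agreement = {joint_agreement:.2f}, Fleiss' kappa = {kappa:.2f}")
```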
§ EXPERIMENTS
We consider GPT-3's capacity to serve as a coder using data from four datasets: Pigeonholing Partisans (PP), New York Times Headlines (NYT), Congressional Hearings (Congress), and The Guardian Populism (TGP). We chose these datasets to maximize differences in coding tasks as a means of exploring GPT-3's limits. These four applications vary in the difficulty of the coding task, the domain (or topic) of the coding, the structure of the texts, and the measurement of the coded variable (ordinal, categorical, binary, etc.).
§.§ Pigeonholing Partisans (PP)
We first consider the ability of GPT-3 to act as a coder with data on Americans' stereotypes of Republicans and Democrats <cit.>. These data, collected in 2016, asked individuals to list four words or phrases that described typical supporters of the Democratic and Republican Parties.[More methodological details can be found in published discussions of this work. See <cit.>.]
This procedure is common in psychological studies of stereotypes <cit.>, and allows survey takers to describe partisans in their own words.
This dataset is too small for other kinds of automated coding and is an ideal way to consider how well GPT-3 can classify texts without extensive training sets.
To evaluate how well GPT-3 can serve as a coder on these kinds of short, open-ended texts, we recruited 2873 human coders through the survey platform Lucid <cit.> to code a total of 7675 descriptions of partisans. Each description was coded at least three times by a random set of coders, who were given minimal instructions for coding the texts.[These texts include those created by human respondents in the original data as well as texts created by GPT-3 and discussed in other, published work <cit.>. That work indicates that human respondents cannot distinguish between the two kinds of statements.] As such, the coders in this study should be considered "lightly trained" rather than rigorously instructed on the coding.
Coders rated the texts along five dimensions: (1) positivity (general positive/negative valence), (2) extremity (extreme or moderate quality of the words), and whether the text mentioned (3) character or personality traits, (4) government or policy issues, or (5) social groups. Each of these domains is important to the theoretical ideas of the original work on partisan stereotypes <cit.>.
After the human coding process was complete, we asked GPT-3 to complete a series of coding tasks on all 7675 texts directly analogous to those completed by humans. Next, we examined how closely GPT-3 follows individual human coders and human coding in the aggregate, along with how closely humans followed each other.
To that end, we calculated ICC scores with these data (Fig. <ref>). As coders are randomly assigned to texts and not all texts are scored by the same coders, we use ICC1k, which accounts for this structure <cit.>.
Our focus here is on the increase or decrease in ICC when GPT-3's codes are added to the three human codes. If GPT-3 improves the reliability of the coding, ICC should improve. If it does not offer this benefit, the ICC score should stay the same or decrease. We also compare adding GPT-3's scores to adding simulated scores to ensure that the addition of another coder by itself does not drive what we observe: (1) a coder who codes all texts as 0 (lacking the attribute), (2) a coder who codes all texts as 1 (containing the attribute), (3) a coder who codes randomly, and (4) a coder who codes all texts randomly, but with the same overall distribution as GPT-3's predictions. We also consider the ICC values when comparing GPT-3's codes to the average of the human coders (rather than individual coders separately).
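The sketch below illustrates how such simulated baselines can be constructed and compared; the 0/1 arrays and the hand-rolled one-way ICC1k function are simplified toy stand-ins, not our production analysis code.

import numpy as np

rng = np.random.default_rng(0)
# Toy 0/1 attribute codes; in practice these are the per-text codes for one attribute.
human_codes = [np.array([1, 0, 1, 1, 0, 1]),
               np.array([1, 0, 0, 1, 0, 1]),
               np.array([1, 1, 1, 1, 0, 0])]
gpt3_codes = np.array([1, 0, 1, 1, 0, 1])
n_texts = len(gpt3_codes)

sim_all_zero = np.zeros(n_texts, dtype=int)                # codes every text as 0
sim_all_one = np.ones(n_texts, dtype=int)                  # codes every text as 1
sim_uniform = rng.integers(0, 2, n_texts)                  # fully random coder
sim_matched = rng.binomial(1, gpt3_codes.mean(), n_texts)  # random, matching GPT-3's marginal

def icc1k(code_matrix):
    # One-way random-effects, average-rater ICC; code_matrix is (n_coders, n_texts).
    k, n = code_matrix.shape
    grand = code_matrix.mean()
    ms_between = k * ((code_matrix.mean(axis=0) - grand) ** 2).sum() / (n - 1)
    ms_within = ((code_matrix - code_matrix.mean(axis=0)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / ms_between

print(icc1k(np.vstack(human_codes)))                    # humans only
print(icc1k(np.vstack(human_codes + [gpt3_codes])))     # humans + GPT-3
print(icc1k(np.vstack(human_codes + [sim_matched])))    # humans + one simulated coder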
The statistics in Figure <ref> suggest that adding GPT-3 as a coder adds a great deal to reliability for two measures (positivity, groups), slightly increases reliability of the coding for two others, (extremity, issues), and reduces reliability in one (traits). Notably, this last area is where human coders correlated the least with each other (correlations between human coders on this domain ranging from 0.07 to 0.08) and may represent a fundamentally challenging task.
There is also a stark difference between adding GPT-3 and adding each of the simulated coders. We conclude that the boost in ICC from GPT-3 is not due to simply adding another coder. Furthermore, since adding GPT-3's outputs to the human outputs generally either increases or maintains ICC across each attribute, we conclude that GPT-3 achieves human or better performance at this task. Importantly, achieving this level of performance required neither coding a large-scale dataset (on the order of tens of thousands or more) nor a large, labeled set of training data for the language model.
§.§ Comparative Agendas Project (CAP)
For a different application of GPT-3 as a coder, we turn to the Comparative Agendas Project (CAP) system of coding. CAP provides a coherent framework for documenting media and government attention to various policy issues in a comprehensive set of policy domains
<cit.>. CAP datasets aim to be comprehensive, transparent, and replicable <cit.>, with many housed at the CAP website (www.comparativeagendas.net). More than 200 scholars have used CAP to test a vast range of empirical political science theories across more than a dozen countries <cit.>.
The CAP master codebook moves beyond the simple coding of the PP data, spanning at least 21 major categories (with others added for some specific applications). In order to succeed here, GPT-3 must produce a high probability for one of a large, unordered, pre-specified set of tokens that corresponds to the specific content of the input data.
Prior efforts to automate coding in the CAP framework have met limited success <cit.>. Sebok and Kacsuk <cit.> achieve an 80%+ F1 score on average across categories, but only after culling over 40% of their dataset due to difficulty of classification. We, on the other hand, report scores for full coverage of the dataset. Reported performance for other approaches is substantially lower (accuracies near or below 50%) for dictionary methods, less efficient SMLs, corpora with less training data, or specific hard-to-code categories, an upper limit that our average accuracy exceeds. Again, the highest-performing outcomes are achieved by setting rejection thresholds (for ambiguous texts or cases where humans or models disagree) and either sacrificing coverage or targeting human coders to uncertain cases <cit.>. We achieve our results without dropping cases, using multiple models, relying on human disambiguation of difficult cases, or requiring extensive labeled training data.
To account for class imbalances and differences in baseline probabilities of different tokens, we normalize the probability distributions in a manner similar to <cit.>. We estimate GPT-3's bias towards a category as the total weight given to each category over a balanced validation set, divide each category probability by GPT-3's bias towards it, and normalize to sum to 1. We found that this produced modest accuracy boosts of 4-5%. If a small validation set is available, we recommend this calibration technique; however, results were qualitatively the same without this calibration.
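A sketch of this calibration step is shown below; the arrays are toy examples, and the implementation details may differ slightly from the referenced calibration procedure.

import numpy as np

def calibrate(probs, val_probs):
    # probs: (n_texts, n_categories) category probabilities from the model.
    # val_probs: probabilities the model assigned on a category-balanced validation set.
    bias = val_probs.sum(axis=0)       # model's overall tilt toward each category
    adjusted = probs / bias            # down-weight over-predicted categories
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# toy example with 3 categories; the model over-predicts category 0
val_probs = np.array([[0.6, 0.3, 0.1],
                      [0.5, 0.1, 0.4],
                      [0.4, 0.5, 0.1]])
raw = np.array([[0.45, 0.35, 0.20]])
print(calibrate(raw, val_probs).round(3), calibrate(raw, val_probs).argmax())
# the calibrated prediction shifts from category 0 to category 1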
We consider two data sources that have previously been coded using the CAP framework - coding of U.S. Congressional hearing summaries and the New York Times front page. We conducted our coding with GPT-3 separately for each of these applications.
§.§.§ CAP: Congressional Hearing Summaries (Congress)
The Congressional Hearing corpus contains the Congressional Information Service summary of each U.S. Congressional hearing from 1946 to 2010. These summaries were read by human coders and assigned to CAP classifications. We hired and trained three human coders for this application, providing them with the same instructions outlined in the CAP codebook. This allows us to examine how different human coders and GPT-3 compare to one another (which is not possible with the original data, given that it lacks scores from multiple coders). We gave GPT-3 the full summary text, making the coding task highly comparable between the humans and GPT-3. All results are reported for n=326 texts, which constitutes 16 texts for each category minus 10 dropped for incompleteness in the human codes. We used a random subset of the dataset of over 10,000 texts for this application.
Figure <ref> presents our comparison of GPT-3's and the humans' codes. Both our intercoder agreement metrics tell the same story, and imply a finding that holds across metrics: GPT-3 correlates with each human just as well as or better than the humans correlate with each other. Note that the highest joint agreement (.63) and highest Fleiss' kappa (.61) both occur between GPT-3 and Human 2.
Despite there being no real ground truth for this task, we visualize "accuracy" statistics based on the original dataset's single coder as provided by CAP (Figure <ref>). The absence of a clear ground truth is underscored by the substantial disagreement among the human coders, as the figure makes clear. We see the accuracy for each coder, with categories sorted in order of GPT-3's accuracy. Interestingly, GPT-3 tends to do better on the categories that humans do better on, and worse on the categories where humans struggle. Overall, the accuracies were 60% for GPT-3, compared to 63%, 66%, and 55% for the three human coders respectively.
The high joint agreement and Fleiss' kappa between GPT-3 and the human coders, as well as the similar accuracies across categories, demonstrate GPT-3 performance on-par with humans on this dataset. Given the efficiency gains from using GPT-3, such as lower costs in training coders and scalability to a large number of texts, we suggest that this gives additional evidence in favor of the usefulness of LMs as coders.
§.§.§ CAP: New York Times Front Page Dataset (NYT)
The second CAP dataset we use is the New York Times Front Page Dataset, generated and contributed by Amber Boydstun <cit.>. The dataset includes 31034 front page New York Times headlines from 1996 - 2006, along with the policy category label assigned by trained human coders. The categories are adapted for media use, and so include 28 primary classification categories. For this application, we randomly sampled 20 texts from each of the 28 categories to be coded by four human coders and GPT-3. All results are reported for the correspondent set of n=560 texts.
The original human coders were instructed to read the headline and the first three paragraphs of the article. In our work, GPT-3 is only provided the headline, because the full article text is not available in the public data. To control for this difference in available information, we also hired four human coders to complete a classification task identical to GPT-3's, considering only the article headlines.
Since the structure of the NYT data is the same as the Congress data, we use the same kind of analyses. For both joint agreement and Fleiss' kappa (Figure <ref>), GPT-3 agrees with the humans about as much as they agree with each other. GPT-3's total accuracy was 55%, compared to 57%, 59%, 51%, and 45% for the four humans respectively. We also notice a strong relationship between GPT-3's accuracy and the humans' accuracy per category (Figure <ref>). Unlike Congress, however, there are 3 categories for which the humans all perform better than GPT-3: "International Affairs and Foreign Aid," "Government Operations," and "Death Notices." On the other hand, GPT-3 performs better than humans at some other categories: "Environment," "Health," and "Labor." Overall, these results again demonstrate that GPT-3 generally achieves on-par performance with humans.
§.§ The Guardian Populism (TGP)
For our final application, we consider how GPT-3 codes a multifaceted concept - populism. While disagreement exists about the meaning of this term, many scholars have gravitated towards a definition that populism is a discourse that describes politics as a struggle between the virtuous will of the common people and some evil, conspiring elite <cit.>.[This approach is sometimes called the "ideational" approach to populism] Coding for populism requires a process of marking the presence of a reference to the common people and an evil elite. As such, existing studies have primarily relied on extensively trained human coders that are instructed on how to holistically code an entire text, examining it for references of both of these components (for an example of such a coding process, see <cit.>).
Here we draw on a large dataset of short statements coded for populism. In the Fall of 2018, The Guardian created a series of articles on populism. At the end of one article, readers were invited to participate in a related survey on populism - over 20,000 individuals from more than 100 countries completed this survey. One question on this study asked respondents to discuss who or what was responsible for a pressing political problem in their country; two intensively trained human coders evaluated 4,000 of these texts and indicated if they did or did not contain populism. The process of training these coders involved initial instruction on a set of unrelated texts, repeated sessions to correct mistakes and clarify the coding process, and a review of the human codes <cit.>. Unlike the preceding studies, then, this application involves comparisons to highly trained human coders.
These data also allow for a comparison to SML methods, as about 16,000 texts were not coded by the human coders. As discussed below, we employ a SVC method to code the full set of texts and compare the performance of this technique to coding by GPT-3. We therefore compare the coding produced by GPT-3 on the set of human coded texts and in comparison to the SML approach. In each case, the coders (human or otherwise) generated a code of 1 when the text contained a populist statement and 0 when it did not. To be regarded as populist, the text needed to contain both a reference to the virtuous or good people and some kind of malicious elite group. [For more details on the human coding process, see other work explaining the codebook in more detail, such as <cit.> and <cit.>]
We begin by comparing GPT-3's coding to the two human coders. As before, we calculated ICC scores to measure agreement between the coders. In contrast to the Pigeonholing Partisans data, the same two coders and GPT-3 coded all of the texts. We therefore use ICC3k, which is designed for these kinds of comparisons <cit.>. For these comparisons, we had GPT-3 code a random sample of 1,300 of the 4,000 texts coded by humans.
[ADD FIGURE HERE]
Figure [FILL IN] shows the ICC statistics with GPT-3, the human coders, and the same types of simulated coders shown in Section <ref>. With these calculations, we find that GPT-3 performs well, although not quite as well as a thoroughly trained coder. The ICC statistic for the two human coders was 0.81, indicating high levels of agreement. Adding GPT-3 as a coder reduces this somewhat to 0.77, but this still indicates good agreement between the human coders and GPT-3. In contrast, adding one of the simulated coders dramatically reduces the ICC statistics. We take this as evidence that GPT-3 creates codes that are generally comparable to highly-trained human coders, with far less expense and training.
To compare GPT-3's performance to a supervised baseline, we fit a bag-of-words SML model on the populism data, using 3000 instances for training and 1000 instances for validation at a time. With this approach, the SML coding matched the human populism codes with an accuracy of 86 percent. Meanwhile, with only 4 coded examples, GPT-3 matched the human populism codes 79 percent of the time. While the SML baseline outperforms GPT-3 by about 7 percentage points, it does so at the cost of 3000 labeled examples. Given the drastically lower costs of coding with GPT-3 - in this case, avoiding the need to hire, train, and supervise coders to classify 4,000 texts - we again see this as evidence of the value of GPT-3 as a coding tool for the social sciences.
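For readers interested in reproducing this kind of baseline, the following is a minimal sketch of a bag-of-words SVC classifier with scikit-learn; the toy texts, labels, and the exact vectorizer and classifier settings are placeholders rather than the precise configuration used here.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins; in the real application texts/labels hold the ~4,000 human-coded
# responses (1 = populist, 0 = not), split into 3000 training and 1000 validation cases.
texts = ["the corrupt elite ignore ordinary people",
         "parliament passed a new budget this week",
         "bankers conspire against the common man",
         "the weather has been unusually cold"]
labels = [1, 0, 1, 0]

clf = make_pipeline(CountVectorizer(), LinearSVC())
clf.fit(texts[:3], labels[:3])
print(accuracy_score(labels[3:], clf.predict(texts[3:])))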
§ ETHICS AND BIAS
Our results suggest that GPT-3 can automate specific coding tasks comparably to human coders and SML coding methods. However, much work remains to bring this possibility to full fruition. For example, LMs reflect and even amplify pathological human biases contained in their training data <cit.>, raising concerns about their use for coding. Much work has aimed to quantify and reduce this bias <cit.>. However, while LMs exhibit bias, it is a known, invariant, and quantifiable property, whereas individual humans' biases are typically unknowable and far more difficult to quantify. We submit that the ability to recognize and actively compensate for the coder's probable biases is more important than the magnitude of the biases themselves. Conversely, if a LM can be conditioned or fine-tuned into holding specific biases rather than others, then it could emulate specific heterogeneous groups of coders for a richer, more diverse, and representative coding than what we present in this paper.
In that sense, we suggest that bias in coders is an omnipresent problem in coding for the social sciences. Here again, LMs provide a way to evaluate and account for those problems. We encourage other researchers interested in and employing LMs in their coding to use this tool to improve the accuracy and inclusivity of their coding and not simply their efficiency.
§ CONCLUSION
With four dramatically different sources of data, we have demonstrated that LMs can be used to code social science datasets more efficiently and as accurately as existing human or SML techniques. Fine-grained analysis shows that GPT-3 can match the performance of human coders on average across small and large datasets; with both ordinal and categorical codes; and on tasks of varying complexity. In some cases, it even outperforms humans in increasing intercoder agreement scores, often with no more than 3 exemplars.
We suggest that these results indicate the promise of LMs (and other tools like them) for research in the social sciences. Our analyses are a first step in this direction, but tools like GPT-3 offer low-cost ways to process and evaluate large text corpora from various sources. They also allow researchers to perform these tasks while still using their theoretical and substantive knowledge of the topic at hand to guide the machine learning tools. As such, we view this as a productive synergy of human and computer components to generate an outcome more accurate and efficient than either element on its own. Given the turn of the social sciences towards promising new terrains of text and other complex data, LMs and other related tools offer great promise for nearly every domain of the social sciences.
§ APPENDIX
§.§ Prompt Engineering
One important part of this project is the way that we provide information or context to GPT-3 to help it learn the coding process. As noted in the text, our overarching goal was to make this instruction as minimal and flexible as possible to evaluate GPT-3's potential without dramatic changes to the LM. In doing so, we learned a number of important lessons about giving GPT-3 information about the coding scheme and process. In this section, we seek to explain our prompt engineering protocol so that our results can be replicable and generalizable to other datasets and domains. We include both decisions made without conducting any experiments and those made by conducting experiments.
Some elements of prompt engineering seem to matter a great deal, and some seem to matter very little. Of all the sections of this paper, we spent the most time on this one and ran the most experiments to inform it. Despite this, and despite the fact that slight changes to the formatting of individual prompts cause significant changes to the probability distribution over tokens, we found that in the aggregate, prompt engineering tends not to make much of a difference in GPT-3's performance as a coder.
In this process, one has to be mindful of where the prompt ends and what next token is being modeled. Since generative language models sample one token at a time, we needed to be able to sample a unique first token (usually, a unique first word) for each category we attempt to model. For example, "very positive" and "very negative" both start with the token "very," so it would be impossible for us to compare the two categories with a single token sample. Fortunately, all of our categories started with unique first tokens, but this will not be true for all future applications of LMs as coders.
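A simple check of this constraint can be automated; the sketch below uses the r50k_base encoding (the tokenizer family used by the original GPT-3 models) and an illustrative category list rather than any specific codebook.

import tiktoken

enc = tiktoken.get_encoding("r50k_base")
categories = ["Health", "Labor", "Energy", "Environment", "Education"]

first_tokens = {}
for cat in categories:
    tok = enc.encode(" " + cat)[0]   # the leading space matters for BPE tokenization
    first_tokens.setdefault(tok, []).append(cat)

clashes = {t: cats for t, cats in first_tokens.items() if len(cats) > 1}
print("clashing first tokens:", clashes or "none")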
Another choice impacting results was the presentation of categories in the question format of the PP data. Specifically, GPT-3 performed significantly worse when asked to respond to a question with the tokens “yes” or “no” than when the choice was between substantive alternatives, such as “extreme” vs “moderate” or “positive” vs. “negative”. For the other three attributes, we found that restating the objective after the “yes” or “no” (e.g., “Yes, mentions personality or character traits”) substantially helped. These were the only prompt variations attempted for the PP dataset.
Other elements seemed to have minimal impact, like the number and type of exemplars. While we know that more labeled training data significantly improves SML performance <cit.>, it was unclear ahead of time whether providing more labeled exemplars to GPT-3 would achieve the same. In theory, more exemplars could more firmly teach the model the format of the task, and every marginal quality exemplar could help the model refine its understanding of the distributions of categories that the examples belong to.
As shown in Figure A.<ref>, we find that one exemplar performs much better than none, but there is little gain in accuracy achieved by providing more than 2 or 3 exemplars. We also conducted extensive experiments testing different classes of exemplars (more or less difficult to classify, in the spirit of active learning); this also seemed not to matter (See Appendix <ref> for more details).
We also tried many variations on the prompt format, including: surrounding categories in quotes; using slashes, pipes, and other delimiters to separate exemplar headlines from their respective categories; providing lists of example headlines for each category in parentheses right next to the category; new lines in specific places making boundaries between exemplars clearer; and other general rephrasing. None of these changes resulted in a marginal accuracy less than 50% or greater than 57%. This demonstrates a relative stability of the information retrieval process, allaying some concerns that minor changes in wording or punctuation will radically alter coding accuracy.
For all of our final prompts used, please refer the following section.
§.§ Prompts For Each Task
§.§.§ Pigeonholing Partisans
* Positivity:
Are the following descriptions of PARTY positive or negative?
-agreeable, reasonable, understanding, cooperative: Positive
-angry, bigoted, racist, homophobic: Negative
* Groups:
Do the following descriptions of PARTY mention social groups?
-Christian, privileged, young, white: Yes, mentions social groups.
-apathetic, agreeable, pro-environment, political: No, doesn't mention social groups.
* Traits:
Do the following descriptions of PARTY mention personality or character traits?
-accepting, tolerant, intellectual, charitable: Yes, mentions personality or character traits.
-black, young, female, poor: No, doesn't mention personality or character traits.
* Extremity:
Are the following descriptions of PARTY extreme or moderate?
-angry, racist, close-minded, homophobic: Extreme
-people, hopeful, educated, agreeable: Moderate
* Issues:
Do the following descriptions of PARTY include government or policy issues?
-aging, religious, accepting, patriotic: No, doesn't include government or policy issues.
-abortion, medical marijuana, gun control, anti-sexism: Yes, includes government or policy issues.
§.§.§ CAP
* Congressional Hearings:
Using only the following categories
"""
Macroeconomics
Civil Rights
Health
Agriculture
Labor
Education
Environment
Energy
Immigration
Transportation
Law and Crime
Social Welfare
Housing
Domestic Commerce
Defense
Technology
Foreign Trade
International Affairs
Government Operations
Public Lands
Culture
"""
Assign the following congressional hearing summaries to one of the categories:
Extend defense production act provisions through 1970. -> Defense
FY90-91 authorization of rural housing programs. -> Housing
Railroad deregulation. -> Transportation
To consider Federal Reserve Board regulations and monetary policies after February 2016 report on monetary policy. ->'
* New York Times Headlines
Using only the following categories
"""
Macroeconomics
Civil Rights, Minority Issues, and Civil Liberties
Health
Agriculture
Labor
Education
Environment
Energy
Immigration
Transportation
Law, Crime, and Family Issues
Social Welfare
Community Development and Housing Issues
Banking, Finance, and Domestic Commerce
Defense
Space, Science, Technology and Communications
Foreign Trade
International Affairs and Foreign Aid
Government Operations
Public Lands and Water Management
State and Local Government Administration
Weather and Natural Disasters
Fires
Arts and Entertainment
Sports and Recreation
Death Notices
Churches and Religion
Other, Miscellaneous, and Human Interest
"""
Assign the following headlines to one of the categories:
IRAN TURNS DOWN AMERICAN OFFER OF RELIEF MISSION -> International Affairs and Foreign Aid
In Final Twist, Ill Pavarotti Falls Silent for Met Finale -> Arts and Entertainment
In Times Sq., a Dry Run for New Year's 2000 -> Arts and Entertainment
House Panel Votes Tax Cuts, But Fight Has Barely Begun ->'
§.§ Exemplar Types Experiments
We also explored whether some exemplars were better or worse at “teaching” the categories to the model. We considered that for a given category, an instance could be a better or worse exemplar. We might define this by a quantity we'll call its margin: the difference between (1) the probability the model assigns to the correct category and (2) the highest probability of the probabilities for all the wrong categories. Thus, “prototypical" exemplars would have high positive margin (model guesses right), “ambiguous" exemplars would have margins with very low absolute values (model torn between multiple categories), and “tricky" exemplars would have margins with very high negative values (model guesses wrong).
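A minimal sketch of this margin computation and the resulting exemplar typing is given below (the 0.05 threshold for labeling an exemplar "ambiguous" is illustrative, not the cutoff used in our experiments).

import numpy as np

def margin(probs: np.ndarray, true_idx: int) -> float:
    # probs: normalized category probabilities for one text; true_idx: correct category
    wrong = np.delete(probs, true_idx)
    return float(probs[true_idx] - wrong.max())

def exemplar_type(m: float, eps: float = 0.05) -> str:
    if abs(m) < eps:
        return "ambiguous"      # model torn between multiple categories
    return "prototypical" if m > 0 else "tricky"

probs = np.array([0.10, 0.62, 0.20, 0.08])   # toy distribution over 4 categories
m = margin(probs, true_idx=1)
print(m, exemplar_type(m))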
In theory, prototypical exemplars could teach the model about the proper distribution of texts belonging to a category, ambiguous exemplars could teach the model about the boundaries between the distributions of each category, and tricky exemplars could correct the model's prior on categories by flagging common mistakes made in coding texts from that category's distribution.
To answer this question empirically, we first randomly sample 90 candidate exemplars from each category. We then code each with the model given a set of 4 exemplars sampled randomly once and then held constant specifically for this task. Then we sort them by their margin and construct one set of each: prototypical, ambiguous, and tricky exemplars. Finally, we perform 5 trials where we classify 4 instances from each category using an increasing number of these sets of exemplars and measure performance. The results, in Figure A.<ref>, demonstrate no discernible signal as to which kind of exemplar is best to present to the model in the context window. This is one piece of evidence that this dimension - the prototypicality vs. ambiguity vs. trickiness of exemplars - is not determinative of the model's performance on a coding task, even though this dimension is very important for active learning.
|
http://arxiv.org/abs/2306.06164v1
|
20230609180001
|
The Disturbed and Globular Cluster-Rich Ultra-diffuse Galaxy UGC 9050-Dw1
|
[
"Catherine E. Fielder",
"Michael G. Jones",
"David J. Sand",
"Paul Bennet",
"Denija Crnojevic",
"Ananthan Karunakaran",
"Burcin Mutlu-Pakdil",
"Kristine Spekkens"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
Catherine Fielder
[email protected]
Catherine Fielder (ORCID: 0000-0001-8245-779X)
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Rm. N204, Tucson, AZ 85721-0065, USA
Michael G. Jones (ORCID: 0000-0002-5434-4904)
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Rm. N204, Tucson, AZ 85721-0065, USA
David J. Sand (ORCID: 0000-0003-4102-380X)
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Rm. N204, Tucson, AZ 85721-0065, USA
Paul Bennet (ORCID: 0000-0001-8354-7279)
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
Denija Crnojević (ORCID: 0000-0002-1763-4128)
Department of Physics and Astronomy, University of Tampa, 401 West Kennedy Boulevard, Tampa, FL 33606, USA
Ananthan Karunakaran (ORCID: 0000-0001-8855-3635)
Instituto de Astrofísica de Andalucía (CSIC), Glorieta de la Astronomía, 18008 Granada, Spain
Burçin Mutlu-Pakdil (ORCID: 0000-0001-9649-4815)
Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755, USA
Kristine Spekkens (ORCID: 0000-0002-0956-7949)
Department of Physics and Space Science, Royal Military College of Canada P.O. Box 17000, Station Forces Kingston, ON K7K 7B4, Canada
Department of Physics, Engineering Physics and Astronomy, Queen’s University, Kingston, ON K7L 3N6, Canada
We investigate the ultra-diffuse galaxy (UDG) UGC 9050-Dw1, which was selected because of its disturbed morphology as part of a larger sample of UDGs that display evidence for significant interactions. We use the Hubble Space Telescope's Advanced Camera for Surveys to identify globular clusters (GCs) associated with UGC 9050-Dw1, and the Jansky Very Large Array to measure its HI content. UGC 9050-Dw1, a neighbor to the low surface brightness spiral UGC 9050, exhibits a unique UV-bright central “clump” with clearly associated HI gas and an extended stellar tidal plume to the north. We identify 52^+4_-6 GCs, implying a specific frequency S_N = 122_-24^+30, one of the highest reported for a UDG of this luminosity. Additionally, ∼ 20% of the total light of the galaxy is contributed by GCs.
Nearly uniform GC colors suggest they were formed during a single intense episode of star formation. We propose that UGC 9050-Dw1 formed via a rare dwarf merger event where induced, clumpy star formation led to its current observed properties.
§ INTRODUCTION
Ultra diffuse galaxies (UDGs) have garnered a lot of interest since the discovery of large populations of extreme low surface brightness (LSB) galaxies in the Coma <cit.>, Virgo <cit.>, Perseus <cit.>, and Fornax <cit.> clusters. A flurry of additional studies and searches ensued, in an attempt to understand these large (half light radii >1.5 kpc), LSB objects (central g-band surface brightness >24 mag arcsec^-2). UDGs are remarkable, with stellar masses similar to those expected for dwarf galaxies, yet with physical sizes comparable to the Milky Way (, see discussion on size in ). In addition, UDGs have been found in abundance across all environments in addition to clusters, including
groups <cit.>, and in the field <cit.>. The ubiquity of UDGs across all environments, and their extreme nature, have challenged galaxy formation and evolution models.
A number of potential formation mechanisms for UDGs have been proposed, and it is likely that multiple formation pathways are necessary <cit.>. Some explanations for the extended, low surface brightness nature of UDGs include strong stellar feedback <cit.>, high-spin dark matter (DM) halos <cit.>, early mergers <cit.>, or failed galaxies that reside in exceptionally massive DM halos <cit.>. Others are likely `puffed up' dwarfs <cit.>, or the result of gas rich galaxy collisions yielding tidal dwarf galaxies (TDGs; e.g., ). Some authors also suggest that the observed UDG population may be the result of a combination of several of these formation scenarios <cit.>.
One of the most important clues for understanding the origins of UDGs comes from their globular cluster (GC) population. Old GCs trace the early epochs of galaxy assembly <cit.>, making their abundance an excellent discriminator in constraining UDG origins. Some studies have found UDGs with much more abundant GC populations than would be expected for a dwarf-mass dark matter halo, while others find the GC systems of many UDGs are consistent with dwarf-mass halos <cit.>. Additionally, GC abundance has been found to be strongly correlated with the total system mass - particularly DM halo mass <cit.> - which is also true for the UDG regime <cit.> allowing for DM halo mass constraints of UDGs photometrically. Spectroscopic studies of GCs in UDGs can determine velocity gradients to further constrain evolutionary histories of UDGs via signals of tidal disruption <cit.>. Consequently GC abundance has often been used as a means for discriminating between potential formation scenarios when comparing sub-populations of UDGs that otherwise are quite similar in luminosity and stellar mass.
For UDGs with evidence of ongoing star formation, neutral gas (HI) observations provide additional, complementary information. HI is typically one of the most loosely bound baryonic components of a galaxy and therefore is a remarkably sensitive tracer of tidal interactions. Thus, the HI morphology of a UDG, or even the absence of neutral gas <cit.>, can be used to narrow down potential formation scenarios. HI line widths and velocity maps also provide insight into the bulk kinematics of a galaxy, and have been used to argue that some HI-rich UDGs might be DM-poor <cit.>.
In this work we focus on a UDG identified by a semi-automated search <cit.> of the Canada–France–Hawaii Telescope Legacy Survey (CFHTLS): UGC 9050-Dw1. This UDG plausibly resides in a group environment with the low surface brightness spiral UGC 9050 (see e.g., ), and is detected in the UV. Only a handful of such group UDGs with indication of recent star formation have been studied <cit.>. In the CFHT data, the UDG has an evident tail-like feature and the central region exhibits an unusual morphology, indicative of an interaction. We use both Hubble Space Telescope (HST) and Jansky Very Large Array (VLA) observations to identify GC candidates and measure the morphology of UGC 9050-Dw1 in order to distinguish between the variety of formation scenarios befitting a UDG with evidence for tidal disturbances.
This paper is outlined in the following manner. <ref> provides an overview of the VLA and HST data, in addition to other ancillary data used in this work. In <ref> we detail the derived properties of UGC 9050-Dw1. <ref> describes the criteria for selecting globular cluster candidates and <ref> details the GC abundance, properties, and inferred halo mass. In <ref> we discuss possible formation mechanisms for UGC 9050-Dw1. Finally, <ref> provides a summary of our findings and conclusions. <ref> provides supplementary tables and figures, including a full table of GC candidates. All photometry used in this work is in the Vega magnitude system unless otherwise stated.
§ OBSERVATIONAL DATA
UGC 9050-Dw1 was identified in ground-based CFHTLS imaging by a semi-automated search for diffuse dwarfs (initial results are presented in ). This now completed search covers ∼150 deg^2, within which hundreds of diffuse dwarf and UDG candidates have been identified and several UDGs have been confirmed. <cit.> initially reported two UDGs as part of this automated search, which were subsequently followed up with HST and VLA observations in <cit.>. These two UDGs were selected due to their apparent association with tidal streams, part of a larger case study for constraining formation mechanisms for UDGs with plausible evidence for interactions with larger galaxies. These UDGs specifically had little evidence of star formation or neutral gas. UGC 9050-Dw1 was selected as an extension to this study, focusing on UDGs with signs of interaction and evidence of recent star formation. For the case of UGC 9050-Dw1 this is apparent because of its blue color and GALEX <cit.> NUV emission within the core.
UGC 9050 is UGC 9050-Dw1's nearest apparent neighbor and presumed host galaxy. The two are separated by less than 50 km/s in radial velocity (see <ref>). We adopt the distance of UGC 9050 as measured from the Virgo Infall Hubble flow <cit.> (H_0=67.8 km/sec/Mpc) of 35.2±2.5 Mpc, which we also adopt for UGC 9050-Dw1 given their extremely similar recessional velocity. At the adopted distance the two are separated by 69 kpc in projection.
UGC 9050-Dw1 and UGC 9050 also both lie close in projection (∼ 300 kpc) and in redshift space (Δ v_helio<100 km/s) to the NGC 5480 and 5481 pair, indicating that they may also be fringe members of the NGC 5481 group (D = 35±2.5 Mpc).
§.§ HST Observations
UGC 9050-Dw1 was observed in September of 2022 under HST program ID 16890 <cit.>. This target was observed with the Advanced Camera for Surveys (ACS) with the Wide Field Channel (WFC). Observations were completed in the F555W and F814W filters, with 2406 s and 2439 s exposure times, respectively. Additionally, Wide Field Camera 3 (WFC3) images were taken in parallel to use as a nearby reference background field.
<ref> shows the RGB color composite constructed with the stacked F555W and F814W images. A blue, higher surface brightness clump at the center of the UDG is apparent, for which we provide a high-contrast zoom-in (see <ref> for further discussion of sources marked in the image).
The Dolphot v2.0 <cit.> software was used with the standard ACS and WFC3 parameters to align the individual HST exposures and then generate a combined point source catalog for each field. V- and I-band magnitudes for sources determined by Dolphot are derived via the HST to Johnsons-Cousins magnitude conversion factors presented in <cit.>. The V- and I-band magnitude quantities are then corrected for Galactic extinction using the NASA/IPAC[<https://irsa.ipac.caltech.edu/applications/DUST/>] online tool, with values derived from the <cit.> extinction coefficients. Magnitudes presented in this work are Milky Way extinction corrected unless otherwise indicated.
Our point source completeness limits are determined via artificial star tests. Using the tools provided by Dolphot, we place nearly 10,000 artificial stars into the ACS field of view. These artificial stars span a large color range from -1 to 2 in F555W-F814W (well beyond the range used to select GCs). We then measure the fraction of those stars that we recover as a function of apparent magnitude. We find we are 90% complete to m_F814W = 26.8 and 50% complete to m_F814W = 27.4.
§.§ VLA Observations
UGC 9050-Dw1 was observed in the VLA D-configuration during July 2022, as a part of project 22A-225 (PI: D. Sand). The total on-source integration time was approximately 4.3 h. The data were simultaneously recorded with two correlator configurations, one with a 4 MHz bandwidth (approximately centered on the radial velocity of UGC 9050) and a channel width of 3.91 kHz (∼0.8 km/s), and the other with a 32 MHz bandwidth and 62.5 kHz channels. The latter was in case the UDG was a foreground or background object, not at the same redshift as UGC 9050. For the remainder of this work we will only consider the former setup. The results of these observations are presented in the left panel of <ref> and further discussed in <ref>.
The data reduction relied on the pipeline[<https://github.com/AMIGA-IAA/hcg_hi_pipeline>] of <cit.>. We refer the reader to that work for a full description, but we describe it briefly here. The pipeline began with a combination of manual and automated flagging, then proceeded with gain and phase calibration using standard tasks. Overall, the data suffered from moderate radio frequency interference and about 20% of visibilities were flagged. Imaging used automated masking and a Briggs robust parameter of 0.5 as a compromise between resolution and sensitivity. The spectral resolution was also smoothed to 5 km/s. Multi-scale CLEANing was performed down to approximately 2.5σ_rms. The RMS noise in the final image cube is 0.9 mJy/beam and the synthesized beam size is 45.2”×48.5”.
§.§ Ancilliary Data
§.§.§ CFHTLS Observations
We also use data from the Wide portion of the CFHTLS. This is a survey that was conducted between 2003 and 2009 and covers 171 square degrees in the u, g, r, i & z bands. UGC 9050 and UGC 9050-Dw1 are found in the W3-1-3 and W3-2-3 fields, using the nomenclature presented in figure 4 of <cit.>. The exposure time for the g-band stacks used in this work was 2500s, with 2375s in r, and 6150s in i, all with a pixel scale of 0.186 arcsec per pixel. The fields were downloaded directly from the Canadian Astronomy Data Centre (CADC) and have been processed by the Terrapix 7 pipeline. The point spread functions (PSFs) for those image stacks were also downloaded from the CADC, which were used for measuring dwarf structural parameters. The construction and calibration of these utilized the MegaPipe data pipeline <cit.> and is described in detail by <cit.>. We provide a composite color image constructed from the g, r, and i-bands of the W3-1-3 field in the upper left panel of <ref>, and a high contrast image constructed from the g-band of the same field in the upper right panel of <ref> and the background of <ref>.
The CFHTLS images have the advantage of increased sensitivity to extended low surface brightness emission compared to HST ACS/WFC. The extended emission of UGC 9050-Dw1 also occupies a large fraction of the ACS field, which complicates background estimation. Therefore we supplement our HST data with the CFHTLS data to derive magnitudes, colors, surface brightness, and radius for UCG 9050-Dw1.
§.§.§ GALEX Observations
Data from GALEX <cit.> were used to measure the star formation rate (see <ref>) of UGC 9050-Dw1 and UGC 9050. Both targets were observed in NUV for ∼1600 s as part of the guest investigator program (GI5-028, PI: Balogh). This program did not include FUV observations and neither UGC 9050-Dw1 or UGC 9050 were observed as part of the GALEX all-sky imaging survey, therefore we are limited to NUV observations. We provide the GALEX image of UGC 9050-Dw1 in the bottom panel of <ref>.
§.§.§ Apertif Observations
We supplement our VLA observations with publicly available imaging from the Apertif (Aperture tile in focus) imaging survey on the Westerbork Synthesis Radio Telescope <cit.> (seen in the right panel of <ref>).
An advantage of the Apertif imaging is its finer spatial resolution which, at the declination of UGC 9050-Dw1, is approximately ∼15”×20”. These publicly available data[We obtained the cube from the ASTRON VO service: <https://vo.astron.nl/apertif_dr1/q/apertif_dr1_spectral_cubes/form>] have not yet been CLEANed or primary beam-corrected. We instead use these data as a qualitative check on the general morphology of UGC 9050-Dw1. We have compared the results from the dirty cube with upcoming cleaned and mosaiced Apertif data products (K. Hess, priv. comm.) and discuss this further in <ref>.
§ THE PHYSICAL PROPERTIES OF UGC 9050-DW1
The morphology of UGC 9050-Dw1 is elongated and clumpy, indicative of a disturbance. It appears to have a more distinct plume in the northern direction with less structured diffuse emission in the south and around the central clump. The derived surface brightness and size discussed in the following fall well within the standard definition of a UDG (half light radii >1.5 kpc; g-band surface brightness >24 mag arcsec^-2). In <ref> it is evident that the bright NUV emission of the very center of the UDG overlaps with the brightest core region in the UDG within the high contrast CFHTLS image. As seen in the HST image (<ref>) this core is clumpy and displays a somewhat distorted appearance, preferentially in the east-west direction towards the start of the extended emission. The Apertif data (<ref>) is distorted in the same east-west direction. The extended diffuse stellar component of UGC 9050-Dw1 is not unlike the tails studied in <cit.>, or those seen in dwarf galaxy mergers <cit.>. The overall morphology of UGC 9050-Dw1 points to a merger remnant or other strong interaction.
We note that it is unlikely that the central clump is a background galaxy, given the lack of clear indicative features, such as spiral arms or a redder bulge. In fact, in the radio data the HI is also centered on the clumpy central region of the UDG, and we find its velocity to be the same as that of the gas observed in UGC 9050.
§.§ HI Content
To produce the VLA HI moment zero map (left panel, <ref>) we used the source-finding software of <cit.> to create a source mask. We used the standard smooth and clip algorithm with no smoothing as well as Gaussian smoothing kernels approximately 0.5 and 1.0 times the beam size, plus boxcar spectral smoothing over 0 and 3 channels. The clip threshold was set to 3.5σ and we required a 95% reliability threshold. This returned just two objects, UGC 9050-Dw1 and UGC 9050 (the complementary moment zero map for UGC 9050 is shown in <ref>, <ref>). The integrated fluxes (corrected for the primary beam response) of each are 0.95 ± 0.03 and 3.02 ± 0.05 Jy km/s, which at 35.2 Mpc equate to HI masses of log M_HI/M_⊙ = 8.44±0.04 and 8.94±0.04, respectively, which are strikingly similar. The error represents a 10% uncertainty estimate for the absolute calibration, which is the dominant source of error other than the distance. The HI mass of UGC 9050-Dw1 is included in <ref> in addition to the optical and UV properties described in the following subsection. The documented heliocentric velocity of UGC 9050 is 2001±5 km/s. From the VLA data we derive a velocity of 1952.1 ± 0.3 km/s for UGC 9050-Dw1 via a Gaussian fit to the HI spectral line. This further supports the association of the two objects with each other.
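As a cross-check, the quoted masses follow from the standard optically thin HI mass relation; a short sketch using the flux and distance values quoted above is given below.

import numpy as np

def log_mhi(flux_jy_kms, dist_mpc):
    # Optically thin HI: M_HI [Msun] = 2.356e5 * D[Mpc]^2 * S[Jy km/s]
    return np.log10(2.356e5 * dist_mpc ** 2 * flux_jy_kms)

print(log_mhi(0.95, 35.2))  # ~8.44 (UGC 9050-Dw1)
print(log_mhi(3.02, 35.2))  # ~8.94 (UGC 9050)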
We also estimate the size of the HI distribution from the HI mass-size relation derived in <cit.>, which has been shown to be tightly correlated and to hold even for interacting galaxies. Assuming that the HI in UGC 9050-Dw1 is in a disk, the mass-size relation yields an HI disk diameter D_HI = 9.5 kpc. At the adopted distance of UGC 9050-Dw1 this corresponds to an angular size of ≈56” in diameter, which is comparable to the resolution of the D-configuration (48.5”); the UDG is essentially unresolved. Therefore, while the VLA HI morphology contains little evidence of disturbance in UGC 9050-Dw1, the lack of resolution makes this unconstraining. In the VLA data of UGC 9050, which is marginally resolved, there is a suggestion of a disturbance to the HI in the direction of UGC 9050-Dw1 (see <ref>). It is not certain that an interaction with the UDG is the cause, but it is plausible.
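A short sketch of this estimate is given below, using a commonly quoted form of the HI mass-size relation (assumed here to be log D_HI = 0.506 log M_HI - 3.293, with D_HI in kpc); the coefficients and the small-angle conversion are the only inputs beyond the values quoted above.

def hi_diameter_kpc(log_mhi_msun):
    # assumed HI mass-size relation: log D_HI = 0.506 log M_HI - 3.293 (D_HI in kpc)
    return 10 ** (0.506 * log_mhi_msun - 3.293)

d_hi_kpc = hi_diameter_kpc(8.44)                  # ~9.5 kpc
theta_arcsec = d_hi_kpc / (35.2 * 1e3) * 206265   # small-angle conversion at 35.2 Mpc
print(d_hi_kpc, theta_arcsec)                     # ~9.5 kpc, ~56 arcsec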
In the right panel of <ref>, we also include higher spatial resolution data from the first Apertif Data Release (DR1, ).
We follow a similar source-finding procedure to that described above and show the resulting moment zero map as contours on top of the same high contrast CFHTLS optical image of UGC 9050-Dw1. Since we do not clean or primary-beam correct these data (which is not feasible with the DR1 data), these contours do not have column density values as in the left panel. However, we do check the general morphology against upcoming cleaned Apertif imaging and find that it is consistent with the results presented here (K. Hess, priv. comm.). There is a clear concentration of HI near the central, UV-bright region of UGC 9050-Dw1 with hints of elongated morphology perpendicular to the northern diffuse extension, roughly in the direction of UGC 9050 (in the lower-right direction in the images). The deeper Apertif data that are forthcoming will provide more detail regarding the HI morphology.
§.§ Optical/UV Properties
<ref> includes the optical and UV properties of UGC 9050-Dw1, for which NUV, g, and r-band magnitudes are presented in the AB system, while V and I-band magnitudes remain in the Vega system. Similarly derived properties for UGC 9050 are presented in <ref>.
Due to the unique morphology of UGC 9050-Dw1, we split some of the derived properties between the “core”, the diffuse structure, the tail, and the entire object. Note that the core itself is at the border of the standard UDG definition in terms of surface brightness and size, and the entire galaxy falls well within the standard UDG definition.
We start with GALFIT <cit.> to model the inner core of the UDG and determine the properties of the core. The GALFIT determined core size corresponds to a half-light radius of 1.4 kpc or 8.3 arcsec.
However, GALFIT was unable to model the full surrounding extended emission, even after multiple techniques to boost the signal. Therefore we used standard aperture photometry to characterize the full galaxy. To determine the full aperture size of the UDG, we centered a circular aperture on the dwarf (determined by the center of the GALFIT model of the core region) and steadily increased the size of the aperture by 1 pixel (0.186 arcsec) until the additional flux was explainable as entirely background emission. With this method we find a circularized radius of 36.6 arcsec and characterized the uncertainty in a similar way to the GALFIT model of the core. Thus derived photometry of the diffuse structure is calculated within the 36.6 arcsec radius, excluding emission within the half-light radius of the core.
There is substantial diffuse emission beyond the 36.6 arcsec radius that is still associated with UGC 9050-Dw1. However, characterizing this emission in a robust way has not been possible, and thus we use a larger circularized radius for the remainder of this work. We opt to use 73.2 arcsec as our final circularized radius for the UDG, or 2× the radius at which the counts are approximately the background level, which fully encompasses all of the diffuse emission. We refer to this quantity simply as r_UDG.
Last, we quantify photometry within just the tail region, using an arbitrary ellipse and GALFIT, merely for comparison purposes since we are unable to determine this feature more robustly. We manually place an elliptical aperture centered on the approximate visible center of the tail (coordinates in <ref>). The ellipse has a semi-major axis of 25 arcsec (4.2 kpc), a semi-minor axis of 11 arcsec (1.9 kpc), and a position angle of 10 degrees to the east. GALFIT was then used to determine the color and magnitudes of the properties within the tail, using the arbitrary ellipse as limits. The errors presented are likely underestimates as they do not factor in the original determination of the spatial extent of the tail, which was somewhat subjective. Note that the placement of the ellipse for the tail means that some of what we refer to as the diffuse emission region is included within this ellipse and we do not separate out the two. Overall, we find that what we have selected as the `tail' is bluer and brighter than the remainder of the diffuse emission of UGC 9050-Dw1, although not as bright and blue as the core.
To quantify the uncertainties of the structural properties of UGC 9050-Dw1 in the CFHTLS images, we used the procedures from <cit.>. In brief, after the observational properties are derived we inject simulated UDGs with the same observed properties into the image to quantify uncertainties by repeating the fits on the simulated data. The standard deviation of the resulting set of fits is used to characterize the uncertainty of the observational properties of UGC 9050-Dw1.
We also provide the NUV magnitude obtained from the GALEX survey <cit.> and use the relation from <cit.> to derive the star formation rate. The NUV flux is determined with aperture photometry using an aperture equivalent to two half-light radii for the core model (16.6”). We find that any additional NUV flux outside the core region is entirely consistent with noise, indicating that all of the UV emission comes from the central “core” of the UDG (see <ref>, bottom panel). This was done by comparing the NUV flux from just the core to the NUV flux within the full UDG radius (73.2”), which were equivalent after accounting for noise.
While we do include a derived stellar mass in <ref> using the relations from Table 1 of <cit.>, this mass should be considered a guide and has significant uncertainty. Using the (V-I) color and M_V we derive a stellar mass-to-light ratio Γ_*=0.91 for the full UDG, Γ_*=0.24 for the core, Γ_*=1.76 for the total diffuse component, and Γ_*=0.47 for the tail alone.
§ SELECTION OF GLOBULAR CLUSTER CANDIDATES
The selection of globular cluster candidates (GCCs) follows that described in <cit.>, which we briefly summarize here. With the photometric analysis from Dolphot the sample is limited to sources classified as stars and having no flags in the photometry, with S/N > 5. In both HST filters sharpness is required to fall within the range of -0.3 to 0.3 and roundness is required to be less than 0.3. This aims to exclude extended or overly compact sources, and any that may be elongated. In either band we set a maximum limit on crowding to 0.5 mag, and require Dolphot magnitude uncertainties to be smaller than 0.3 mag. To select GCCs we also employ a color cut of 0.5 < V-I < 1.5, within which globular clusters are expected to lie <cit.>, which will largely eliminate blue star-forming clumps.
A cut in concentration index is employed in addition to the above criteria for GC candidate selection. Concentration index is determined by comparing flux in concentric circular apertures of 4 and 8 pixel diameters on the background subtracted F814W image (see similar approaches in e.g., ). We define this concentration index as C_4-8 = -2.5log_10(N_4pix/N_8pix) where N_4pix represents the sum of flux values in a 4 pixel aperture. We allow the concentration index parameter to span the range 0.2-0.8 mag, since we do not expect all of the GCs to be perfect point sources at HST resolution. This largely excludes any remaining background galaxies that exhibit discernible structure and other image artifacts.
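A minimal sketch of this concentration-index measurement using photutils is shown below; the synthetic Gaussian source and the aperture handling are illustrative rather than our exact photometry code.

import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def concentration_index(image, xy):
    # C_4-8 = -2.5 log10(N_4pix / N_8pix) on a background-subtracted F814W image,
    # using apertures of 4- and 8-pixel diameter (radii 2 and 4 pixels).
    apers = [CircularAperture(xy, r=2.0), CircularAperture(xy, r=4.0)]
    phot = aperture_photometry(image, apers)
    n4, n8 = phot["aperture_sum_0"][0], phot["aperture_sum_1"][0]
    return -2.5 * np.log10(n4 / n8)

# toy example: a synthetic point-like (Gaussian) source
yy, xx = np.mgrid[:51, :51]
img = np.exp(-((xx - 25) ** 2 + (yy - 25) ** 2) / (2 * 1.5 ** 2))
c48 = concentration_index(img, (25.0, 25.0))
print(c48, 0.2 < c48 < 0.8)   # falls inside the 0.2-0.8 selection window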
In an effort to maximize the number of GCCs while minimizing the number of contaminants, we employ a cut in magnitude in addition to the cuts above. Similar to the procedure of <cit.>, we assume a Gaussian dwarf elliptical globular cluster luminosity function (GCLF) of <cit.>, which peaks at M_I=-8.12 with σ_M_I=1.42 mag. At the distance of UGC 9050-Dw1, this corresponds to a peak of m_I = 24.61. We use this peak to serve as a brightness minimum, instead of our completeness limit, as a means to minimize any possible stellar contaminants. Likewise, we also employ a brightness maximum above which any point sources are assumed to be foreground stars, M_I > -12.38 (or 3σ from the mean of the GCLF). Thus in apparent magnitude all UGC 9050-Dw1 GCCs were selected within the range 20.35 < m_I < 24.61. With these cuts, we sample approximately half of the luminosity function by number. As a result, our final GCC counts will be completeness corrected by multiplying by 2.
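The magnitude window follows directly from the distance modulus; a short sketch reproducing the quoted limits (values rounded) is given below.

import numpy as np

def gclf_selection_limits(dist_mpc, peak_MI=-8.12, sigma=1.42, n_sigma_bright=3.0):
    # Apparent-magnitude window for GC candidates: from the GCLF peak (faint limit)
    # to n_sigma brighter than the peak (bright limit).
    dm = 5 * np.log10(dist_mpc * 1e6) - 5           # distance modulus
    faint = dm + peak_MI                            # m_I at the GCLF peak
    bright = dm + (peak_MI - n_sigma_bright * sigma)
    return bright, faint

print(gclf_selection_limits(35.2))   # ~ (20.35, 24.61)
# Counting only down to the peak samples half of a Gaussian GCLF by number,
# hence the factor-of-2 completeness correction applied to the GC counts.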
We include a color-magnitude diagram in <ref> of the acceptable Dolphot point sources and the GC color and magnitude selection box. Point sources that fall within the selection box and pass the concentration cut are circled in teal in <ref>. Point sources that fall within the narrow highlighted color range in orange in <ref> are subsequently circled in orange in <ref>. Sources plotted as red squares are those that also pass the concentration cut described above.
§ THE GC SYSTEM OF UGC 9050-DW1
§.§ Globular Cluster Abundance
After restricting the catalog of point-like sources to those with colors and concentration indices consistent with GCs, we ultimately consider GCs within 73.2” of the UDG center. In <ref> this would be all red square points in the left panel. This is the final assigned boundary of UGC 9050-Dw1 as derived in <ref> (see r in <ref>), which we will subsequently refer to as r_UDG. At the distance of UGC 9050-Dw1, this corresponds to a physical radius of ∼12.4 kpc. Within this aperture we count a total of 30 GCCs. These candidates are marked with cyan circles in <ref>, with the UDG radii marked by white dotted and dashed apertures (0.5× r_UDG and r_UDG).
Compared to the ACS/WFC field-of-view (202x202”), UGC 9050-Dw1 fills a substantial fraction. Therefore to estimate the contaminant count rate for false GCCs we utilize the parallel WFC3 field. Within this field we use the same basic cuts for point-like sources in addition to the magnitude, color, and concentration index cuts described in <ref> used to obtain the GCCs in the ACS field. In the WFC3 field a total of 7 sources pass these selection criteria. However, the WFC3 field-of-view (160”x160”) is smaller than the ACS field-of-view. To account for this we scale the number of the sources found in the parallel field relative to the ACS detector area, yielding a total of 11 expected GC contaminants in the full ACS area. Finally, this quantity is then scaled down to the aperture area we assign to the UDG (r_UDG), resulting in an estimated total of 4±2 contaminants rounded to the nearest whole number.
We do a second check of the foreground contaminant estimate with the TRILEGAL v1.6 simulation <cit.> for determining stellar distributions within the Milky Way. We run this simulation with the default settings and a Chabrier log-normal initial mass function (IMF; ). Extinction corrections are applied in the same manner as our HST observations (see <ref>). Instead of querying to just the ACS camera field-of-view we instead query a full square degree centered on the coordinates of UGC 9050-Dw1. The stars in this field are then down-sampled randomly 10,000 times to both the area of the ACS field and the WFC3 field to ensure our contamination estimates are consistent. After applying the color/magnitude cuts consistent with GCs we find an average of 6±2 contaminants in the ACS field and 4±2 contaminants in the WFC3 field. These numbers are smaller but consistent with the results obtained from the WFC3 parallel field described above. For a background count estimate with which we expect our final GC candidates to yield more `true' GCs we will proceed with foreground contaminant counts derived from the WFC3 field (4±2 in the UDG area).
We can also take a probabilistic approach to more accurately determine GC counts with Bayesian statistics, as in <cit.>. Our number counts are small enough that we are still operating in the regime where Poisson statistics dominate, so we can follow the calculation as described in Section 4.3 of <cit.>. They determine the final probability mass function for GC counts as
p(N_GC|O) = Γ(N_bg + N_GCC - N_GC + 1) / [ (A_bg + 1)^(N_bg + N_GCC - N_GC + 1) (N_GCC + N_GC) ],
where N_GC is the number of genuine globular clusters in the field, N_GCC is the number of GCCs in the field, N_bg is the number of background sources that pass as false GCCs, and A_bg is the area of the background field. Because we use the WFC3 field to determine background counts for UGC 9050-Dw1, A_bg is scaled such that it is in units of the ACS pixel size. We correct the final result for covering only half of the GCLF by multiplying by 2, as discussed in <ref>. With this approach we calculate the number of GCs for UGC 9050-Dw1 as N_GC=52^+4_-6, with a probability of N_GC=0 negligibly small. While this is the final estimate of the GC abundance of the system, our following analyses will utilize the 30 observed candidates (foreground contaminant subtracted to 26 where relevant). These quantities are documented in <ref> for clarity. The 30 identified GCCs in this work are documented in <ref>.
The GC counts for this system are plotted in <ref> along with a number of UDG samples and individual UDG studies. This literature sample includes a general sample of nearby galaxies <cit.>, Coma cluster UDGs <cit.>, Virgo cluster UDGs <cit.>, and Hydra I cluster UDGs <cit.>, and group UDGs compiled in <cit.>. We also compare to the tidally puffed-up UDGs studied in <cit.> and some well-studied UDGs in the literature: UDGs with monochromatic GC populations such as Dragonfly 2 (DF2) and Dragonfly 4 (DF4) in the NGC 1052 group with updated measurements from <cit.>, NGC 5846-UDG1/MATLAS19 in the NGC 5846 group <cit.>, and DGSAT 1 in a low density environment associated with the Pisces-Perseus super-cluster <cit.>. Magnitude errors for NGC 5846-UDG1 are not quoted in <cit.> so we utilize those provided in <cit.>. Last, we include UDGs with rich GC systems - updated measurements of Dragonfly 44 (DF44; a Coma UDG) recorded in <cit.>, Dragonfly 17 (DF17; a Coma UDG studied in ), and VCC 1287 (a Virgo UDG studied in ). Magnitude errors are not quoted for the Dragonfly objects so we adopt the g-band errors from <cit.> and assume they are the same for V-band. We convert the VCC 1287 u and g-band photometry documented in <cit.> to V using documented SDSS conversions [<http://classic.sdss.org/dr4/algorithms/sdssUBVRITransform.html>].
In contrast to even the most GC-rich UDGs in the literature and the tidally puffed-up UDG sample of <cit.>, UGC 9050-Dw1 contains an exceptional abundance of globular clusters. Considering objects just within 2σ of the estimated magnitude and GC abundance of UGC 9050-Dw1, only two Coma cluster UDGs and updated measurements of NGC 5846-UDG1 <cit.> compare.
§.§ Properties of the Globular Cluster Population
§.§.§ The Radial Profile
Here we consider the projected spatial distribution of the GC population of UGC 9050-Dw1. As is evident in <ref> there appears to be a relatively central concentration of globular clusters. To further quantify this, we count the number of GCs per area in annuli of increasing multiples of 0.1r_UDG from the center. These results are presented in <ref> in teal. We include Poisson errors on our GC counts within each annulus bin, and mark the expected background counts per unit area within r_UDG by a horizontal solid line with respective Poisson errors marked by a dashed-dotted line. By ∼ r_UDG the GC counts are consistent with the background level. While a slight majority of the GCs project within 0.5× r_UDG, the GC population extends to approximately the selected UDG radius. The gray shaded region marks the central 7.32” (1.2 kpc) of the UDG that contains the starry clump within our bin spacing. Because the center is very crowded it is more challenging to reliably detect GCs.
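For reference, the annulus bookkeeping behind <ref> reduces to counting candidates in radial bins and dividing by the annulus areas; a minimal sketch (assuming projected GC radii have already been measured) is:

```python
import numpy as np

def gc_surface_density(r_gc, r_udg, n_bins=10):
    """Projected GC surface density in annuli of width 0.1 * r_udg.

    r_gc : array of projected GC-candidate distances from the UDG center (arcsec)
    r_udg: adopted UDG radius (arcsec)
    """
    edges = np.linspace(0.0, r_udg, n_bins + 1)
    counts, _ = np.histogram(r_gc, bins=edges)
    areas = np.pi * (edges[1:]**2 - edges[:-1]**2)   # annulus areas in arcsec^2
    density = counts / areas
    err = np.sqrt(counts) / areas                    # Poisson errors
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers / r_udg, density, err
```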
The orange points in <ref> are for GCCs in UGC 9050-Dw1 if we extend the I-band magnitude cut discussed in <ref> from M_I < -8.12 down to our 90% completeness limit, which corresponds to M_I < -5.96 or 2 magnitudes fainter than the peak of the GCLF. For both cases we maintain the requirement that M_I > -12.38. Going down to fainter magnitudes we obtain 61 GCCs with a corresponding 7 contaminants in the UDG aperture (r_UDG) determined from the parallel WFC3 field. This relaxed cut may allow for more foreground contaminant stars, but we use this broader magnitude range simply as a consistency check. We note that with this extended cut, 7 GCCs fall within the gray shaded region which is well beyond the y-axis of <ref> (y = 0.04).
On average, the spatial distribution of GC populations is expected to be cored with an increasingly steep outer slope <cit.>, which is consistent with the general trend of our GC surface density distribution. The GC radial distribution for candidates down to the 90% completeness limit approximately follows that of our more conservative magnitude cut.
§.§.§ Globular Cluster Colors
The color distribution of the 30 GCCs in UGC 9050-Dw1 is relatively blue and monochromatic. We present the (V-I) color distribution of our GCCs in <ref> with solid teal bars, in addition to other UDGs in the literature that have also been found to have monochromatic GC populations. Across the full GCC range we find a mean (V-I) = 0.89 mag, median (V-I) = 0.86 mag, and standard deviation 0.19 mag. While we find a somewhat red secondary component of the GCC color distribution, about half of the contaminants found in the WFC3 parallel field overlap with most of these red sources (especially given small bin statistics). If we only consider sources between 0.6 ≤ (V-I) ≤ 1.0 we find a mean (V-I) = 0.84 and a standard deviation of only 0.08. Overall the GCC color distribution of UGC 9050-Dw1 is substantially different from the general dwarf galaxy population compiled from <cit.> and <cit.> (⟨(V-I)⟩ = 0.97, σ_(V-I)=0.26); the GCC colors of UGC 9050-Dw1 are much less bimodal and have a smaller spread.
So-called monochromatic GC populations have been observed in several other UDGs (NGC 1052-DF2 and NGC 1052-DF4 from ; NGC 5846-UDG1 from ; DGSAT 1 from ). One proposed explanation suggests these GC populations formed in a single starburst event, yielding GCs of the same color and metallicity <cit.>. While our GC colors are strikingly uniform, they are not as uniform as the observed GC colors in DF2 and DF4 (intrinsic observed σ_(V-I)=0.015 mag for a combined GC sample), which are thought to have formed in a collision <cit.>. In contrast, NGC 5846-UDG1 and DGSAT 1 are thought to have uniform populations due to rapid, possibly clumpy star formation at an early epoch <cit.>, but similar star formation conditions may arise from a collision.
To further inspect our GC candidate ages and metallicities we employ the Parsec v.1.2S code <cit.>. We produce results for a single burst stellar population with a Chabrier log-normal IMF. The results of this analysis are plotted in <ref>. Here we plot lines of constant age in color-metallicity space.
If we use the mass-metallicity relation from <cit.> and assume the ratio of oxygen to other metals is the same in the UDG as in the Sun (using the solar oxygen abundance from ), we find an upper limit of [M/H] ≈ -0.6 for the UDG, which implies a minimum age of ∼1.5 Gyr for the GCCs. If the GCs formed at earlier times, which is likely, their metallicities will be sub-solar. Even if this galaxy does not follow the mass-metallicity relation, the GCs must still be older than ∼1 Gyr (the minimum age that falls within our GC color selection range). While spectroscopic follow-up is needed for additional constraints, this is difficult or impossible for most of the GCCs given their faintness (see <ref>).
§.§.§ The Globular Cluster Luminosity Function
The GCLF has a near universal peak, with some variance, dependent on the general galaxy type. In <ref> we plot the GCLF of UGC 9050-Dw1 with solid teal bars. Here we compare to two GCLFs: 1) the <cit.> GCLF determined from dwarf ellipticals in Virgo found to peak at μ_I,Vega = -8.12 mag with σ_I = 1.42 for which the probability distribution function (PDF) is plotted in red, and 2) the <cit.> GCLF determined from M87, a giant elliptical, found to peak at μ_I,Vega = -8.56 mag with σ_I = 1.37 for which the PDF is plotted in gray.
The observed GCLF for UGC 9050-Dw1 follows the expected GCLF for dwarfs reasonably well (marginally better than the standard GCLF); the V-band GCLF yields similar results. This is why we select our GCCs over the bright half of the GCLF in magnitude space. We repeat the same analysis down to the 90% completeness limit (m_F814W = 26.8 mag; M_I=-5.55) and again find general agreement with a standard dwarf GCLF, but with a slight overabundance at the faint end, likely due to contamination. The agreement with the GCLF also indicates that the distance we have assumed for UGC 9050-Dw1 is correct.
§.§.§ Specific Frequency
Specific frequency offers a measurement of how rich a GC system is relative to its host galaxy luminosity. We follow the definition presented in <cit.> of S_N = N_GC10^0.4(M_V+15) in combination with our ground-based CFHT measurements and final GC count of UGC 9050-Dw1 (<ref>).
This gives a specific frequency of S_N = 122_-24^+30.
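As a worked illustration of this definition, the short sketch below reproduces an S_N of order 120 for N_GC = 52; note that the V-band magnitude used here (M_V ≈ -14.1) is an assumed, illustrative value rather than a measurement quoted in this section.

```python
def specific_frequency(n_gc, m_v):
    """S_N = N_GC * 10**(0.4 * (M_V + 15))."""
    return n_gc * 10.0**(0.4 * (m_v + 15.0))

# M_V = -14.1 is an assumed host magnitude for illustration only.
print(specific_frequency(52, -14.1))  # ~119, i.e. of order the quoted S_N
```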
<ref> includes diagonal lines depicting constant specific frequency of 1, 10, and 100. Other well-studied UDGs with high specific frequencies include DF17 (28±5), VCC 1287 (80±29), and NGC 5846-UDG1 (78 derived from , 58±14 in ). Even amongst these unusual UDGs, UGC 9050-Dw1 has an extraordinary specific frequency. The high specific frequency objects in clusters (S_N>100) at present do not have evidence of recent star formation. Other star forming dwarfs, low surface brightness galaxies, and even spiral galaxies typically have much lower specific frequencies. <cit.> derives a specific frequency for the Milky Way S_N= 58.
§.§.§ Stellar Fraction in Globular Clusters
Here we determine the fraction of light within the entire UDG that is contained in globular clusters. First, the total flux within the UDG globular clusters is determined by integrating the background subtracted histogram (in flux space) in V-band. We account for the portion of the V-band GCLF that we do not sample by multiplying the total flux by 10/7, as we sample approximately 70% of the luminosity function in V. Uncertainties are determined with the same approach as that presented in <cit.>. We perturb the measured histogram values by their errors 10,000 times and then derive the confidence interval of the resulting magnitude distribution at 2σ. We find that M_V,GCs=-12.3±0.5 mag. This implies a flux ratio of 0.21 ± 0.1 or that 21%± 10% of the stars measured in V-band reside in globular clusters. For comparison most galaxies fall within the range of 0.1-1%, with Coma UDGs showing higher fractions (in the 10% range), and NGC 5846-UDG1 containing the highest fraction at 12.9%±0.6%. Considering the substantial errors on UGC 9050-Dw1 it appears to be consistent with other UDGs on the high end.
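The flux-ratio arithmetic can be illustrated with a simplified Monte Carlo that propagates only the quoted GC-system magnitude error; the total UDG magnitude adopted below (M_V ≈ -14.0) is an assumed, illustrative value, and this sketch does not reproduce the full histogram-perturbation procedure described above.

```python
import numpy as np

rng = np.random.default_rng(0)

m_gc = rng.normal(-12.3, 0.5, 100_000)  # V-band magnitude of the GC system (quoted above)
m_udg = -14.0                           # assumed total V-band magnitude of the UDG (illustrative)

frac = 10.0**(-0.4 * (m_gc - m_udg))    # flux ratio f_GC / f_UDG
lo, med, hi = np.percentile(frac, [16, 50, 84])
print(f"L_GC/L_UDG ~ {med:.2f} (+{hi - med:.2f}/-{med - lo:.2f})")
```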
§.§.§ The Dark Matter Mass
The abundance of GCs and the mass in GCs can both be used as estimators of the dark matter halo mass of a system. The mass of a galaxy has been found to scale nearly linearly with GC abundance and total GC mass across a very large mass range <cit.>, down to M_total∼10^8.75 M_⊙ <cit.>. In terms of total system mass (stellar mass + halo mass), <cit.> finds 1 GC per 2.9 ± 0.3 × 10^9 M_⊙. With our GC abundance this relation yields M_total = 1.5±0.3×10^11 M_⊙ for UGC 9050-Dw1. Because this mass estimate is 3 orders of magnitude larger than our stellar mass estimate we make the approximation that M_total≊ M_halo. <cit.> finds an average GC mass of 1×10^5 M_⊙ for the UDGs VCC 1287 and DF44, a value similar to what is expected from other dwarf measurements. Using this average mass and the relation M_GC/M_halo = 2.9×10^-5 from <cit.> we find a halo mass of M_halo = 1.8±0.3×10^11 M_⊙ for UGC 9050-Dw1. Both approaches give consistent estimates of M_halo.
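Both estimates follow from simple scalings of the GC abundance; a minimal sketch using the relations quoted above is:

```python
n_gc = 52            # adopted GC abundance
m_per_gc = 2.9e9     # Msun of total mass per GC (relation quoted above)
gc_mass = 1.0e5      # Msun, assumed mean GC mass for UDGs
eta = 2.9e-5         # M_GC / M_halo

m_halo_counts = n_gc * m_per_gc        # ~1.5e11 Msun from the GC abundance
m_halo_gcmass = n_gc * gc_mass / eta   # ~1.8e11 Msun from the total GC mass
print(f"{m_halo_counts:.2e}  {m_halo_gcmass:.2e}")
```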
Using the loosely constrained stellar-to-halo mass relation at low redshift from <cit.>, our inferred halo mass of M_halo = 1.5×10^11 M_⊙ predicts a stellar mass of M_*≊ 3×10^10 M_⊙, an almost three order of magnitude difference from what we measure (∼ 3×10^7 M_⊙). Utilizing the 1.5±0.3×10^11 M_⊙ mass estimate we find M/L_V = 457 M_⊙/L_⊙. These results imply an anomalously high dark matter halo mass for UGC 9050-Dw1.
For comparison, <cit.> and <cit.> find a mass for the LMC of M_LMC = 2.5^+0.9_-0.8× 10^11 M_⊙, with a total of ∼40 GCs <cit.>. In general, bright dwarfs are often defined to have stellar masses of up to 10^9 M_⊙, which corresponds to halo masses in the range of ∼ a few ×10^10 to 10^11 M_⊙ <cit.>, implying that UGC 9050-Dw1 may have an unusually high halo mass for its stellar mass, but within expectation for massive dwarfs. This dark matter halo mass estimate is further complicated by the disturbed nature of UGC 9050-Dw1, meaning it may not be in dynamical equilibrium and the GC abundance-halo mass relation may not apply <cit.>. Other well-studied UDGs with presumed overly massive dark matter halos given their stellar mass include DF17 (∼ 9×10^10 M_⊙), VCC 1287 (∼ 8×10^10 M_⊙), and NGC 5846-UDG1 (1.6×10^11 M_⊙ using the relation and GC counts from ; 9±2×10^10 M_⊙ in ).
We do not present a dynamical mass estimate of UGC 9050-Dw1 for several reasons. First, the orientation of the disk (assuming there is one) is unknown. Without knowing the exact orientation employing marginally different inclination corrections can result in dramatically different predictions. Additionally, in the Apertif velocity maps there is no sign of ordered rotation in order to produce a dynamical mass estimate. This is another indicator that the system is not in dynamical equilibrium and we may not be able to obtain a dynamical mass estimate. Future modeling of the forthcoming Apertif data may be able to provide such a mass estimate. However, the line width is very narrow (σ∼ 10 km/s), so kinematic modeling may not be possible even with improved data.
§ THE FORMATION OF UGC 9050-DW1
In terms of formation mechanisms for the UDG UGC 9050-Dw1, there are several key observed features that must be considered:
* The optical morphology of UGC 9050-Dw1 coupled with the morphology observed with Apertif provide strong evidence of a past or ongoing interaction that is likely connected to the current UDG appearance. This morphology implies we may have caught UGC 9050-Dw1 in the act of becoming a UDG.
* UGC 9050-Dw1 is an HI-bearing UDG with bright NUV flux indicative of recent central star formation. Any interactions have not quenched the system, or must have done so only very recently (< 100 Myr).
* The GC candidate colors of UGC 9050-Dw1 are strikingly uniform and blue.
Therefore there was likely a single epoch in which a majority of the GCs formed - either the progenitor was a massive galaxy where the GCs formed prior to transforming into a UDG or, more likely, the process that turned the progenitor into a UDG also formed a majority of the GCs.
* The GC counts of UGC 9050-Dw1 are exceptionally high, with one of the highest observed specific frequencies and fraction of stellar light in GCs for a UDG. This implies an epoch of clumpy and high star formation density in order to form such an abundance of GCs.
* UGC 9050-Dw1 and its assumed host UGC 9050 are strikingly similar in both estimated mass and color, with little evidence of direct interaction.
In the following subsections we comment on several of the proposed UDG formation mechanisms and how they mesh with the key criteria we note above. We favor the dwarf merger formation mechanism (<ref>), followed by the disrupted galaxy formation mechanism (<ref>) over other common UDG formation mechanisms.
§.§ Less Likely Formation Origins
Tidal Dwarf Galaxy These objects (TDGs) are thought to form in the debris of interacting spiral galaxies, either from instabilities in tidal tails or stellar material ejected into tidal tails <cit.>. As a result, TDGs are expected to be almost entirely lacking in dark matter <cit.> as well as GCs.
A TDG with a GC population as rich as that measured for UGC 9050-Dw1 is unheard of, as is a young TDG lacking strong tidal tails (old TDGs may have no gas). These features strongly disfavor a TDG origin for UGC 9050-Dw1.
Tidally `Puffed' Dwarf
Tidal stripping and heating of dwarf-mass DM halos by a massive host results in an expansion of the half-light radii of the stellar component, which can result in a “puffed up” dwarf; i.e. a UDG <cit.>. Yet the GC abundance (and inferred halo mass) of UGC 9050-Dw1 is not reflective of the average dwarf GC population, in sharp contrast to the two group UDGs with associated tidal features studied in <cit.> (containing 2 and 5 GCs). The models of <cit.> are able to produce GC rich UDGs as a product of tidal heating, but it requires such UDGs to live in massive galaxy clusters that are tidally heated and thus “puffed up” during cluster in-fall, post intense star formation at high redshift.
In addition to the GC counts, the similarities between UDG UGC 9050-Dw1 and its presumed companion UGC 9050 (in particular rather similar masses) and relatively isolated environment indicate that the triggering mechanism for UGC 9050-Dw1 could not have been typical tidal stripping/heating.
Failed Galaxy <cit.> posit that some UDGs may be galaxies in which the old stellar halo was able to form, but then subsequent rapid gas removal resulted in the lack of bulge or disk formation, while a massive dark matter halo remains <cit.>. These UDGs are called “failed galaxies”, such that they were on the path to becoming galaxies within the mass range of the Magellanic Clouds or M33, but then this formation was interrupted. During an intense early epoch of star formation for which the gas surface density is high, high GC mass fractions arise, with the remainder of the stellar population old and metal poor <cit.>. It has also been argued that for some UDGs, like NGC 5846-UDG1 <cit.> and DGSAT 1 <cit.>, a single early star formation burst may have occurred before the galaxy “failed”, resulting in monochromatic GC populations. However, UDGs of this origin are generally thought to presently be devoid of cold gas as a result of
ram pressure stripping and/or strangulation after cluster infall <cit.>, or intense supernova feedback from the same star-forming event responsible for the mono-GC population <cit.>. While at first glance UGC 9050-Dw1 meshes with this formation mechanism, the clear tidal features and substantial HI mass, in conjunction with NUV luminosity indicative of recent star formation, disfavor a failed galaxy origin.
§.§ A Disrupted Galaxy
In this subsection we propose a disruption event as a possible formation mechanism for UGC 9050-Dw1. Fundamentally this formation scenario is akin to a tidally “puffed up” dwarf, like those observed in <cit.> or <cit.>. However, in this instance the progenitor of UGC 9050-Dw1 is a low mass spiral galaxy (or possibly a massive dwarf). The tail then is a result of a tidal interaction between the progenitor of UGC 9050-Dw1 and another galaxy. This means that the galaxy with which UGC 9050-Dw1 interacted may have had a similar mass (like the close, equal mass passage explored in ) or been more massive. And, in this instance, the galaxies did not collide.
There are several pieces of evidence that support the proposition that the progenitor of UGC 9050-Dw1 falls into the category of a “normal” galaxy, aside from the distribution of stars and present-day morphology (such as the clumpy star formation). First, from the HI perspective UGC 9050-Dw1 may not be as unusual as it seems. The HI mass to stellar mass ratio is ≈ 8.7, which is comparable to gas-bearing dwarfs in the field (typical ratios are 1 to 10, see e.g., fig. 2 of ). In contrast, HI-bearing UDGs have been found to be more HI rich, with somewhat higher ratios (; see fig. 4).
The GC abundance implies a halo mass of ∼10^11 M_⊙ (<ref>). Extrapolating the relations between HI mass and halo mass from <cit.> (figs. 2 and 3), a central galaxy with log M_HI/M_⊙ < 9 is expected to have a halo mass of log M_halo/M_⊙ ≈ 11 (a loose estimate, given that we are extrapolating below the mass range covered in that study), which matches our measurements for UGC 9050-Dw1 and implies that the dark matter halo may not be overly massive (a point further supported by the GC abundance - halo mass relation breaking down for some UDGs, see e.g., ).
However, it is also important to note that other UDGs in the field with similar masses to our UDG have almost no GCs <cit.>, indicative of typical dwarf mass progenitors.
Galaxy disruption events may trigger GC formation which could explain the GC abundance and relatively monochromatic colors of the GCs resulting from the single burst, but it is unclear exactly how efficient this mechanism is at forming a high volume of GCs in these relatively lower mass systems.
However, there are a few important caveats to consider for a massive dwarf/lower mass spiral progenitor. The halo mass estimate of UGC 9050-Dw1 is comparable to that of the LMC. In order to disrupt the older stellar population of the LMC (M_*=2.7×10^9 M_⊙; ) enough for it to become a UDG (and assuming a similar initial stellar mass for UGC 9050-Dw1), almost 90% of the stars would have to be blown out and become nearly undetectable while the HI remains roughly intact, which is not a readily explainable interaction. Work by <cit.> finds that backsplash UDGs are entirely stripped of their gas via ram pressure stripping, which further necessitates an approximately equal mass passage.
Second, it is not completely clear what UGC 9050-Dw1 would have interacted with in order to have been disrupted. The “tail” feature observed in UGC 9050-Dw1 is not unlike those modeled in <cit.> (fig. 18), resulting from an approximately equal mass passage. However, while UGC 9050 has a comparable mass, there is only a hint of possible disturbance in HI as seen in <ref> and none in the optical. Typically we would expect to see a similar magnitude of disruption in UGC 9050, although it is possible that the dynamics may have been such that UGC 9050 was minimally impacted. It is plausible that UGC 9050-Dw1 instead interacted with the nearby (∼ 300 kpc) NGC 5841 group rather than UGC 9050. Such an interaction would have happened ∼ a few Gyr ago to explain the current location of UGC 9050-Dw1, but this passage would again be akin to the mass differential between the LMC and the Milky Way. It is also possible that UGC 9050-Dw1 interacted with a galaxy that is no longer detectable due to strong disruption, somewhat akin to the dwarf collision discussed below.
Last, it is not clear how a monochromatic GC population would arise from a disruption event. Disruption events can lead to star formation bursts, but typically then we would expect a clear older GC component remaining from the progenitor. For example, the LMC has young and blue clusters like those observed in UGC 9050-Dw1 but also contains many old and red GCs. If the GCs formed prior to disruption, their uniform color is still difficult to explain, as dwarf and spiral galaxies tend to have a larger spread in color (see <ref> and discussion), but may be plausible.
§.§ Dwarf Collision
The second possible mechanism for the formation of UGC 9050-Dw1 is that of a dwarf collision/major merger, which we argue is the most plausible formation mechanism. In contrast to the galaxy disruption formation scenario where two objects had a close encounter, here two galaxies collided, and UGC 9050-Dw1 is the resulting amalgamation of the two.
The general morphology of the tail seen in UGC 9050-Dw1 is not unlike that observed in dwarf mergers <cit.>.
In UGC 9050-Dw1 the tail is large enough that we suspect such a merger would have been of approximately equal mass with a relatively small impact parameter, i.e., a merger not so cataclysmic that the gas was destroyed, such that much of the gas remained intact and a tail was produced, with the remnant consisting of both parent objects once the perturber falls in.
<cit.> studied the formation of UDGs in low density environments in hydrodynamical simulations (ROMULUS25), finding that a majority of UDGs were produced as a result of major mergers at early times in the Universe. These mergers increase the effective radius and angular momentum of the progenitor, ultimately decreasing the central surface brightness and re-distributing star formation to the outskirts. Such an origin for UGC 9050-Dw1 implies that this may be a more recent merger (≲ 10 Gyr, depending on the age of the GCs), before the formation of the steep color gradient between a red center and blue outskirts that has been observed in simulations. That said, while <cit.> found that massive UDGs tend to have their star formation de-centralized, this trend was not as clear for lower mass UDGs. The stellar mass estimate of UGC 9050-Dw1 does place it in the low mass regime of the simulated ROMULUS25 UDGs. It is also possible that the UDG is not face-on, and that the star-forming clump is actually in the outskirts and only projected onto the center. The Apertif DR1 data in <ref> suggest the clump may be offset from the gas, but the apparent gas distribution may be significantly impacted by noise and the beam side-lobes, making it difficult to constrain the viewing angle.
For a dwarf major merger scenario to have occurred, we must explain how a high abundance of monochromatic GCs could form. Upon investigating a collision as a UDG origin for DF2 and DF4, which <cit.> invoked to explain the highly monochromatic GC populations, <cit.> find that as many as 30-60 GCs can be produced in a single burst. Studies of other dwarf mergers also find subsequent high star formation rates <cit.>. Therefore it is plausible that, in the immediate aftermath of the merger, there was an episode of high density star formation that spurred the GC formation. Rapid, extensive GC formation requires this to be a “wet” merger, in which case the HI we see is the final remnant of gas from the pair.
The dwarf merger formation theory is further supported by the high luminosity fraction in GCs we find for UGC 9050-Dw1, which is comparable to UDGs in the Coma cluster and NGC 5846-UDG1. <cit.> argues that NGC 5846-UDG1 results from extreme conditions causing clumpy star formation that yields a high fraction of stars forming as GCs. Collisions have been found to cultivate unusual/clumpy star formation conditions, even in the dwarf regime <cit.>, making it very plausible that the high GC abundance observed in UGC 9050-Dw1 is the result of a galaxy merger. Such a short-lived, intense episode of star formation also neatly explains the monochromatic GC colors we observe, in which the GCs all formed at approximately the same time. We propose that, while these events are rare, dwarf mergers may serve as an avenue to form GC-rich UDGs with relatively monochromatic GC populations.
Naively, the lack of an HI tail tracing the stellar tail may argue against a dwarf collision origin, as the standard theory is that HI tails are much longer lived (of order a Gyr) than stellar tails (of order 100 Myr; e.g., see the review by ). However, at low surface brightness this is difficult to observe. A merger would pull both gas and stars into the same tail, but low column-density HI would eventually be photoionised away by the background UV radiation field. Our HI data are also rather shallow, so any HI in the tail may be below our detection limit.
§ CONCLUSION
In this paper we investigate the disturbed, HI-bearing UDG UGC 9050-Dw1 and its GC system using HST/ACS F555W and F814W imaging and the VLA D-array. UGC 9050-Dw1 was thought to be a companion of the low surface brightness spiral UGC 9050, which lies ∼70 kpc to the west of UGC 9050-Dw1.
UGC 9050-Dw1 contains a central, elongated, UV-bright, blue core with an extended tail feature and a redder diffuse component. We find a total GC abundance of 52^+4_-6, with a specific frequency of S_N = 122_-24^+30 and a GC luminosity fraction of 21%±10%, marking an extreme in UDG parameter space. The GC colors are surprisingly uniform and on the blue end of the GC color distribution, indicative of a possible epoch of intense star formation in which a majority of the GCs formed. We estimate the GC population to be a minimum of ∼1.5 Gyr old. The GC population of UGC 9050-Dw1 has a centrally peaked radial distribution. Likewise, the GCLF of UGC 9050-Dw1 is consistent with that expected for dwarf galaxies at the presumed distance of UGC 9050-Dw1. The VLA data indicate that the HI mass of the system (10^8.44 M_⊙) is similar to that of the nearby low surface brightness galaxy UGC 9050 (10^8.94 M_⊙). Because we are at the resolution limit of the VLA D-array configuration we cannot reliably constrain the dynamical mass of the system. Instead we estimate the dark matter halo mass from GC counts, and infer a massive dark matter halo (M_halo=1.5±0.3×10^11 M_⊙) for its stellar mass (M_*∼10^8 M_⊙), although this relation may not apply well to UDGs and we caution that this halo mass estimate is possibly incorrect.
UGC 9050-Dw1 is truly an enigmatic object, with a number of unique properties that in combination appear peculiar for a UDG and at first glance thwart easy explanation. UGC 9050-Dw1 does not neatly fit into any of the standard proposed UDG formation scenarios. Most notably, in contrast to the two group UDGs with disturbed features studied in <cit.>, UGC 9050-Dw1 does not mesh as well with the tidally heated dwarf progenitor channel of UDG formation, given its GC abundance and gas content. Instead we find UGC 9050-Dw1 to be more consistent with a disrupted galaxy or dwarf major merger as its progenitor (we favor the latter). Clumpy, high star formation density induced by a dwarf major merger more easily explains the observed characteristics of UGC 9050-Dw1, especially if the GC abundance - halo mass relation does not hold for this system and the dark matter halo is not overly massive.
UGC 9050-Dw1 is an important object, potentially providing us insights into how UDGs form. Specifically, we propose dwarf major mergers may be another avenue for UDG formation, especially those with high GC abundance and monochromatic GC populations, without having to invoke complex or unknown exotic processes. That said, dwarf merger events are rather rare, so this mechanism alone cannot explain all GC abundant UDGs. UGC 9050-Dw1 may be one of the first observational analogues to the dwarf major merger UDGs produced in the ROMULUS25 simulation <cit.>. Constraints on the dynamics and metallicity of the system may provide further insight into just how this UDG came to be.
The authors would like to thank both Anil Seth and Anna Wright for useful discussions on the nature of this system. We also thank Steven Janssens for providing details on the properties of DGSAT 1.
This work is based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. These observations are associated with program # 16890
The work used data observed with the Karl G. Jansky Very Large Array. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
This work is based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/IRFU, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at Terapix available at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.
AK acknowledges financial support from the grant CEX2021-001131-S funded by MCIN/AEI/ 10.13039/501100011033 and from the grant POSTDOC_21_00845 funded by the Economic Transformation, Industry, Knowledge and Universities Council of the Regional Government of Andalusia. KS acknowledges support for the Natural Sciences and Engineering Research Council of Canada (NSERC).
HST(ACS), VLA, CFHT
Astropy <cit.>,
CASA <cit.>,
CGAT-core <cit.>,
Dolphot <cit.>,
Galfit <cit.>,
Photutils <cit.>,
Reproject <cit.>,
Source Extractor <cit.>,
§ SUPPLEMENTARY TABLES AND FIGURES
Here we provide additional details on the presumed host galaxy, UGC 9050, and on the individual GCCs of UGC 9050-Dw1. For UGC 9050, <ref> summarizes the measured properties and <ref> presents the VLA D-array results.
Last, <ref> presents detailed information on each of the 30 GCCs identified in this study.
aasjournal
|
http://arxiv.org/abs/2306.10234v3
|
20230617022556
|
Federated Few-shot Learning
|
[
"Song Wang",
"Xingbo Fu",
"Kaize Ding",
"Chen Chen",
"Huiyuan Chen",
"Jundong Li"
] |
cs.LG
|
[
"cs.LG",
"cs.DC"
] |
University of Virginia
[email protected]
University of Virginia
[email protected]
Arizona State University
[email protected]
University of Virginia
[email protected]
Case Western Reserve University
[email protected]
University of Virginia
[email protected]
Federated Learning (FL) enables multiple clients to collaboratively learn a machine learning model without exchanging their own local data. In this way, the server can exploit the computational power of all clients and train the model on a larger set of data samples among all clients.
Although such a mechanism is proven to be effective in various fields,
existing works generally assume that each client preserves sufficient data for training. In practice, however, certain clients may only contain a limited number of samples (i.e., few-shot samples). For example, the available photo data taken by a specific user with a new mobile device is relatively rare. In this scenario, existing FL efforts typically encounter a significant performance drop on these clients.
Therefore, it is urgent to develop a few-shot model that can generalize to clients with limited data under the FL scenario. In this paper, we refer to this novel problem as federated few-shot learning.
Nevertheless, the problem remains challenging
due to two major reasons: the global data variance among clients (i.e., the difference in data distributions among clients) and the local data insufficiency in each client (i.e., the lack of adequate local data for training).
To overcome these two challenges,
we propose a novel federated few-shot learning framework with two separately updated models and dedicated training strategies to reduce the adverse impact of global data variance and local data insufficiency.
Extensive experiments on four prevalent datasets that cover news articles and images validate the effectiveness of our framework compared with the state-of-the-art baselines. Our code is provided[https://github.com/SongW-SW/F2Lhttps://github.com/SongW-SW/F2L].
Federated Few-shot Learning
Jundong Li
July 31, 2023
===========================
§ INTRODUCTION
The volume of valuable data is growing massively with the rapid development of mobile devices <cit.>.
Recently, researchers have developed various machine learning methods <cit.> to analyze and extract useful information from such large-scale real-world data.
Among these methods, Federated Learning (FL) is an effective solution, which aims to collaboratively optimize a centralized model over data distributed across a large number of clients <cit.>. In particular, FL trains a global model on a server by aggregating the local models learned on each client <cit.>.
Moreover, by avoiding the direct exchange of private data, FL can provide effective protection of local data privacy for clients <cit.>.
As an example, in Google Photo Categorization <cit.>, the server aims to learn an image classification model from photos distributed among a large number of clients, i.e., mobile devices.
In this case, FL can effectively conduct learning tasks without revealing private photos to the server.
In fact, new learning tasks (e.g., novel photo classes) are constantly emerging over time <cit.>. In consequence, FL can easily encounter a situation where the server needs to solve a new task with limited available data as the reference.
In the previous example of Google Photo Categorization, as illustrated in Fig. <ref>, the server may inevitably need to deal with novel photo classes such as the latest electronic products, where only limited annotations are available.
Nevertheless, existing FL works generally assume sufficient labeled samples for model training, which inevitably leads to unsatisfying classification performance for new tasks with limited labeled samples <cit.>.
Therefore, to improve the practicality of FL in realistic scenarios, it is important to solve this problem by learning an FL model that can achieve satisfactory performance on
new tasks with limited samples.
In this paper, we refer to this novel problem setting as federated few-shot learning.
Recently, many few-shot learning frameworks <cit.> have been proposed to deal with new tasks with limited samples. Typically, the main idea is to learn meta-knowledge from base classes with abundant samples (e.g., photo classes such as portraits). Then such meta-knowledge is generalized to novel classes with limited samples (e.g., photo classes such as new electronic products), where novel classes are typically disjoint from base classes.
However, as illustrated in Fig. <ref>, it remains challenging to conduct few-shot learning under the federated setting due to the following reasons.
First,
due to the global data variance
(i.e., the differences in data distributions across clients), the aggregation of local models on the server side will disrupt the learning of meta-knowledge in each client <cit.>. Generally, the meta-knowledge is locally learned from different classes in each client and thus is distinct among clients, especially under the non-IID scenario, where the data variance can be even larger among clients compared with the IID scenario. Since the server will aggregate the local models from different clients and then send back the aggregated model, the learning of meta-knowledge in each client will be potentially disrupted.
Second,
due to the local data insufficiency in clients, it is non-trivial to learn meta-knowledge from each client. In FL, each client only preserves a relatively small portion of the total data <cit.>. However, meta-knowledge is generally learned from
data in a variety of classes <cit.>. As a result, it is difficult to learn meta-knowledge from data with less variety, especially in the non-IID scenario, where each client only has a limited amount of classes.
To effectively solve the aforementioned challenges, we propose a novel Federated Few-shot Learning framework, named F^2L.
First, we propose a decoupled meta-learning framework to mitigate the disruption from the aggregated model on the server. Specifically, the proposed framework retains a unique client-model for each client to learn meta-knowledge and a shared server-model to learn client-invariant knowledge (e.g., the representations of samples), as illustrated in Fig. <ref>.
Specifically, the client-model in each client is updated locally and will not be shared across clients, while the server-model can be updated across clients and sent to the server for aggregation. Such a design decouples the learning of meta-knowledge (via client-model) from learning client-invariant knowledge (via server-model). In this way, we can mitigate the disruption from the aggregated model on the server caused by global data variance among clients.
Second, to compensate for local data insufficiency in each client, we propose to leverage global knowledge learned from all clients with two dedicated update strategies.
In particular, we first transfer the learned meta-knowledge in client-model to server-model by maximizing the mutual information between their output (i.e., local-to-global knowledge transfer). Then we propose a partial knowledge distillation strategy for each client to selectively extract useful knowledge from server-model (i.e., global-to-local knowledge distillation). In this manner, each client can leverage the beneficial knowledge in other clients to learn meta-knowledge from more data.
In summary, our contributions are as follows:
* Problem. We investigate the challenges
of learning meta-knowledge
in the novel problem of federated few-shot learning from the perspectives of global data variance and local data insufficiency. We also discuss the necessity of tackling these challenges.
* Method. We develop a novel federated few-shot learning framework F^2L with three essential strategies: (1) a decoupled meta-learning framework to mitigate disruption from the aggregated model on the server; (2) mutual information maximization for local-to-global knowledge transfer; (3) a novel partial knowledge distillation strategy for global-to-local knowledge distillation.
* Experiments. We conduct experiments on four few-shot classification datasets covering both news articles and images under the federated scenario. The results further demonstrate the superiority of our proposed framework.
§ PRELIMINARIES
§.§ Problem Definition
In FL, given a set of I clients, i.e., {ℂ^(i)}_i=1^I, where I is the number of clients, each ℂ^(i) owns a local dataset 𝒟^(i). The main objective of FL is to learn a global model over data across all clients (i.e., {𝒟^(i)}_i=1^I) without the direct exchange of data among clients.
Following the conventional FL strategy <cit.>, a server 𝕊 will aggregate locally learned models
from all clients for a global model.
Under the prevalent few-shot learning scenario,
we consider a supervised setting in which the data samples for client ℂ^(i) are from its local dataset: (x,y)∈𝒟^(i), where x is a data sample, and y is the corresponding label. We first denote the entire set of classes on all clients as 𝒞. Depending on the number of labeled samples in each class, 𝒞 can be divided into two categories: base classes 𝒞_b and novel classes 𝒞_n, where 𝒞=𝒞_b ∪𝒞_n and 𝒞_b ∩𝒞_n=∅. In general, the number of labeled samples in 𝒞_b is sufficient, while it is generally small in 𝒞_n <cit.>. Correspondingly, each local dataset can be divided into a base dataset 𝒟^(i)_b={(x,y)∈𝒟^(i):y∈𝒞_b} and a novel dataset 𝒟^(i)_n={(x,y)∈𝒟^(i):y∈𝒞_n}.
In the few-shot setting, the evaluation of the model generalizability to novel classes 𝒞_n is conducted on 𝒟^(i)_n, which contains only limited labeled samples. The data samples in 𝒟^(i)_b will be used for training. Then we can formulate the studied problem of federated few-shot learning as follows:
Federated Few-shot Learning: Given a set of I clients {ℂ^(i)}_i=1^I and a server 𝕊,
federated few-shot learning aims to learn a global model after aggregating model parameters
locally learned from 𝒟^(i)_b in each client such that the model can accurately predict labels for unlabeled samples (i.e., query set 𝒬) in 𝒟^(i)_n with only a limited number of labeled samples (i.e., support set 𝒮).
More specifically,
if the support set 𝒮 consists of exactly K labeled samples for each of N classes from 𝒟^(i)_n, and the query set 𝒬 is sampled from the same N classes, the problem is defined as Federated N-way K-shot Learning. Essentially, the objective of federated few-shot learning is to learn a globally shared model across clients that can be fast adapted to data samples in 𝒟^(i)_n with only limited labeled samples. Therefore, the crucial part is to effectively learn meta-knowledge from the base datasets {𝒟_b^(i)}_i=1^I in all clients. Such meta-knowledge is generalizable to novel classes unseen during training and thus can be utilized to classify data samples in each 𝒟^(i)_n, which consists of only limited labeled samples.
§.§ Episodic Learning
In practice, we adopt the prevalent episodic learning framework for model training and evaluation, which has proven to be effective in various few-shot learning scenarios <cit.>. Specifically, the model evaluation (i.e., meta-test) is conducted on a certain number of meta-test tasks, where each task contains a small number of labeled samples as references and unlabeled samples for classification. The local model training (i.e., meta-training) process in each client is similarly conducted on a specific number of meta-training tasks, where each task mimics the structure of meta-test tasks.
It is worth mentioning that meta-training tasks are sampled from the local base dataset 𝒟^(i)_b of each client, while meta-test tasks are sampled from the local novel dataset 𝒟^(i)_n. That being said, the class set of samples in meta-training tasks is a subset of 𝒞_b, while the class set of samples in meta-test tasks is a subset of 𝒞_n, which is distinct from 𝒞_b. The main idea of federated few-shot learning is to preserve the consistency between meta-training and meta-test so that the model can learn meta-knowledge from clients for better generalization performance to novel classes 𝒞_n.
Specifically, to construct a meta-training task 𝒯
in client ℂ^(i) from its local base dataset 𝒟^(i)_b,
we first randomly sample N classes from 𝒟^(i)_b. Then we randomly select K samples from each of the N classes (i.e., N-way K-shot) to establish the support set 𝒮. Similarly, the query set 𝒬 consists of Q different samples (distinct from 𝒮) from the same N classes. The components of the meta-training task 𝒯 is formulated as follows:
𝒮 ={(x_1,y_1),(x_2,y_2),…,(x_N× K,y_N× K)},
𝒬 ={(q_1,y'_1),(q_2,y'_2),…,(q_Q,y'_Q)},
𝒯 ={𝒮,𝒬},
where x_i (or q_i) is a data sample in the sampled N classes, and y_i (or y'_i) is the corresponding label.
Note that during meta-test,
each meta-test task shares a similar structure to meta-training tasks, except that the samples are from the local novel dataset 𝒟^(i)_n, which are distinct from 𝒟^(i)_b.
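For concreteness, a minimal sketch of how one such N-way K-shot episode could be sampled from a client's local base dataset is given below; the per-class query split and the function names are our own simplifications, not part of the original formulation.

```python
import random
from collections import defaultdict

def sample_task(dataset, n_way, k_shot, q_per_class):
    """Sample one N-way K-shot episode T = {S, Q} from a client's base set.

    dataset: iterable of (x, y) pairs drawn from D_b^(i).
    """
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)

    classes = random.sample(sorted(by_class), n_way)
    support, query = [], []
    for y in classes:
        chosen = random.sample(by_class[y], k_shot + q_per_class)
        support += [(x, y) for x in chosen[:k_shot]]   # K labeled samples per class
        query += [(x, y) for x in chosen[k_shot:]]     # disjoint query samples
    return support, query
```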
§ METHODOLOGY
In this part, we introduce the overall design of our proposed
framework F^2L in detail.
Specifically, we formulate the federated few-shot learning problem under the prevailing N-way K-shot learning framework.
Our target of conducting federated few-shot learning is to learn meta-knowledge from a set of I clients {ℂ^(i)}_i=1^I with different data distributions, and generalize such meta-knowledge to meta-test tasks.
Nevertheless, it remains difficult to conduct federated few-shot learning due to the challenging issues of global data variance and local data insufficiency as mentioned before. Therefore, as illustrated in Fig <ref>, we propose a decoupled meta-learning framework to mitigate disruption from the servers. We further propose two update strategies to leverage global knowledge. The overview process is presented in Fig <ref>.
§.§ Decoupled Meta-Learning Framework
§.§.§ Federated Learning Framework
We consider a server-model, which consists of an encoder q_ϕ and a classifier f_ϕ that are shared among clients. We denote the overall model parameters in the server-model as ϕ.
Specifically, q_ϕ:ℝ^d→ℝ^k is a function that maps each sample into a low-dimensional vector 𝐡_ϕ∈ℝ^k, where d is the input feature dimension, and k is the dimension of the learned representations. Taking the representation 𝐡_ϕ as input, the classifier f_ϕ: ℝ^k→𝒞_b maps each 𝐡_ϕ to the label space of base classes 𝒞_b and outputs the prediction _ϕ∈ℝ^|𝒞_b|, where each element in _ϕ denotes the classification probability regarding each class in 𝒞_b.
Following the prevalent FedAvg <cit.> strategy for FL, the training of server-model is conducted on all clients through T rounds. In each round t, the server 𝕊 first sends the server-model parameters ϕ to all clients, and each client will conduct a local meta-training process on τ randomly sampled meta-training tasks. Then the server 𝕊 will
perform aggregation on parameters received from clients:
ϕ^t+1=1/I∑_i=1^Iϕ_i^t,
where ϕ_i^t denotes the locally updated server-model parameters by client ℂ^(i) on round t. ϕ^t+1 denotes the aggregated server-model parameters which will be distributed to clients at the beginning of the next round. In this way, the server can learn a shared model for all clients in a federated manner.
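A minimal PyTorch-style sketch of this aggregation step (assuming each client returns its server-model state dict) is:

```python
import torch

def fedavg(client_state_dicts):
    """Average the server-model parameters phi_i^t returned by the I clients."""
    avg = {}
    for name in client_state_dicts[0]:
        avg[name] = torch.stack(
            [sd[name].float() for sd in client_state_dicts]
        ).mean(dim=0)
    return avg
```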
Although the standard strategy of learning a single shared model for all clients achieves decent performance on general FL tasks <cit.>,
it can be suboptimal for federated few-shot learning. Due to the global data variance among clients, the aggregated model on the server will disrupt the learning of meta-knowledge in each client <cit.>.
As a result, the local learning of meta-knowledge in clients
will become more difficult.
In contrast, we propose to further introduce a client-model, which is uniquely learned and preserved by each client, to locally learn meta-knowledge. In other words, its model parameters will not be sent back to the server for aggregation. In this manner, we can separate the learning of client-model (meta-knowledge) and server-model (client-invariant knowledge) so that the learning of meta-knowledge is not disrupted.
Specifically, for client ℂ^(i), the client-model also consists of an encoder q_ψ_i and a classifier f_ψ_i.
We denote the overall model parameters in the client-model for client ℂ^(i) as ψ_i.
In particular, the encoder q_ψ_i takes the representation 𝐡_ϕ learned by the server-model encoder q_ϕ as input, and outputs a hidden representation 𝐡_ψ∈ℝ^k.
Such a design ensures that the client-model encoder q_ψ_i does not need to process the raw sample and thus can be a small model, which is important when clients only preserve limited computational resources <cit.>.
Then the classifier f_ψ_i maps 𝐡_ψ to predictions _ψ∈ℝ^N over the N classes.
§.§.§ Local Meta-training on Clients
Based on the episodic learning strategy, in each round, the training process of each client ℂ^(i) is conducted through τ steps, where each step is a local update based on a meta-training task randomly sampled from the local base dataset 𝒟^(i)_b. In particular, for client ℂ^(i) on round t=1,2,…,T and step s=1,2,…,τ, we denote the sampled meta-task as 𝒯_i^t,s={𝒮_i^t,s,𝒬_i^t,s}.
To learn meta-knowledge from meta-task 𝒯_i^t,s, we adopt the prevalent MAML <cit.> strategy to update client-model in one fine-tuning step and one meta-update step. We first fine-tune the client-model to fast adapt it to support set 𝒮_i^t,s:
ψ_i^t,s=ψ_i^t,s-α_ft∇_ψℒ_ft(𝒮_i^t,s;{ϕ_i^t,s,ψ_i^t,s}),
where ℒ_ft is the fine-tuning loss, which is the cross-entropy loss calculated on the support set 𝒮_i^t,s. Here, α_ft is the learning rate, and ψ_i^t,s (or ϕ_i^t,s) denotes the parameters of client-model (or server-model) on round t and step s.
Then we update the client-model based on the query set 𝒬_i^t,s:
ψ_i^t,s+1=ψ_i^t,s-α_ψ∇_ψℒ_ψ(𝒬_i^t,s;{ϕ_i^t,s,ψ_i^t,s}),
where ℒ_ψ is the loss for client-model on the query set 𝒬_i^t,s, and α_ψ is the meta-learning rate for ψ.
In this regard, we can update client-model with our global-to-local knowledge distillation strategy.
For the update of server-model, we conduct one step of update based on the support set and parameters of client-model:
ϕ_i^t,s+1=ϕ_i^t,s-α_ϕ∇_ϕℒ_ϕ(𝒮_i^t,s;{ϕ_i^t,s, ψ_i^t,s}),
where ℒ_ϕ is the loss for the server-model, and α_ϕ is the meta-learning rate for ϕ.
In this manner, we can update the server-model with our local-to-global knowledge transfer strategy.
After repeating the above updates for τ steps, the final parameters of server-model ϕ_i^t,τ is used as ϕ_i^t in Eq. (<ref>) and sent back to the server for aggregation, while the client-model (with parameters ψ_i^t,τ) will be kept locally.
By doing this, we can decouple the learning of local meta-knowledge in client-model while learning client-invariant knowledge in server-model to avoid disruption from the server.
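A condensed, first-order sketch of this local round is given below; it approximates the MAML-style meta-update by applying the query-loss gradients of the adapted copy directly to the client-model, and the function signatures (e.g., loss_fn, sample_task) are our own placeholders rather than the framework's actual interface.

```python
import copy
import torch

def local_round(server_model, client_model, sample_task, loss_fn,
                tau=5, a_ft=1e-2, a_psi=1e-3, a_phi=1e-3):
    """One local round of the decoupled framework (first-order sketch).

    loss_fn(client, server, batch) is assumed to return a scalar loss; the
    query-loss gradients of the adapted copy are applied directly to the
    client-model instead of differentiating through the inner update.
    """
    opt_phi = torch.optim.SGD(server_model.parameters(), lr=a_phi)

    for _ in range(tau):
        support, query = sample_task()

        # Fine-tuning step: fast-adapt a copy of the client-model on the support set.
        fast = copy.deepcopy(client_model)
        opt_fast = torch.optim.SGD(fast.parameters(), lr=a_ft)
        opt_fast.zero_grad()
        loss_fn(fast, server_model, support).backward()
        opt_fast.step()

        # Meta-update of the client-model on the query set.
        opt_fast.zero_grad()
        loss_fn(fast, server_model, query).backward()
        with torch.no_grad():
            for p, f in zip(client_model.parameters(), fast.parameters()):
                if f.grad is not None:
                    p -= a_psi * f.grad

        # Update of the shared server-model on the support set.
        opt_phi.zero_grad()
        loss_fn(fast, server_model, support).backward()
        opt_phi.step()

    # phi_i^t goes back to the server for aggregation; psi_i stays local.
    return server_model.state_dict()
```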
§.§ Local-to-Global Knowledge Transfer
With our decoupled meta-learning framework, we can mitigate the disruption to the learning of local meta-knowledge in each client. Nevertheless, we still need to transfer the learned meta-knowledge to server-model (i.e., Local-to-global Knowledge Transfer), so that it can be further leveraged by other clients to handle the local data insufficiency issue.
Specifically, to effectively transfer local meta-knowledge, we propose to maximize the mutual information between representations learned from server-model encoder q_ϕ and client-model encoder q_ψ. In this way, the server-model can maximally absorb the information in the learned local meta-knowledge.
§.§.§ Mutual Information Maximization
Given a meta-training task 𝒯={𝒮,𝒬}, as described in Sec. <ref>, the server-model encoder q_ϕ and the client-model encoder q_ψ output 𝐡_ϕ and 𝐡_ψ for each sample, respectively. By stacking the learned representations of the samples in the support set 𝒮 (|𝒮|=D, where D=N × K), we obtain the representations of the support samples learned by the server-model, i.e., 𝐇_ϕ∈ℝ^D× k, and by the client-model, i.e., 𝐇_ψ∈ℝ^D× k.
For simplicity, we omit the annotations of round t, step s, and client i.
The objective of maximizing the mutual information between 𝐇_ϕ and 𝐇_ψ can be formally represented as follows:
max_ϕ I(𝐇_ϕ;𝐇_ψ)= max_ϕ∑_i=1^D∑_j=1^D p(𝐡_ϕ^i,𝐡_ψ^j;ϕ)logp(𝐡_ψ^j|𝐡_ϕ^i;ϕ)/p(𝐡_ψ^j;ϕ),
where
𝐡_ϕ^i (or 𝐡_ψ^i) is the i-th row of 𝐇_ϕ (or 𝐇_ψ).
Since the mutual information I(𝐇_ϕ;𝐇_ψ) is difficult to compute directly and thus infeasible to maximize <cit.>, we re-write it in a more tractable form:
I(𝐇_ϕ;𝐇_ψ)
=∑_i=1^D∑_j=1^D p(𝐡_ϕ^i|𝐡_ψ^j;ϕ)p(𝐡_ψ^j;ϕ)logp(𝐡_ψ^j|𝐡_ϕ^i;ϕ)/p(𝐡_ψ^j;ϕ)
.
Since the support set 𝒮 of size D is randomly sampled, we can assume that the prior probability p(𝐡_ψ^j;ϕ) follows a uniform distribution and set p(𝐡_ψ^j;ϕ)=1/D. According to Bayes' theorem, Eq. (7) becomes:
I(𝐇_ϕ;𝐇_ψ)
=1/D∑_i=1^D∑_j=1^D p(𝐡_ϕ^i|𝐡_ψ^j;ϕ)(log(p(𝐡_ψ^j|𝐡_ϕ^i;ϕ))+log D)
.
We next present alternative strategies to estimate p(_ϕ^i|_ψ^j;ϕ) and p(_ψ^j|_ϕ^i;ϕ) in detail.
§.§.§ Estimation of p(_ϕ^i|_ψ^j;ϕ)
Since the client-model is fine-tuned on the support set 𝒮 of the meta-task 𝒯, we can leverage the classification results of the client-model to estimate p(_ϕ^i|_ψ^j;ϕ). We denote C(j) as the set of sample indices in the support set 𝒮 that shares the same class as the j-th sample (including itself), i.e., C(j)≡{k:y_k=y_j,k=1,2,…,D}. Here, we first set p(_ϕ^i|_ψ^j;ϕ)=0 for all i∉ C(j), since we assume the client-model can only infer representations from the same class.
Intuitively, in the case of i∈ C(j), which means the i-th and the j-th samples share the same class, p(_ϕ^i|_ψ^j;ϕ) can be considered as the confidence of client-model regarding the class of the j-the sample. Therefore, it should reflect the degree to which the sample representation _ψ^j is relevant to its class. Utilizing the client-model classification output (i.e., normalized class probabilities) for the i-th sample ^i_ψ∈ℝ^N, we can compute p(_ϕ^i|_ψ^j;ϕ) as follows:
p(_ϕ^i|_ψ^j;ϕ)={^i_ψ(y_j)/∑_k∈ C(j)^k_ψ(y_j) if i∈ C(j)
0 otherwise.
,
where _ψ^i(y_j)∈ℝ denotes the classification probability for the i-th sample regarding class y_j (y_i=y_j when i∈ C(j)).
§.§.§ Estimation of p(_ψ^j|_ϕ^i;ϕ)
Next we elaborate on how to estimate p(_ψ^j|_ϕ^i;ϕ). Although we can similarly leverage the classification results of the server-model, such a strategy lacks generalizability. This is because the server-model aims at classifying all base classes instead of the
N classes in each meta-training task.
We instead propose to estimate p(𝐡_ψ^j|𝐡_ϕ^i;ϕ) based on the squared Euclidean distance (divided by 2 for simplicity) between the representations learned by the server-model and the client-model. Specifically, we normalize the distances
with a softmax function:
p(𝐡_ψ^j|𝐡_ϕ^i;ϕ)
=exp(-‖𝐡_ϕ^i- 𝐡_ψ^j‖_2^2/2)/∑_k∈ C(i)exp(-‖𝐡_ϕ^i- 𝐡_ψ^k‖_2^2/2).
Then, if we further apply ℓ_2 normalization to both 𝐡_ϕ^i and 𝐡_ψ^j, we obtain ‖𝐡_ϕ^i- 𝐡_ψ^j‖_2^2/2=1-𝐡_ϕ^i·𝐡_ψ^j. Moreover, since the value of ∑_i=1^D∑_j=1^Dp(𝐡_ϕ^i|𝐡_ψ^j;ϕ) equals the constant D, the term ∑_i=1^D∑_j=1^Dp(𝐡_ϕ^i|𝐡_ψ^j;ϕ)·log(D)/D in Eq. (<ref>) is also a constant and thus can be ignored in the objective:
1/D∑_i=1^D∑_j=1^D p(_ϕ^i|_ψ^j;ϕ)log(D)=1/D· D·log(D)=log(D).
Combining the above equations, the optimal server-model parameter ϕ^* for the final optimization objective (i.e., max_ϕ I(_ϕ;_ψ)) can be obtained as follows:
ϕ^*=argmax_ϕ I(𝐇_ϕ;𝐇_ψ)=argmin_ϕℒ_MI.
Here ℒ_MI is defined as follows:
ℒ_MI =-1/D∑_i=1^D∑_j=1^D p(𝐡_ϕ^i|𝐡_ψ^j;ϕ)log(p(𝐡_ψ^j|𝐡_ϕ^i;ϕ))
=1/D∑_j=1^D∑_i∈ C(j)
-^i_ψ(y_j)(_ϕ^i·_ψ^j)/∑_k∈ C(j)^k_ψ(y_j)
+^i_ψ(y_j)/∑_k∈ C(j)^k_ψ(y_j)log(∑_k∈ C(i)exp(_ϕ^i·_ψ^k))
,
where we exchange the order of summation over i and j for clarity.
It is noteworthy that ℒ_MI is different from the InfoNCE loss <cit.>, which considers different augmentations of samples, while ℒ_MI focuses on the classes of samples in 𝒮.
Moreover, ℒ_MI also differs from the supervised contrastive loss <cit.>, which combines various augmentations of samples and label information. In contrast, our loss targets at transferring the meta-knowledge by maximally preserving the mutual information between representations learned by the server-model and the client-model.
A further difference is that the term ^i_ψ(y_j)/∑_k∈ C(j)^k_ψ(y_j) acts as an adjustable weight that measures the importance of a sample to its class. Combining the objective described in Eq. (<ref>) with the standard cross-entropy loss, we obtain the final loss for the server-model:
ℒ_ϕ =(1-λ_MI)ℒ_CE(𝒮)+λ_MIℒ_MI,
where ℒ_CE(𝒮) is defined as follows:
ℒ_CE(𝒮)
=-1/D∑_i=1^D∑_j=1^|𝒞_b|y^i_c_jlog^i_ϕ(c_j),
where _ϕ^i(c_j)∈ℝ denotes the classification probability for the i-th support sample belonging to the j-th class c_j in 𝒞_b, computed by the server-model. Here y^i_c_j=1 if the i-th support sample belongs to c_j, and y^i_c_j=0, otherwise. Moreover, λ_MI∈[0,1] is an adjustable hyper-parameter to control the weight of ℒ_MI.
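A compact PyTorch sketch of ℒ_MI as expanded above is given below; the tensor names are ours, the episode is assumed to be already encoded by both models, and the client-model probabilities are assumed to be softmax outputs over the N episode classes. The sign convention follows the expanded form, i.e., a loss to be minimized.

```python
import torch
import torch.nn.functional as F

def mi_loss(h_phi, h_psi, p_psi, labels):
    """Sketch of L_MI for one episode (D = N*K support samples).

    h_phi : (D, k) server-model representations
    h_psi : (D, k) client-model representations
    p_psi : (D, N) client-model class probabilities over the episode classes
    labels: (D,) episode labels in [0, N)
    """
    h_phi = F.normalize(h_phi, dim=1)                   # l2 normalization, as in the text
    h_psi = F.normalize(h_psi, dim=1)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # same[i, j] <=> i in C(j)

    # Weights p(h_phi^i | h_psi^j): client-model confidence of sample i for its
    # own class, normalized over the samples of that class.
    conf = p_psi[torch.arange(len(labels)), labels]
    w = same * conf.unsqueeze(1)
    w = w / w.sum(dim=0, keepdim=True)

    sim = h_phi @ h_psi.t()                             # sim[i, j] = h_phi^i . h_psi^j
    # log p(h_psi^j | h_phi^i): softmax over k in C(i) of the similarities.
    masked = sim.masked_fill(~same, torch.finfo(sim.dtype).min)
    log_p = sim - torch.logsumexp(masked, dim=1, keepdim=True)

    return -(w * log_p).sum() / len(labels)
```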
§.§ Global-to-Local Knowledge Distillation
With the learned meta-knowledge in each client transferred from the client-model to the server-model,
other clients can leverage such meta-knowledge to deal with the local data insufficiency issue.
However, since each meta-task only contains N classes, directly extracting meta-knowledge in the server-model can inevitably involve meta-knowledge from other classes, which can be harmful to the learning of local meta-knowledge from these N classes in each client.
Instead, we propose a partial knowledge distillation strategy to selectively extract useful knowledge from the server-model, i.e., global-to-local knowledge distillation.
§.§.§ Partial Knowledge Distillation
Specifically, we focus on the output classification probabilities of the server-model regarding the N classes in support set 𝒮 while ignoring other classes. In this regard, we can extract the information that is crucial for learning local meta-knowledge from these N classes and also reduce the irrelevant information from other classes.
Particularly, we consider the same meta-task 𝒯={𝒮,𝒬}.
We denote the output probabilities for the i-th query sample q_i in 𝒬 (with label y_i) of the server-model and the client-model as 𝐩^i_ϕ∈ℝ^|𝒞_b| and 𝐩^i_ψ∈ℝ^N, respectively.
It is noteworthy that the N classes in this meta-task, denoted as 𝒞_m, are sampled from the base classes 𝒞_b (i.e., |𝒞_m|=N and 𝒞_m⊂𝒞_b). Therefore, the output of the server-model (i.e., 𝐩^i_ϕ) will include the probabilities of the classes in 𝒞_m. In particular, we enforce the probabilities of the classes in 𝒞_m from the client-model to be consistent with the probabilities of the same classes from the server-model. As a result, the learning of local meta-knowledge can leverage the information of data in the same N classes from other clients, which is encoded in the server-model.
In this regard, we can handle the local data insufficiency issue by involving information from other clients while reducing the irrelevant information from other classes not in 𝒞_m. In particular, by
utilizing the output of the server-model as the soft target for the client-model, we can achieve an objective
as follows:
ℒ_KD = -1/Q∑_i=1^Q∑_j=1^N 𝐪^i_ϕ(c_j) log 𝐪^i_ψ(c_j),
where c_j is the j-th class in 𝒞_m (i.e., the N classes in meta-task 𝒯). 𝐪^i_ϕ(c_j) and 𝐪^i_ψ(c_j) are the knowledge distillation values for c_j from the server-model and the client-model, respectively. Specifically, the values of 𝐪^i_ϕ(c_j) and 𝐪^i_ψ(c_j) are obtained via softmax normalization:
𝐪^i_ϕ(c_j)=exp(𝐨_ϕ^i(c_j)/T_i)/∑_k=1^N exp(𝐨_ϕ^i(c_k)/T_i),
𝐪^i_ψ(c_j)=exp(𝐨_ψ^i(c_j)/T_i)/∑_k=1^N exp(𝐨_ψ^i(c_k)/T_i),
where 𝐨_ϕ^i(c_j) and 𝐨_ψ^i(c_j) are the logits (i.e., the outputs before softmax normalization) of class c_j from the server-model and the client-model, respectively. T_i is the temperature parameter for the i-th query sample. In this way, we can ensure that ∑_j=1^N 𝐪_ϕ^i(c_j)=∑_j=1^N 𝐪_ψ^i(c_j)=1.
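A minimal sketch of this partial distillation term is given below; it assumes the server-model produces logits over all base classes and that we simply index out the N classes of the current meta-task (variable names are ours, not the authors' code).

import torch
import torch.nn.functional as F

def partial_kd_loss(server_logits_all, client_logits, task_classes, temps):
    # server_logits_all: (Q, |C_b|) query logits from the server-model over all base classes.
    # client_logits: (Q, N) query logits from the client-model over the meta-task classes.
    # task_classes: (N,) indices of the meta-task classes within C_b; temps: (Q,) temperatures T_i.
    server_logits = server_logits_all[:, task_classes]         # keep only the N classes in C_m
    t = temps.unsqueeze(1)
    q_server = F.softmax(server_logits / t, dim=1)              # soft targets q_phi
    log_q_client = F.log_softmax(client_logits / t, dim=1)      # log q_psi
    return -(q_server * log_q_client).sum(dim=1).mean()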
§.§.§ Adaptive Temperature Parameter
Generally, a larger value of T_i means that the client-model focuses more on extracting information from the other classes in 𝒞_m <cit.> (i.e., {c|c∈𝒞_m, c≠ y_i}), referred to as negative classes. Since the classification results of the server-model can be erroneous, we should adaptively adjust the value of T_i for each query sample to reduce the adverse impact of extracting misleading information from the server-model.
However, although negative classes can carry useful information for classification, such information is generally noisier when the output probabilities of these negative classes are smaller. Therefore, to estimate the importance of the negative classes while reducing potential noise, we consider only the maximum output logit among them.
In particular, if the probability of a negative class from the server-model is significantly larger than that of the other classes, we can conjecture that this class is similar to y_i and thus potentially contains crucial information for distinguishing the two.
Specifically, the temperature parameter T_i for the i-th query sample q_i is computed as follows:
T_i = σ( max_c∈𝒞_m, c≠ y_i exp(𝐨_ϕ^i(c)) / exp(𝐨_ϕ^i(y_i)) ),
where σ(·) denotes the Sigmoid function, and y_i is the label of q_i.
In this way, the temperature parameter T_i will increase when the ratio between the largest probability in negative classes and the probability for y_i is larger. As a result, the client-model will focus more on the negative class information.
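For clarity, the adaptive temperature can be computed as in the following sketch (our own rendering of the formula; the logits are assumed to be restricted to the N meta-task classes):

import torch

def adaptive_temperature(server_logits, labels):
    # server_logits: (Q, N) server-model logits over the meta-task classes; labels: (Q,) in [0, N).
    exp_logits = server_logits.exp()
    true_exp = exp_logits.gather(1, labels.unsqueeze(1)).squeeze(1)        # exp(o_i(y_i))
    neg_exp = exp_logits.scatter(1, labels.unsqueeze(1), float('-inf'))    # mask out the true class
    ratio = neg_exp.max(dim=1).values / true_exp                           # largest negative-class ratio
    return torch.sigmoid(ratio)                                            # T_i lies in (0.5, 1)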
Then by further incorporating the cross-entropy loss on the query set 𝒬, we can obtain the final loss for the client-model:
ℒ_ψ=(1-λ_KD)ℒ_CE(𝒬)+λ_KDℒ_KD,
where ℒ_CE(𝒬) is defined as follows:
ℒ_CE(𝒬) = -1/Q∑_i=1^Q∑_j=1^N y^i_c_j log 𝐩^i_ψ(c_j),
where 𝐩^i_ψ(c_j) is the probability of the i-th query sample belonging to class c_j computed by the client-model.
y^i_c_j=1 if the i-th query sample belongs to c_j, and y^i_c_j=0 otherwise. Moreover, λ_KD∈[0,1] is an adjustable hyper-parameter to control the weight of ℒ_KD. In this manner, the client-model can selectively learn useful knowledge from both the local and global perspectives, i.e., global-to-local knowledge distillation.
§.§ Overall Learning Process
With the
proposed losses ℒ_ϕ and ℒ_ψ, on each round, we can conduct meta-training on each client ℂ^(i) by sampling τ meta-training tasks from the local base dataset 𝒟^(i)_b. The detailed process is described in Algorithm <ref>.
After T rounds of meta-training on all the clients, we have obtained a model that accommodates comprehensive meta-knowledge for federated few-shot learning. For the meta-test phase,
since we have aggregated learned local meta-knowledge from each client to the server-model, we can leverage the server-model to generate data representations for classification. Specifically, during evaluation, for each meta-test task 𝒯={𝒮,𝒬} sampled from local novel datasets {𝒟_n^(i)}_i=1^I in all clients,
we follow the same process as meta-training including fine-tuning, except that the meta-update process is omitted. The output of the client-model will be used for classification.
§ EXPERIMENTS
In this part, we conduct extensive experiments to evaluate our framework F^2L on four few-shot classification datasets covering both news articles and images under the federated scenario.
§.§ Datasets
In this section, we introduce four prevalent real-world datasets used in our experiments, covering both news articles and images: 20 Newsgroup <cit.>, Huffpost <cit.>, FC100 <cit.>, and miniImageNet <cit.>. In particular, 20 Newsgroup and Huffpost are online news article datasets, while FC100 and miniImageNet are image datasets. The details are as follows:
* 20 Newsgroup <cit.> is a text dataset that consists of informal discourse from news discussion forums. There are 20 classes for documents in this dataset, where each class belongs to one of six top-level categories. The classes are split as 8/5/7 for training/validation/test, respectively.
* Huffpost <cit.> is a text dataset containing news headlines published on HuffPost[https://www.huffpost.com/] between 2012 and
2018. Generally, the headlines are significantly shorter and less grammatical
than the 20 Newsgroup dataset. Moreover, each headline belongs to one of 41 classes, which are then split as 20/5/16 for training/validation/test, respectively.
* FC100 <cit.> is an image classification dataset based on CIFAR-100 <cit.>. Specifically, this dataset contains 100 image classes, where each class maintains 600 images with a low 32×32 resolution. The classes are split as 60/20/20 for training/validation/test, respectively.
* miniImageNet <cit.> is an image dataset extracted from the full ImageNet dataset <cit.>. This dataset consists of 100 image classes, and each class maintains 600 images with a resolution of 84×84. The classes are split as 64/16/20 for training/validation/test, respectively.
§.§ Experimental Settings
To validate the performance of our framework F^2L, we conduct experiments with the following baselines for a fair comparison:
* Local. This baseline is non-distributed, which means we train an individual model for each client on the local data. The meta-test process is conducted on all meta-test tasks, and the averaged results of all models are reported.
* FL-MAML. This baseline leverages the MAML <cit.> strategy to perform meta-learning on each client. The updated model parameters will be sent back to the server for aggregation.
* FL-Proto. This baseline uses ProtoNet <cit.> as the model in each client. The classification is based on the Euclidean distances between query samples and support samples.
* FedFSL <cit.>. This method combines MAML and an adversarial learning strategy <cit.> to construct a consistent feature space. The aggregation is based on FedAvg <cit.>.
During meta-training, we perform updates for the client-model and the server-model according to Algorithm <ref>. Finally, the server-model that achieves the best result on validation will be used for meta-test. Then during meta-test, we evaluate the server-model on a series of 100 randomly sampled meta-test tasks from local novel datasets {𝒟_n^(i)}_i=1^I in all clients. For consistency, the class split of 𝒞_b and 𝒞_n is identical for all baseline methods. The classification accuracy over these meta-test tasks will be averaged as the final results. The specific parameter settings are provided in Appendix <ref>.
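For completeness, the episodic sampling of N-way K-shot tasks described above can be implemented along the following lines (an illustrative helper of ours, not the authors' script):

import numpy as np

def sample_meta_task(labels, n_way=5, k_shot=1, q_query=5, seed=None):
    # Sample an N-way K-shot meta-task; returns (support_indices, query_indices).
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.where(labels == c)[0])[: k_shot + q_query]
        support.extend(idx[:k_shot].tolist())
        query.extend(idx[k_shot:].tolist())
    return support, query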
For the specific choices for the encoder and classifier in server-model and client-model (i.e., q_ϕ, f_ϕ, q_ψ, and f_ψ) and model parameters, we provide further details in Appendix <ref>. Note that for a fair comparison, we utilize the same encoder for all methods.
§.§ Overall Evaluation Results
We present the overall performance comparison of our framework and baselines on federated few-shot learning in Table <ref>. Specifically,
we conduct experiments under two few-shot settings: 5-way 1-shot and 5-way 5-shot. Moreover, to demonstrate the robustness of our framework under different data distributions, we partition the data in both IID and non-IID settings. For the IID partition, the samples of each class are uniformly distributed to all clients. For non-IID partition, we follow the prevailing strategy <cit.> and distribute samples to all clients based on the Dirichlet distribution with its concentration parameter set as 1.0. The evaluation metric is the average classification accuracy over ten repetitions. From the overall results, we can obtain the following observations:
* Our framework F^2L outperforms all other baselines on various news article and image datasets under different few-shot settings (1-shot and 5-shot) and data distributions (IID and non-IID). The results validate the effectiveness of our framework on federated few-shot learning.
* Conventional few-shot methods such as Prototypical Network <cit.> and MAML <cit.> exhibit similar performance compared with the Local baseline. The result demonstrates that directly applying few-shot methods to federated learning brings less competitive improvements over local training. This is because such methods are not proposed for federated learning and thus lead to unsatisfactory training performance under the federated setting.
* The performance of all methods degrades to different extents when the data distribution is changed from IID to non-IID. The main reason is that the skewed class proportions in each client result in a more complex class distribution and make the classification task more difficult. Nevertheless, by effectively transferring the meta-knowledge among clients, our framework is capable of alleviating such a problem under the non-IID scenario.
* When increasing the value of K (i.e., more support samples in each class), all methods achieve considerable performance gains. In particular, our framework F^2L obtains better results compared to other baselines, due to our decoupled meta-learning framework, which promotes the learning of meta-knowledge in the support samples.
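As a side note on the experimental setup, the Dirichlet-based non-IID partition described above is commonly implemented as follows; this is our own illustrative version, not necessarily the exact script used in the paper.

import numpy as np

def dirichlet_partition(labels, num_clients, alpha=1.0, seed=0):
    # Split sample indices across clients using per-class Dirichlet proportions (concentration alpha).
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)   # split points for this class
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices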
§.§ Ablation Study
In this part, we conduct an ablation study on FC100 and Huffpost to validate the effectiveness of three crucial designs in F^2L (similar results observed in other datasets). First, we remove the decoupled strategy so that the client-model will also be sent to the server for aggregation.
We refer to this variant as F^2L\M. Second, we remove the local-to-global knowledge transfer module so that the meta-knowledge in the client-model can no longer be explicitly transferred to the server-model. This variant is referred to as F^2L\T. Third, we eliminate the global-to-local knowledge distillation loss. In this way, the client-model cannot leverage the global knowledge in the server-model for learning meta-knowledge. We refer to this variant as F^2L\A. The overall ablation study results are presented in Fig. <ref>.
From the results, we observe that F^2L outperforms all variants, which verifies the effectiveness of the three designs in F^2L. Specifically, removing the design of local-to-global knowledge transfer leads to significant performance degradation. This result demonstrates that such a design can effectively aggregate learned meta-knowledge among clients and thus bring performance improvements.
More significantly, without our decoupled strategy, the performance deteriorates rapidly when federated few-shot learning is conducted in the non-IID scenario. This phenomenon verifies the importance of mitigating the disruption from the server in the presence of complex data distributions among clients.
§.§ Parameter Sensitivity Study
§.§.§ Effect of λ_MI and λ_KD
In this section, we further conduct experiments to study the sensitivity of several parameters in our framework F^2L. During the process of transferring and achieving meta-knowledge, we introduce two novel losses ℒ_MI and ℒ_KD, respectively, along with the traditional cross-entropy loss.
To empirically evaluate the impact of different values of λ_MI and λ_KD in Eq. (<ref>) and Eq. (<ref>), we adjust λ_MI and λ_KD from 0 to 1 and present the results in Fig. <ref>. From the results, we can observe that the performance generally increases with a larger value of λ_MI, but decreases as λ_MI approaches 1. The results indicate the importance of transferring learned local meta-knowledge, while also demonstrating that the cross-entropy loss is necessary. On the other hand, the performance first increases and then degrades as λ_KD becomes larger. That is, although partial knowledge distillation enables each client to benefit from the global data, a larger λ_KD can introduce more irrelevant information when learning local meta-knowledge.
§.§.§ Effect of Client Number
In this section, we study the robustness of our framework under the scenario with a varying number of clients. In particular, we keep the total training data unchanged, which means with more clients participating in the training process, each client preserves fewer training samples. As a result, the training performance will be inevitably reduced.
Specifically, we partition the total training data into I=1,2,5,10,20, and 50 clients. Note that I=1 denotes the setting of completely centralized training. The results on FC100 with 1-shot and 5-shot settings are presented in Fig <ref> (we have similar results for other datasets and omit them for brevity). From the results, we can observe that all methods encounter a performance drop in the presence of more clients. Nevertheless, our framework F^2L can reduce the adverse impact brought by more clients through effectively leveraging the global knowledge learned from all clients. In consequence, the performance degradation is less significant for F^2L.
§ RELATED WORK
§.§ Few-shot Learning
The objective of Few-shot Learning (FSL) is to learn transferable meta-knowledge from tasks with abundant information and generalize such knowledge to novel tasks that consist of only limited labeled samples <cit.>. Existing few-shot learning works can be divided into two categories: metric-based methods and optimization-based methods. The metric-based methods aim to learn generalizable metric functions that classify query samples by matching them with support samples <cit.>.
For instance, Prototypical Networks <cit.> learn a prototype representation for each class and conduct predictions based on the Euclidean distances between query samples and the prototypes.
Relation Networks <cit.> learn relation scores for classification in a non-linear manner.
On the other hand, optimization-based approaches generally optimize model parameters based on the gradients calculated from few-shot samples <cit.>. As an example, MAML <cit.> proposes to optimize model parameters based on gradients on support samples to achieve fast generalization. In addition, LSTM-based meta-learner <cit.> adjusts the step size to adaptively update parameters during meta-training.
§.§ Federated Learning
Federated Learning (FL) enables multiple clients to collaboratively train a model without exchanging the local data explicitly <cit.>.
As a classic example, FedAvg <cit.> performs stochastic gradient descent (SGD) on each client to update model parameters and send them to the server. The server averages the received model parameters to achieve a global model for the next round. FedProx <cit.> incorporates a proximal term into the local update of each client to reduce the distance between the global model and the local model. To deal with the non-IID problem in FL, recent works also focus on personalization in FL <cit.>. For instance, FedMeta <cit.> incorporates MAML <cit.> into the local update process in each client for personalization. FedRep <cit.> learns shared representations among clients. Moreover,
FedFSL <cit.> proposes to combine MAML and an adversarial learning strategy <cit.> to learn a consistent feature space.
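For readers unfamiliar with the aggregation step referred to above, FedAvg-style parameter averaging can be summarized by the following sketch (ours; in practice the weights are typically proportional to client data sizes):

import torch

def fedavg(state_dicts, weights=None):
    # Weighted average of client model state_dicts; integer buffers are cast to float for simplicity.
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    averaged = {}
    for key in state_dicts[0]:
        averaged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return averaged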
§ CONCLUSION
In this paper, we study the problem of federated few-shot learning, which
aims at learning a federated model that can achieve satisfactory performance on new tasks with limited labeled samples.
Nevertheless, it remains difficult to perform federated few-shot learning due to two challenges: global data variance and local data insufficiency.
To tackle these challenges, we propose a novel federated few-shot learning framework F^2L. In particular, we handle global data variance by decoupling the learning of local meta-knowledge. Then we leverage the global knowledge that is learned from all clients to tackle the local data insufficiency issue. We conduct extensive experiments on four prevalent few-shot learning datasets under the federated setting, covering both news articles and images. The experimental results further validate the superiority of our framework F^2L over other state-of-the-art baselines.
§ ACKNOWLEDGEMENTS
The work in this paper is supported by the National Science Foundation under grants (IIS-2006844, IIS-2144209, IIS-2223769, CNS-2154962, and BCS-2228534), the Commonwealth Cyber Initiative awards (VV-1Q23- 007 and HV-2Q23-003), the JP Morgan Chase Faculty Research Award, the Cisco Faculty Research Award, the Jefferson Lab subcontract 23-D0163, and the UVA 4-VA collaborative research grant.
§ NOTATIONS
In this section, we provide details for the used notations in this paper and their corresponding descriptions.
§ ALGORITHM
We provide the detailed training process of our framework F^2L in Algorithm <ref>.
§ REPRODUCIBILITY
§.§ Model Details
In this section, we introduce the specific choices for the encoders and classifiers in both server-model and client-model (i.e., q_ϕ, f_ϕ, q_ψ, and f_ψ).
§.§.§ Server-model Encoder q_ϕ
For the server-model encoder, we adopt different models for news article datasets and image datasets. In particular, for news article datasets 20 Newsgroup and Huffpost, we leverage a biLSTM <cit.> with 50 units as the server-model encoder. For the image datasets FC100 and miniImageNet, following <cit.>, we utilize a ResNet12 as the server-model encoder. Similar to <cit.>, the Dropblock is used as a regularizer. The number of filters is set as (64, 160, 320, 640).
§.§.§ Client-model Encoder q_ψ
Considering that the client-model is required to process the entire support set in a meta-task for
learning local meta-knowledge, we propose to further utilize a set-invariant function that takes a set of samples as input while capturing the correlations among these samples. In practice, we leverage the Transformer <cit.> as the client-model encoder q_ψ to process the entire support set:
(𝐡^1_ψ,𝐡^2_ψ,…,𝐡^D_ψ)=Transformer(𝐡^1_ϕ,𝐡^2_ϕ,…,𝐡^D_ϕ),
where 𝐡^i_ϕ (or 𝐡^i_ψ) denotes the representation of the i-th sample in 𝒮 learned by the server-model encoder q_ϕ (or the client-model encoder q_ψ). With the Transformer, the representations learned by the client-model can effectively capture the correlations among samples in the entire support set 𝒮 for learning meta-knowledge.
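One way to realize such a set-level encoder with a standard Transformer encoder is sketched below; the hyper-parameters are placeholders rather than the values used in the paper.

import torch
import torch.nn as nn

class ClientSetEncoder(nn.Module):
    # Transformer over the whole support set, treating the D samples as one sequence (sketch).
    def __init__(self, dim, num_heads=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, h_server):
        # h_server: (B, D, dim) server-model representations of the D support samples.
        return self.encoder(h_server)   # (B, D, dim) client-model representations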
§.§.§ Server-model Classifier f_ϕ and Client-model Classifier f_ψ
The classifiers f_ϕ and f_ψ are both implemented as a fully-connected layer, where the output size is |𝒞_b| for f_ϕ and N for f_ψ, as described in Sec. <ref>.
§.§ Baseline Settings
In this section, we provide further details in the implementation of baselines in our experiments.
* Local. For this baseline, an individual model is trained for each client over the local data. Specifically, we use the same architecture of encoders in our framework to learn sample representations.
* FL-MAML. For this baseline, we leverage the MAML <cit.> strategy and set the meta-learning rate as 0.001 and the fine-tuning rate as 0.01. The encoders are the same as our framework.
* FL-Proto. For this baseline, we follow the setting in ProtoNet <cit.> with the same encoders in our framework. The learning rate is set as 0.001.
* FedFSL <cit.>. For this baseline, which combines MAML and an adversarial learning strategy <cit.>, we follow the settings in the public code and set the learning rate as 0.001. The adaptation step size is set as 0.01.
§.§ Parameter Settings
For our framework F^2L, we set the number of clients as 10. The number of training steps τ in each client is set as 10, and the number of training rounds T is set as 200. Moreover, the meta-learning rates α_ψ and α_ϕ are both set as 0.001 with a dropout rate of 0.1. The fine-tuning learning rate α_ft is set as 0.01. We leverage the Adam <cit.> optimization strategy with the weight decay rate set as 10^-4. During the meta-test, we randomly sample 100 meta-test tasks from novel classes 𝒞_n with a query set size |𝒬| of 5. In order to preserve consistency for fair comparisons, we keep identical meta-test tasks for all baselines. The loss weights λ_MI and λ_KD are both set as 0.5. The default value of I is set as 10.
|
http://arxiv.org/abs/2306.05314v2
|
20230608160451
|
Mode-locked laser in nanophotonic lithium niobate
|
[
"Qiushi Guo",
"Ryoto Sekine",
"James A. Williams",
"Benjamin K. Gutierrez",
"Robert M. Gray",
"Luis Ledezma",
"Luis Costa",
"Arkadev Roy",
"Selina Zhou",
"Mingchen Liu",
"Alireza Marandi"
] |
physics.optics
|
[
"physics.optics"
] |
Mode-locked lasers (MLLs) have enabled ultrafast sciences and technologies by generating ultrashort pulses with peak powers substantially exceeding their average powers. Recently, tremendous efforts have been focused on realizing integrated MLLs not only to address the challenges associated with their size and power demand, but also to enable transforming the ultrafast technologies into nanophotonic chips, and ultimately to unlock their potential for a plethora of applications. However, till now the prospect of integrated MLLs
driving ultrafast nanophotonic circuits has remained elusive because of their typically low peak powers, lack of controllability, and challenges with integration with appropriate nanophotonic platforms. Here, we overcome these limitations by demonstrating an electrically-pumped actively MLL in nanophotonic lithium niobate based on its hybrid integration with a III-V semiconductor optical amplifier. Our MLL generates ∼4.8 ps optical pulses around 1065 nm at a repetition rate of ∼10 GHz, with pulse energy exceeding 2.6 pJ and a high peak power beyond 0.5 W. We show that both the repetition rate and the carrier-envelope-offset of the resulting frequency comb can be flexibly controlled in a wide range using the RF driving frequency and the pump current, paving the way for fully-stabilized on-chip frequency combs in nanophotonics. Our work marks an important step toward fully-integrated nonlinear and ultrafast photonic systems in nanophotonic lithium niobate.
Mode-locked laser in nanophotonic lithium niobate
Qiushi Guo^1,2,3†, Ryoto Sekine^1, James A. Williams^1, Benjamin K. Gutierrez^4, Robert M. Gray^1, Luis Ledezma^1, 5, Luis Costa^1, Arkadev Roy^1, Selina Zhou^1, Mingchen Liu^1, Alireza Marandi^1†
^1Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, USA
^2Photonics Initiative, Advanced Science Research Center, City University of New York, NY, USA
^3Physics Program, Graduate Center, City University of New York, New York, NY, USA
^4Department of Applied Physics, California Institute of Technology, Pasadena, CA, USA
^5Jet Propulsion Laboratory, Pasadena, CA, USA
^†Email: mailto:[email protected]@gc.cuny.edu; mailto:[email protected]@caltech.edu
July 31, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Mode-locked lasers (MLLs), which generate intense and coherent ultrashort optical pulses on picosecond and femtosecond timescales, have enabled numerous sciences and technologies in photonics such as extreme nonlinear optics<cit.>, femtochemistry<cit.>, supercontinuum generation<cit.>, optical atomic clocks <cit.>, optical frequency combs<cit.>, biological imaging<cit.>, and photonic computing<cit.>. Today’s state-of-the-art MLLs are based on discrete fiber-based and free-space optical components and are expensive, power-demanding, and bulky. Realizing MLLs on integrated photonic platforms promises widespread utilization of ultrafast photonic systems which are currently limited to table-top laboratory experiments. However, so far the performance of integrated MLLs has not been on par with their table-top counterparts, lacking the required peak intensities and degrees of controllability required for on-chip ultrafast optical systems<cit.>, and many of the high-performance MLLs are not yet integratable with nanophotonic platforms. A major challenge lies in the simultaneous realization of large laser gain and an efficient mode-locking mechanism on integrated photonic platforms. Although III-V semiconductor gain media can be electrically pumped and they generally exhibit a very high gain per unit length and high saturation powers<cit.>, the conventional method of achieving mode-locking and short pulse generation on the same semiconductor chip requires a narrow range of pumping current, thus significantly limiting the output power and the tunability of the integrated MLLs<cit.>.
To realize high-peak-power integrated MLLs, a promising approach consists of the hybrid integration of a semiconductor gain medium and an external mode-locking element based on electro-optic (EO) or nonlinear optical effects. Recently, thin-film lithium niobate (TFLN) has emerged as a promising integrated nonlinear photonic platform with access to power-efficient and high-speed EO modulation<cit.> and strong quadratic (χ^(2)) optical nonlinearity<cit.>. Hybrid integration of semiconductor gain with TFLN enables a strong interplay between the laser gain and the EO or nonlinear effects to achieve active or passive mode-locking with high efficiency and tunability. Moreover, many of the nonlinear and ultrafast optical functionalities such as supercontinuum generation<cit.>, optical parametric oscillation<cit.>, pulse shortening<cit.>, all-optical switching<cit.>, and quantum squeezing<cit.> can be realized in quasi-phase-matched LN nanophotonic devices with orders of magnitude lower peak powers compared to other platforms. Therefore, developing high-peak-power MLLs integrated into nanophotonic LN can enable a suite of nonlinear and ultrafast optical phenomena on a chip, promising integrated photonic systems with unprecedented performance and functionalities.
In this work, we demonstrate a high-peak-power, electrically-pumped integrated actively MLL by hybrid integration of III-V semiconductors and LN nanophotonics. In contrast to conventional integrated MLLs based on hybrid integration of a III-V active region and a passive waveguide<cit.>, our MLL synergistically exploits the high laser gain of III-V semiconductors and the efficient active optical phase modulation in LN nanophotonic waveguides as the mode-locking mechanism. Such a design eliminates the complexities associated with realizing gain and saturable absorption on the same semiconductor chip, allowing a much higher output power and a wider tunability of the laser. Under an external RF drive of less than 300 mW, our MLL generates ultrashort optical pulses around 1065 nm with a pulse duration of approximately 5 ps, pulse energy greater than 5 pJ, and a peak power greater than 0.5 W. This represents the highest reported peak power at a repetition rate of ∼10 GHz for integrated MLLs in nanophotonics. The MLL can operate over a broad range of electrical pumping currents and RF driving frequencies, and provide precise control of the carrier frequency and repetition rate of the resulting frequency comb, which can lead to fully-stabilized comb sources. In contrast to other recent demonstrations of ultrashort pulse sources on the TFLN platform such as Kerr soliton micro-combs<cit.> and electro-optic (EO) combs<cit.>, our MLL provides significantly higher on-chip peak power and pulse energies, and electrical-to-short-pulse efficiencies. Moreover, our MLL offers great system simplicity by eliminating the need for wavelength-tunable pump lasers, external optical amplification stages, complex cavity locking schemes, and/or pulse compression elements with high optical loss. The simplicity of our MLL design, combined with its high peak power, few-picosecond pulses, and controllable frequency comb parameters offers a practical path for fully-integrated nonlinear and ultrafast photonic systems in LN nanophotonics.
Operating principle and design of the MLL: Figure 1a shows the concept of active mode-locking by electro-optic phase modulation inside a laser cavity.
In the time domain, when a phase modulator (PM) is driven by a sinusoidal RF signal at a frequency f_m, the intra-cavity phase modulation is equivalent to the cavity length modulation. Therefore, the laser cavity can be considered as having a moving end mirror with a sinusoidal motion at frequency f_m. When an optical signal inside the cavity strikes this moving end mirror and gets reflected back, its optical frequency acquires a Doppler shift. After successive round trips, these Doppler shifts will accumulate, resulting in no steady-state solution. However, when a short circulating pulse strikes the end mirror at either of the “turning points” where the mirror reverses its direction (the extremum of the phase variation as shown in Fig. 1b), it will not acquire a Doppler frequency shift but instead a small quadratic phase modulation or chirp<cit.>. Thus, an optical pulse can be maintained in the laser cavity after successive round trips as a steady-state solution. The characteristics of the pulse depend on the gain, loss, and dispersion in the cavity as well as the chirp from the PM. While in principle optical pulses can occur at either of the two phase modulation extrema and acquire chirp of different signs, the dispersion in the cavity can compensate for the chirp imposed by the PM at one extremum, and further chirp the pulse formed at another extremum. As a result, the combination of dispersion, gain, and nonlinear effect in the cavity can favor one pulse over the other, leading to only one pulse in the cavity<cit.>. Such a mode-locking condition necessitates a good match between the phase modulation time period and the cavity round-trip time (f_m should be close to the cavity free spectral range (FSR)). The mode-locking mechanism can also be understood in the frequency domain. As shown in Fig. 1c, if the intra-cavity phase modulation frequency f_m matches the cavity FSR, the sidebands produced by each of the running axial modes are injected into the adjacent axial modes, resulting in the phase locking of adjacent modes. While the mutual injection of spectral modes within the cavity bears similarity to EO comb sources, it is important to note that in MLLs, these modes will lase due to the presence of laser gain within the cavity, whereas in EO comb sources, they are generated by dispersing the energy from a single pump laser line<cit.>. This distinction gives rise to stark differences in their operations.
Based on this principle, we design the integrated actively MLL as shown in Figure 1c. In our MLL, an electrically pumped gain section based on a single-angled facet GaAs gain chip (SAF gain chip) is butt-coupled to a TFLN chip, which contains an integrated EO phase modulator (PM) and a broadband loop mirror. A Fabry-Perot laser cavity configuration is formed between the reflective facet on the left end of the SAF gain chip and the broadband loop mirror on the TFLN chip. Here, an integrated PM is preferred over a Mach Zehnder interferometer (MZI)-based intensity modulator (IM) because the PM offers a lower insertion loss and avoids effects from the DC bias drift of the MZI modulator<cit.>. Thus, our MLL principle is distinct from that of an actively MLL based on an IM, in which the mode-locking is enabled by loss modulation<cit.>. It is worth noting that semiconductor gain medium typically has a short carrier relaxation time (gain recovery time (T_G)) on the order of ns<cit.>. To ensure the mode-locking and the formation of ultrashort optical pulses, T_G has to exceed the cavity round-trip time (T_RT) of pulses by a large amount (T_G≫ T_RT)<cit.>. In our design, by controlling the length of the TFLN waveguide, we realized this condition by having a cavity FSR of ∼ 10 GHz, which translates to a cavity round-trip time of ∼ 100 ps.
We fabricated our devices on a 700-nm-thick X-cut magnesium-oxide (MgO) doped TFLN on a SiO_2/Silicon (4.7 μm/500 μm) substrate (NANOLN). The details about the device fabrication can be found in the Methods. As shown in Fig. 2a, in the PM region, the RF electrodes are fabricated on top of the SiO_2 cladding layer. Such a design allows us to achieve high modulation efficiency (simulated value of 1.1 V· cm) by having a small gap (4 μm here) between the ground and signal electrodes and a significant overlap between the RF field and the optical field in the waveguide<cit.>. It also ensures a low optical propagation loss by offering a high tolerance to misalignment between the electrodes and the optical waveguide<cit.>. We designed the geometry of the RF electrode to ensure a 50 Ω impedance around 10 GHz. Figures 2b, c, and d show the scanning electron microscope (SEM) images of the PM and the loop mirror regions of the fabricated device. In the loop mirror, we adopted a curved coupling region design<cit.> which increases the reflective bandwidth. The detailed design of the broadband loop mirror is described in the Supplementary Information Section II. Based on the length (1.5 mm) and the refractive index of the SAF gain chip around 1065 nm, we estimate that a ∼3-mm-long TFLN waveguide can lead to a laser cavity FSR of ∼10 GHz.
Figure 2e shows the 1065-nm fundamental TE mode profile in the waveguide of the SAF gain chip, which is calculated from the divergence angle of its emission. To minimize the coupling loss between the SAF gain chip and the TFLN chip, the top width of the input facet of the TFLN waveguide is tapered out to be 10.3 μm. The 1065-nm fundamental TE mode profile in the tapered TFLN waveguide is shown in Fig. 2f. This design ensures a maximal overlap with the optical mode produced by the SAF gain chip and yields a minimal coupling loss of ∼1.5 dB. In Section I of the Supplementary Information, we discuss the dependence of coupling loss on the lateral misalignment and gap between the SAF gain chip and TFLN waveguides. The coupling loss can be further reduced by employing a polymer-based mode-size converter<cit.>. Fig. 2g shows the microscope image of the coupling region after the alignment, in which the gap between the two chips is minimized. When the SAF gain chip is electrically pumped with a driving current (I_drive) of 160 mA, we observe green light (the second harmonic of the 1065 nm light) inside the laser cavity (Fig. 1h), which indicates a high intra-cavity power around 1065 nm and a good alignment between the two chips.
Characterization of the MLL: We characterized the integrated actively MLL using an optical setup shown in Fig. 3a. We applied a ∼ 280 mW sinusoidal RF signal to the left end of the traveling wave electrodes (TWE) of the PM by the RF probe. The right end of the TWE is terminated by another RF probe with a 50 Ω load resistor mounted on it. To investigate the operating regimes of the MLL, we simultaneously collect the laser output spectra, the intensity autocorrelation of the laser output in the time domain, and the heterodyne beat notes between two neighboring laser emission lines and a narrow-linewidth (∼10 kHz) reference CW tunable laser (CTL, Toptica). In order to get intensity autocorrelation with a good signal-to-noise ratio, we pre-amplified the laser output power by a Ytterbium-doped fiber amplifier (YDFA). We also used a pulse shaper (Waveshaper 1000A, II-VI) to compensate for the group velocity dispersion (GVD) imposed by the phase modulator, YDFA, and the single-mode fiber. In the measurement, the gain chip is electrically pumped with an I_drive of 185.2 mA.
As shown in Fig. 3b, when we scan f_m over a wide range, the laser output exhibits a clear spectral broadening when f_m is between 10.1 and 10.4 GHz (labeled by the white dashed box). Meanwhile, within this f_m range, two distinct intensity autocorrelation peaks separated by ∼ 98 ps emerge (Fig. 3c), which indicates that optical pulses are formed in this f_m regime. At an f_m of 10.17 GHz, we measured the laser output power from the output facet of the TFLN chip with a single-mode lensed fiber. As shown in Fig. 3d, the laser exhibits a very low threshold I_drive of 22 mA. Given the measured coupling loss of ∼11 dB between the TFLN waveguide and the single-mode lensed fiber, the on-chip laser output average power is more than 50 mW when the I_drive is greater than 180 mA.
We further use the heterodyne beat notes to characterize the mode-locking and the resulting frequency comb. As illustrated in Fig. 4a, when the frequency of the reference CTL is resting in between the two neighboring comb lines of the MLL near the center of its spectrum, two RF beat notes at f_1 and f_2 are generated on the fast detector. Figure 4b shows the evolution of heterodyne beat notes as a function of f_m. When f_m is between 10.165 and 10.173 GHz as labeled by the white dashed box, two spectrally narrow beat notes are observed. This suggests that within this range of f_m, the laser is operating in the mode-locked regime so that neighboring axial modes of the laser are phase-locked and it produces a frequency comb with narrow spectral peaks. In the time domain, this means a good synchronization between the round-trip of the pulse in the cavity and the phase modulation has been achieved, and the laser produces ultrashort optical pulses with high coherence.
We also want to comment on some of the important behaviors of the resulting frequency comb when f_m is detuned from the cavity FSR. First, when the detuning is small (10.165 GHz<f_m<10.173 GHz), we can still get two narrow beat notes, but f_1 and f_2 can shift significantly with f_m, as shown in Fig. 4b. This indicates that the carrier frequency of the MLL sensitively depends on f_m. Second, when the f_m is further detuned from the cavity FSR, the MLL exhibits a transition to a turbulent regime<cit.>, which is manifested by multiple noisy beat notes around f_1 and f_2 in Fig. 4b. In the turbulent regime, the MLL can no longer reach steady state. In this regime, the laser can still emit ultrashort pulses as shown in Fig. 3c, albeit with low coherence.
As shown in Fig. 4c, at f_m=10.17 GHz, we obtained two spectrally narrow RF beat notes at f_1=3.68 GHz and f_2=6.49 GHz, with a full width at half maximum (FWHM) linewidth of 3.95 MHz and 3.91 MHz, respectively. Given that the RF drive has a very small phase noise and no active locking of the laser cavity is used here, the linewidths of the heterodyne beat notes can be mainly limited by the drift of pulse carrier frequency. As shown in Fig. 4d, when a 280 mW RF drive at 10.17 GHz is applied to the PM, significant spectral broadening is observed. The pulse spectrum is centered at 1064.9 nm and the FWHM of the spectrum is 0.35 nm. Meanwhile, we also collected the intensity autocorrelation of the MLL output at f_m=10.17 GHz, as shown in Fig. 4e. The autocorrelation trace indicates that the MLL produces one strong pulse at one of the modulation turning points, while the other pulse is significantly suppressed. The Gaussian fit of the intensity autocorrelation trace yields a pulse width of 4.81 ps (5.03 ps) with (without) the external pulse shaping. Since the pulse shaper can compensate for the chirp on the output pulse and the additional chirp imposed by the SMF and the YDFA, we expect the output pulse width directly after the MLL facet to be between 4.81 ps and 5.03 ps. The pulse width of 4.81 ps after pulse shaping corresponds to a time-bandwidth product of 0.445, which is very close to the transform-limited time-bandwidth product (0.44) of a Gaussian pulse<cit.>. To conservatively estimate the pulse energy and peak power, we use the measured output average power of 53 mW at I_drive=185.2 mA and assume both pulses exist in the cavity. Hence, the output pulse energy of our MLL is at least 2.6 pJ and the pulse peak power is greater than 0.51 W.
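As a quick consistency check of the quoted time-bandwidth product (our own back-of-the-envelope arithmetic using the reported spectral width, center wavelength, and pulse duration):
Δν = c·Δλ/λ_0^2 ≈ (3×10^8 m/s)(0.35 nm)/(1064.9 nm)^2 ≈ 92.6 GHz,  and  Δν·Δt ≈ (92.6 GHz)(4.81 ps) ≈ 0.445.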
We further studied the limits of the output pulse width of our MLL. First, we measured how the pulse width changes with the RF power (P_RF) applied to the PM. As shown in Fig. 4f, the measured pulse only slightly decreases with increasing P_RF, which is in good agreement with the power scaling law according to the Haus Master Equation (HME)<cit.>. We also found that further increasing the RF power will not shorten the pulse significantly. Instead, it can lead to laser instability due to index modulation of the TFLN waveguide caused by RF heating. By using the HME and ignoring the GVD and nonlinear effects in the laser cavity, we estimate that the pulse width limit of our actively MLL is ∼ 2.3 ps (see Supplementary Information Section IV for details). The experimentally measured pulse width is wider likely due to several factors, including cavity GVD, and the self-phase modulation and nonlinear chirp of the pulses imposed by the dynamical refractive index variation of the III-V gain medium associated with gain depletion and partial gain recovery<cit.>.
Current tuning of the MLL: The electrical pumping current of the III-V gain chip (I_drive) can serve as an important tuning knob of our MLL. Since I_drive can alter the gain spectrum and the refractive index of the gain medium, it can in turn vary the carrier frequency, the coherence property, and the repetition rate (f_rep) of the MLL, and potentially lead to locking of the carrier frequency by applying active feedback on I_drive. Figure 5 a and b show the dependence of the output spectra and autocorrelation of the MLL on the I_drive with 280 mW RF drive fixed at 10.17 GHz. It is evident that within a wide range of I_drive (140 - 205 mA), optical pulses can be formed inside the laser. In addition, it can be seen in Fig. 5a that the carrier frequency of the MLL blueshifts by ∼0.3 nm as the I_drive is increased from 140 mA to 200 mA. This blueshift, which has also been observed in other reports<cit.>, is likely caused by the blueshift of the peak wavelength of the gain spectrum due to band filling and screening effects induced by carrier injection<cit.>.
We then investigated the effect of I_drive on the coherence property and the f_rep of the laser. We kept the RF drive fixed at 280 mW and 10.18 GHz, and monitored the change in heterodyne beat notes f_1 and f_2 as we slightly varied I_drive. The measurement results are summarized in Fig. 5c. As the I_drive is tuned from 189.6 mA to 194.6 mA, the laser transitions from the turbulent regime to the mode-locked regime, and then back to the turbulent regime. These results suggest that, with a frequency-stable reference CW laser and active feedback on I_drive, it may be possible to lock the carrier frequency of the MLL and operate the device as a stable frequency comb, as the f_rep of the MLL has already been locked by the external RF oscillator. As shown in Fig. 5d, when we widely vary I_drive from 144 to 204 mA, the optimum f_m that enables mode-locking with high coherence can be varied from 10.04 GHz to 10.23 GHz, indicating the repetition rate of the laser can also be adjusted by ∼200 MHz by the I_drive. Moreover, the optimum f_m increases almost linearly with I_drive, which results from an increase of the cavity FSR caused by carrier injection in the gain medium.
Conclusion and outlook: In summary, we have demonstrated an integrated actively MLL in nanophotonic LN operating around 1065 nm, which can generate ∼5 ps ultrashort optical pulses. We estimate that the MLL produces an output pulse energy ∼2.6 pJ and a high peak power greater than 0.5 W, representing the highest pulse energy and peak power of any integrated MLLs in nanophotonic platforms. In contrast to conventional integrated MLLs that integrate both the gain and mode-locking elements on the same III-V chip, our MLL design decouples these elements, resulting in a significantly wider current tuning range and reconfigurability. This, in turn, allows for a wide tuning range of the laser f_rep of ∼200 MHz and precise control of the laser's coherence properties.
Looking forward, the current tuning capability of our MLL indicates that by using a reference and implementing active feedback to the I_drive, we can achieve simultaneous locking of the carrier frequency and f_rep of the MLL. This allows the MLL to operate as a stable frequency comb with locked carrier frequency offset (f_CEO) and f_rep. In addition, comprehensive theoretical modeling of the laser dynamics and the identification of the single-pulse operating regime of the laser are of great importance for obtaining even higher peak powers and shorter pulses. We envision that the semiconductor gain and LN nanophotonic mode-locking elements can be fully integrated into the same chip and a better optical coupling between the two platforms can be achieved by adopting an advanced flip-chip bonding process<cit.> or heterogeneous integration process<cit.>. Furthermore, seamless integration of our high peak power MLL with other χ^(2) nonlinear optical functionalities provided by quasi-phase-matched TFLN nanophotonic devices offer exciting opportunities for the development of photonic systems that have yet to be realized in nanophotonics, such as fully integrated supercontinuum sources, self-referenced frequency combs, visible/ultraviolet femtosecond lasers, and atomic clocks.
§ METHODS
Device fabrication. We fabricated the integrated waveguides, phase modulators, and broadband loop mirrors on a 700-nm-thick X-cut MgO-doped LN thin-film on 4.7-μm-thick SiO_2 on top of a silicon substrate (NANOLN). We first patterned the waveguides using e-beam lithography by employing Hydrogen Silsesquioxane (HSQ) as the e-beam resist. The designed top width of the LN waveguide is 800 nm. The LN layer was etched by 350 nm using Ar^+ plasma. This etching process yields a waveguide sidewall angle of ∼60^∘. Next, we deposited an 800 nm SiO_2 cladding layer using plasma-enhanced chemical vapor deposition (PECVD). Another e-beam lithography step was used to pattern the RF metal electrodes on top of the cladding layer, in which PMMA was used as the e-beam resist. Then, we deposited Cr/Au (10 nm/300 nm) using e-beam evaporation. Metal electrodes are formed after metal lift-off in acetone. Finally, the waveguide facets were mechanically polished to enable good light coupling efficiencies.
Optical measurements. For butt coupling the SAF gain chip and the TFLN chip together, the SAF gain chip was placed on a 6-axis nano-positioning stage (Thorlabs) and the TFLN chip was clamped on a fixed sample stage. The two chips could be visually aligned by using a microscope from above. After visual alignment, the alignment was further optimized by maximizing the output power measured by a power meter, which is related to the intra-cavity optical power of the laser. The output power of the MLLs is probed by a single-mode lensed fiber connected to an optical power meter (Thorlabs). The RF drive is provided by an RF signal generator (Rohde & Schwarz SMA100B) and is subsequently amplified by a high-power RF amplifier (Mini-Circuits ZVE-3W-183+). The input RF power is calibrated by an RF power meter (Ladybug). For the results in Fig. 3-5, the laser output spectra were collected by an optical spectrum analyzer (OSA) covering 600-1700 nm (Yokogawa AQ6370B) with a 0.01 nm resolution bandwidth. The RF spectra were collected by an electronic spectrum analyzer (Rohde & Schwarz FSW) with a 100 Hz resolution bandwidth.
Numerical simulations. The optical and RF field distributions shown in Fig. 1a were simulated by COMSOL Multiphysics. We also used commercial software (Lumerical Inc.) to solve for the waveguide modes in order to design the waveguide taper and obtain the dispersion characteristics of the waveguide. In the simulation, the anisotropic index of the LN was modeled by the Sellmeier equations<cit.>.
§ DATA AVAILABILITY
The data that support the plots within this paper and other findings of this study
are available from the corresponding author upon reasonable request.
§ CODE AVAILABILITY
The computer code used to perform the nonlinear simulations in this paper is available from the corresponding author upon reasonable request.
§ ACKNOWLEDGEMENTS
The device nanofabrication was performed at the Kavli Nanoscience Institute (KNI) at Caltech. The authors thank Prof. K. Vahala for loaning equipment. Q.G. thanks Dr. M. Xu for the helpful discussions. The authors gratefully acknowledge support from ARO grant no. W911NF-23-1-0048, NSF grant no. 1846273 and 1918549, AFOSR award FA9550-20-1-0040, and NASA/JPL. The authors wish to thank NTT Research for their financial and technical support.
§ AUTHORS CONTRIBUTIONS
Q.G. and A.M. conceived the project; Q.G. fabricated the devices with assistance from R.S.. Q.G performed the measurements, numerical simulation, and analyzed the data. R.S., J.W., B.G., R.M.G., L.L., L.C., and S.Z. assisted with the measurements. B.G., A.R., and M. L. helped with the numerical simulation and data analysis. Q.G. wrote the manuscript with inputs from all authors. A.M. supervised the project.
§ COMPETING INTERESTS
Q.G. and A.M. are inventors on a patent application (US patent application no.
17/500,425) that covers the concept and implementation of the actively mode-locked laser here. The remaining authors declare no competing interests.
|
http://arxiv.org/abs/2306.08406v1
|
20230614100333
|
Feature Normalization for Fine-tuning Self-Supervised Models in Speech Enhancement
|
[
"Hejung Yang",
"Hong-Goo Kang"
] |
eess.AS
|
[
"eess.AS",
"cs.LG",
"cs.SD"
] |
Large, pre-trained representation models trained using self-supervised learning have gained popularity in various fields of machine learning because they are able to extract high-quality salient features from input data.
As such, they have been frequently used as base networks for various pattern classification tasks such as speech recognition.
However, not much research has been conducted on applying these types of models to the field of speech signal generation.
In this paper, we investigate the feasibility of using pre-trained speech representation models for a downstream speech enhancement task.
To alleviate mismatches between the input features of the pre-trained model and the target enhancement model, we adopt a novel feature normalization technique to smoothly link these modules together.
Our proposed method enables significant improvements in speech quality compared to baselines when combined with various types of pre-trained speech models.
Index Terms: speech enhancement, self-supervised model, feature normalization
§ INTRODUCTION
Large, pre-trained models trained using self-supervised learning have become popular in various speech processing tasks <cit.>. Due to their ability to leverage representations learned from large amounts of unlabeled data, these models have been successfully applied to a wide variety of downstream tasks such as automatic speech recognition (ASR), speaker verification (SV) <cit.>, and audio scene classification <cit.>.
Depending on the characteristics of their training objectives, these pre-trained models can be categorized into either generative models or contrastive models <cit.>. In generative models, decoders are trained to reconstruct masked frames or future frames; they include models such as APC <cit.>, Mockingjay <cit.>, and TERA <cit.>. Contrastive models are trained by utilizing similarities and differences between latent embeddings. A representative example is wav2vec 2.0 <cit.>, and several variants such as HuBERT <cit.> and WavLM <cit.> have been proposed that adopt various types of regularizers for each target objective. Meanwhile, some models utilize both generative and contrastive objectives, such as PASE+ <cit.>.
Most upstream models adopting self-supervised pre-training for speech applications have focused on solving discriminative tasks <cit.>. Recently, several attempts <cit.> have been made to apply these models for speech enhancement (SE) or speech separation tasks.
When pre-trained speech models are used for SE, it is inevitable that they encounter a domain mismatch problem. This is because most of these pre-trained models are trained on clean data, while the SE task fundamentally requires corrupted or noisy speech inputs (Fig. <ref>). Although this domain mismatch problem can be relieved by pre-training the upstream models with both noisy and clean data <cit.>, this requires a huge amount of data and additional training time, which takes much longer than fine-tuning a downstream model with labeled data.
Another drawback is that this precludes the use of well-trained upstream models that have been made publicly available online.
In this paper, we propose an effective feature normalization technique that facilitates the use of representations from pre-trained speech models on a downstream SE task.
To alleviate the mismatch between the noisy inputs fed into the downstream model in the fine-tuning phase and the initial weights of the upstream body trained on clean speech, we normalize the latent features of the noisy input using the statistics of its clean reference, which can be estimated by feeding the clean target to the frozen upstream model.
By adjusting the degree of normalization during training, the model can smoothly change its input domain from clean to noisy, thereby improving SE performance in the end.
More generally, it can be applied to any type of downstream task that utilizes such pre-trained models.
Our contributions are as follows: 1) With our proposed feature normalization technique applied during training, we show that it is possible to achieve better SE performance when using the representations from pre-trained speech models without introducing any additional parameters or training losses. 2) We can directly utilize these pre-trained models without the need for any additional domain-adaptation to calibrate the pre-trained models or to train them from scratch.
The rest of the paper is organized as follows. After describing related work in Section <ref>, the base downstream models trained on top of different upstream architectures are introduced in Section <ref>. Section <ref> describes the feature normalization algorithm, and results are presented in Sections <ref> and <ref>.
§ RELATED WORK
Previous works have shown that the domain mismatch between the pre-training and fine-tuning phases degrades the performance of the downstream models that utilize pre-trained speech representations <cit.>. Utilizing target-domain data in the pre-training stage can mitigate the degradation <cit.>, but this requires additional training which removes the efficiency we can obtain by using existing pre-trained models.
One algorithm for domain adaptation is Domain Adversarial Training (DAT), where the domain classifier is trained adversarially so that the representation cannot be identified by the domain context <cit.>. DAT has been applied to various supervised tasks including domain-specific ASR, SV and SE <cit.>. However, it requires multi-task learning for training the domain classifier with a categorization of domain classes.
Other methods include residual adapters <cit.> or auxiliary contrastive loss <cit.>. However, injection of new parameters dedicated for domain adaptation or training paths for multi-task training complicates the base model training.
While most previous works focus on more generic cases of domain mismatch, we focus specifically on the domain mismatch between the domain on which the pre-trained models were trained and the domain on which they are fine-tuned in the context of SE.
SE models utilize pairs of clean and noisy speech for training. Under the assumption that pre-trained speech models are trained using clean data, the paired fine-tuning data can be treated as a pair consisting of source-domain data (clean speech) and target-domain data (noisy speech). Thus, during fine-tuning, we can extract both source- and target-domain statistics. Our proposed method is related to utilizing these estimated statistics without requiring any additional trainable variables or pre-training phases.
Our method shifts the distribution of the latent features from the target-domain to the source-domain to alleviate domain conflicts during fine-tuning. To smoothly adjust the model from source-domain features to target-domain features, we decrease the shifting factor during the fine-tuning phase.
§ BASE MODEL
For our experiments, we used several Mockingjay variants (Mockingjay, TERA) and wav2vec 2.0 variants (wav2vec 2.0, WavLM, HuBERT) as the base upstream models. The Mockingjay variants use generative learning methods and the wav2vec 2.0 variants use contrastive learning methods.
We followed the BASE setups for all of the models, and implemented appropriate SE networks depending on their architectures and the structure of the projection layers.
§.§ Base Mockingjay downstream
Mockingjay is a speech representation network which consists of a multi-layer transformer encoder <cit.>. During the pre-training phase, the output of the transformer encoder is fed to feed-forward projection layers to predict masked frames.
Given this, we implemented an SE network by adding a single convolutional layer on top of the body, which is trained to generate the real and imaginary parts of cIRM masks <cit.>.
In other words, our downstream model is trained to predict the complex masks of each time-frequency bin, and the estimated complex masks are multiplied with the complex noisy inputs to obtain noise suppressed outputs.
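As a rough illustration of this design, the sketch below shows how a single convolutional head could predict the real and imaginary cIRM components and apply them to the noisy complex spectrogram; the class name, tensor shapes, and kernel size are our own assumptions rather than the authors' exact implementation.

import torch
import torch.nn as nn

class CIRMHead(nn.Module):
    # Minimal sketch: one point-wise convolution maps upstream features to the
    # real and imaginary parts of a complex ideal ratio mask (cIRM).
    def __init__(self, feat_dim, n_freq):
        super().__init__()
        # 2 * n_freq output channels: one real and one imaginary mask per frequency bin
        self.conv = nn.Conv1d(feat_dim, 2 * n_freq, kernel_size=1)

    def forward(self, feats, noisy_real, noisy_imag):
        # feats: (batch, feat_dim, time); noisy_*: (batch, n_freq, time)
        m = self.conv(feats)
        m_real, m_imag = m.chunk(2, dim=1)
        # complex multiplication of the estimated mask with the noisy spectrogram
        enh_real = m_real * noisy_real - m_imag * noisy_imag
        enh_imag = m_real * noisy_imag + m_imag * noisy_real
        return enh_real, enh_imag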
§.§ Base wav2vec 2.0 downstream
Wav2vec 2.0 consists of a multi-layer feature encoder followed by a transformer encoder <cit.>. In the pre-training phase, latent features extracted from the feature encoder are quantized, and they are used as a target for training with a BERT-style objective <cit.>.
To implement the downstream SE model, we added a U-Net style decoder <cit.> that would utilize representations from the transformer encoder and the feature encoder.
Specifically, we used the output of the first layer of the transformer encoder as the bottleneck feature of our SE model. This took into account prior work that found lower layers to contribute more informative features for the SE task <cit.>.
From the bottleneck feature, a stack of deconvolution layers is paired with the outputs of the feature encoder layers in reverse order. The features from the feature encoder are passed through point-wise convolutions for dimension reduction, concatenated with the corresponding decoder features, and passed to the next deconvolution layer.
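The following is a minimal sketch of one such decoder stage under our own assumptions about channel counts and layer shapes; it only illustrates the point-wise reduction, concatenation, and deconvolution pattern described above and is not the authors' implementation.

import torch
import torch.nn as nn

class SkipDecoderBlock(nn.Module):
    # Sketch of one decoder stage: reduce the paired encoder feature with a
    # point-wise convolution, concatenate with the decoder feature, upsample.
    def __init__(self, dec_ch, enc_ch, reduced_ch, out_ch):
        super().__init__()
        self.reduce = nn.Conv1d(enc_ch, reduced_ch, kernel_size=1)
        self.deconv = nn.ConvTranspose1d(dec_ch + reduced_ch, out_ch,
                                         kernel_size=4, stride=2, padding=1)

    def forward(self, dec_feat, enc_feat):
        # assumes dec_feat and enc_feat share the same time length
        skip = self.reduce(enc_feat)            # point-wise dimension reduction
        x = torch.cat([dec_feat, skip], dim=1)  # pair with the decoder feature
        return self.deconv(x)                   # next deconvolution layer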
§ FEATURE NORMALIZATION
Consider a latent feature X∈ℛ^d from the upstream body of the network under dimension d.
Given its mean μ∈ℛ^d and standard deviation (std) σ∈ℛ^d, we can
calculate the normalized feature X_n with new target mean μ_n and std σ_n as follows:
X_n = (X - μ)/σ · σ_n + μ_n
= (σ_n/σ) X + (μ_n - (σ_n/σ) μ)
= r_n X + (μ_n - r_n μ), where r_n = σ_n/σ.
Fig. <ref> shows the proposed feature normalization method that maintains latent statistics r_n, μ_n and μ required when normalizing the latent feature. Through normalization, the (noisy) input features follow the same statistics as the corresponding clean features.
We used the exponentially moving average (EMA) method for each feature to recursively estimate the statistics of the features in the training corpus.
We gradually decrease the normalization effect by controlling a scale factor k. k becomes zero when the model is close to reaching convergence, which means that normalization is barely done at this stage.
This factor scheduling method means that the model does not require additional statistical parameters r_n, μ_n and μ during evaluation. The re-normalized feature X̂_n with the scale factor can be calculated as follows:
δ_n = k(X_n - X)
= k (σ_n/σ - 1) X + k (μ_n - (σ_n/σ) μ),
X̂_n = X + δ_n
= ((kσ_n + (1-k)σ)/σ) X + k (μ_n - (σ_n/σ) μ).
The detailed process can be found in Algorithm <ref>.
From an implementation point of view, a frozen duplicate of the upstream body, ranging from the first layer to the layer in which the last normalization occurs, must be maintained, denoted by layer_frozen in Algorithm <ref>. These frozen layers are used for extracting clean latent features that are unaffected by the training status. The number of additional frozen parameters retained during training is negligible, since the normalization occurs at the very beginning of each layer.
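To make the procedure concrete, below is a minimal PyTorch-style sketch of the normalization and scale-factor scheduling described above; the class layout, tensor shapes, and helper names are our assumptions, while the update equations, momentum values, and initial k follow the text.

import torch

class FeatureNormalizer:
    # Sketch of the proposed normalization: shift noisy latent features toward
    # clean reference statistics, scaled by a factor k that decays to zero.
    def __init__(self, dim, mom_mu=0.99, mom_r=0.999):
        self.mu = torch.zeros(dim)    # running mean of noisy features
        self.mu_n = torch.zeros(dim)  # running mean of clean reference features
        self.r_n = torch.ones(dim)    # running ratio sigma_n / sigma
        self.mom_mu, self.mom_r = mom_mu, mom_r

    def update_stats(self, x_noisy, x_clean):
        # x_*: (batch, time, dim) latent features; the clean ones come from the
        # frozen copy of the upstream layers (layer_frozen in Algorithm 1)
        mu = x_noisy.mean(dim=(0, 1))
        mu_n = x_clean.mean(dim=(0, 1))
        r_n = x_clean.std(dim=(0, 1)) / (x_noisy.std(dim=(0, 1)) + 1e-8)
        self.mu = self.mom_mu * self.mu + (1 - self.mom_mu) * mu
        self.mu_n = self.mom_mu * self.mu_n + (1 - self.mom_mu) * mu_n
        self.r_n = self.mom_r * self.r_n + (1 - self.mom_r) * r_n

    def __call__(self, x, k):
        # X_hat_n = X + k * (X_n - X), with X_n = r_n * X + (mu_n - r_n * mu)
        x_n = self.r_n * x + (self.mu_n - self.r_n * self.mu)
        return x + k * (x_n - x)

# k decays linearly from its initial value (e.g. 0.5 for the wav2vec 2.0
# variants) to zero, so no statistics are needed at evaluation time.
def scale_factor(step, total_steps, k0=0.5):
    return max(0.0, k0 * (1.0 - step / total_steps))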
§ EXPERIMENTAL SETUP
§.§ Data
We used pre-trained Mockingjay, TERA, wav2vec 2.0, WavLM, and HuBERT trained using the Librispeech-960hr dataset <cit.>.
For the training and evaluation of the downstream models, we used the Voicebank-DEMAND corpus <cit.> after downsampling to 16kHz. Training data was cropped to correspond to 2 seconds or 1 second for the Mockingjay variants and wav2vec 2.0 variants, respectively. The Mockingjay variants were trained for 50k iterations with a batch size of 8, while wav2vec 2.0 variants were trained for 100k iterations with a batch size of 8.
For evaluation, we used WB-PESQ, a wideband version of PESQ <cit.>, and STOI <cit.>.
§.§ Model details
For Mockingjay variants, training was performed with cIRM loss, time-domain MSE loss, and MSTFT loss <cit.>.
For cIRM loss, K = 10 and C = 0.1 are used.
For MSTFT loss, we set three sub-losses L^(1)_s, L^(2)_s and L^(3)_s with FFT size, window size, and frame shift (1024, 400, 80), (2048, 800, 160), and (512, 160, 32), respectively.
For wav2vec 2.0 variants, training was performed with time-domain MSE loss and MSTFT loss following the same configurations as above.
We linearly decreased the scaling factor k towards zero during training. k was initially set to 0.5 for the wav2vec 2.0 variants, 0.8 for TERA, and 1.0 for Mockingjay. Experiments on different factor scheduling schemes besides linear decay showed no significant differences in output performance.
The momentum values for μ and r were set to 0.99 and 0.999, respectively.
§ RESULTS
§.§ Results on downstream models
Table <ref> shows the results of applying our proposed feature normalization to several upstream models. Methods with the suffix “base" refer to a model with random weight initialization. Models are tagged as “pretrained" if they are initialized with pre-trained weights, and further noted as “normed" if feature normalization is applied during training. The results show that feature normalization improves both WB-PESQ and STOI for almost every type of model, demonstrating that the feature normalization scheme is effective and model agnostic.
We also note that for the generative models, performance nearly matches that of other SE-specific architectures including DEMUCS <cit.> and SE-Conformer <cit.>.
One interesting aspect of the results is the improvement ratios between the generative and contrastive models. For generative models, initial baseline performance tends to be lower than that of the contrastive models, but their performance improves significantly after loading pre-trained weights. This is possibly due to the generative models being trained to fill in masked frames, which causes them to capture more local information between frames, whereas contrastive models tend to learn high-level semantics which capture global-level dependencies.
§.§ Effect of normalization layer
Table <ref> shows the results when normalizing each layer of TERA and wav2vec 2.0. We can see that the improvement rate increases when normalization is applied to the lower layers.
This is perhaps unsurprising, given that domain mismatch has the greatest impact on performance at the lower layers. Additionally, we hypothesize that the non-linearities introduced by stacking multiple layers with non-linear activations contribute to these results. Instead of linear normalization, non-linear smoothing that takes into account the non-linearities in the overall network may help improve results in later layers.
To further inspect the effect of normalization in each layer, we evaluated cosine similarities of the encoder output features between clean and noisy inputs. Figure <ref> shows the similarities between the clean reference features and the features from the noisy input with normalization applied to each of the layers over the course of training. The features have higher similarities when lower layers are normalized during training. This implies that normalization in the lower layers helps the upstream body maintain the clean feature semantics and avoid catastrophic forgetting, which can occur when domain adaptation is done during fine-tuning.
Conversely, applying normalization to the highest layer led to higher similarities in the early stages of training, but a rapid drop later on. This suggests that choosing proper normalization layers can either make the upstream body maintain its good representative power from the pre-training phase or confuse the network and cause it to lose its ability to generalize.
§ CONCLUSIONS
We introduced a feature normalization method for representations from pre-trained speech models that facilitate the use of these representations for downstream speech enhancement. By normalizing the features from noisy inputs to have the same statistics as clean reference inputs, we were able to significantly improve enhancement performance when fine-tuning several different pre-trained speech models. Our results showed that meaningful improvements occur the most when feature normalization is applied only to the lower layers of the pre-trained model. This implies that, for features from higher layers, more sophisticated normalization methods may need to be designed to deal with the various non-linearities in the network.
For future work, we will study more generic feature normalization that can be applied to representations from multiple layers within a network. Additionally, domain mismatch can arise in speech separation tasks: a speech separation model needs to be trained on input with multiple speakers, but a pre-trained upstream model may have been trained on single-speaker data. To further explore the use of pre-trained models in downstream tasks, we aim to extend our feature normalization approach to speech separation and gain additional insights.
IEEEtran
|
http://arxiv.org/abs/2306.02684v1
|
20230605082037
|
A Novel Multi-Agent Deep RL Approach for Traffic Signal Control
|
[
"Shijie Wang",
"Shangbo Wang"
] |
cs.AI
|
[
"cs.AI",
"cs.MA"
] |
A Novel Multi-Agent Deep RL Approach for Traffic Signal Control
1st Wang Shijie
School of Advanced Technology
Xi'an Jiaotong-Liverpool University
Suzhou, China
[email protected]
2nd Wang Shangbo*
School of Advanced Technology
Xi'an Jiaotong-Liverpool University
Suzhou, China
[email protected]
July 31, 2023
============================================================================================================================================================================================================================================================================
As travel demand increases and urban traffic conditions become more complicated, applying multi-agent deep reinforcement learning (MARL) to traffic signal control has become a hot research topic. The rise of Reinforcement Learning (RL) has opened up opportunities for solving Adaptive Traffic Signal Control (ATSC) in complex urban traffic networks, and deep neural networks have further enhanced the ability to handle complex data.
Traditional research in traffic signal control is based on centralized Reinforcement Learning techniques. However, in a large-scale road network, centralized RL is infeasible because of the exponential growth of the joint state-action space. In this paper, we propose a Friend-Deep Q-network (Friend-DQN) approach for multi-intersection traffic signal control in urban networks, which is based on an agent-cooperation scheme. In particular, the cooperation between multiple agents reduces the state-action space and thus speeds up convergence. We use the SUMO (Simulation of Urban MObility) platform to evaluate the performance of the Friend-DQN model, and show its feasibility and superiority over other existing methods.
Traffic Signal Control; Machine Learning; Deep Reinforcement Learning; Decentralized Multi-Agent
§ INTRODUCTION
The rapid development of urbanization has facilitated people's daily travel. Still, at the same time, traffic problems have become more and more serious, and traditional traffic management models and traffic systems are no longer able to meet the actual requirements of the times <cit.>. In the new context, urban traffic should change in the direction of intelligence, actively introduce advanced artificial intelligence technology, and carry out targeted solutions to the current problems to effectively solve a series of traffic problems.
Traffic signals in urban road networks are almost always fixed-phase and cannot adapt to different traffic conditions, thus causing congestion at intersections <cit.>. To relieve urban congestion, some literature has applied adaptive traffic signal control (ATSC) strategies to minimize the average waiting time of the urban network by dynamically adjusting signal timing according to the real-time traffic state <cit.>. However, it is challenging to dynamically predict traffic flow and adjust the signals when dealing with massive traffic. Reinforcement learning techniques have achieved significant results in traffic signal control and management in complex traffic environments <cit.>. However, for traditional reinforcement learning methods such as Deep Q-network (DQN) and Q-learning, the action and state spaces can become very large when dealing with complex traffic networks, and in many cases convergence is slow. Therefore, centralized RL has two main drawbacks. The first is high latency, caused by collecting all the traffic measurements in the network and feeding them back to a center for centralized processing. The second is the large space occupied by the joint action of the agents as the number of traffic junctions grows <cit.>.
To overcome these limitations, multi-agent RL techniques can be applied to urban networks by treating each road intersection as a local RL agent. Although different techniques and algorithms are used for different scenarios, such as traffic signal control and vehicle signal coordination control, most introduce neural networks into reinforcement learning, using their robust representational power to build models <cit.>. According to Matthew E. Taylor's survey, transportation problems can be framed as a problem of learning cooperation <cit.>, where each agent aims to learn a dominant strategy that maximizes the value function obtained by the traffic network. At the same time, each agent develops its own strategy while taking neighboring agents' strategies into account. Agents need to learn collaboratively to find a policy that maximizes the global reward, instead of each agent maximizing its own reward, in order to reduce the average waiting time for all vehicles in the system.
To realize this goal, a cooperative learning strategy should be applied to the multi-intersection signal control problem, that is, learning how to cooperate under incomplete communication conditions. To solve ATSC effectively, we have developed an adaptive intelligent traffic control algorithm using multi-agent RL based on an improved Friend Q-learning with deep neural networks, namely Friend-DQN <cit.>. The joint state-action space of Friend-DQN does not grow exponentially as the number of intersections increases, and thus Friend-DQN converges faster than traditional Q-learning and DQN.
More specifically, the main contributions of this paper are:
* We propose a Friend-DQN model to deal with multi-intersection problems, which aims to minimize average vehicle waiting time;
* We compare the Friend-DQN model with fixed-phase, centralized DQN and independent DQN for different numbers of intersections in terms of convergence speed and average waiting time;
* We justify the effectiveness and superiority of the Friend-DQN model on the SUMO platform.
§ RELATED WORK
The first academic application of RL technique in a traffic signal control problem was the successful application of SARSA to traffic signal control <cit.>. SARSA is an on-policy algorithm for learning a Markov decision process policy, which combines timely and intelligent traffic control policies with real-time road traffic <cit.>. Srinivasan et al. <cit.> uses a distributed multi-agent model to solve the traffic signal control problem, where each agent has an independent Q table to learn and judge the execution phase. Experiments demonstrate the effectiveness of Q learning.
Recently, IntelliLight was proposed to be implemented using DQN and tested in a real road network <cit.>. IntelliLight is combined with a specific traffic signal control problem. The environment consists of traffic signal phases and traffic conditions, and the state is a characteristic representation of the environment information. The agent inputs the state and controls the signals as an action, such as changing the traffic signal phase or the duration of the signal, and then the agent gets a reward from the environment. The agent in IntelliLight implements this through a DQN network, which updates the model based on the loss function of the DQN network to maximize the reward. It is worth noting that the research argues that the agent has to analyze and understand the strategy in the context of the actual scenario. There is no denying that IntelliLight does perform well in real cities. However, it is still a centralized RL algorithm, which means that it still cannot avoid the vast space and time occupation when dealing with a large-scale network.
A multi-agent deep RL method that combines the DQN algorithm with transfer planning can solve the difficulty of centralized RL <cit.>. Transfer planning can avoid the problems of previous multi-agent reinforcement learning i.e. space and time occupation and allow for faster and more scalable learning. This study introduces a new reward function to the ATSC problem. It solves the problem of extreme delays previously caused by a single average vehicle waiting time as a reward by combining criteria such as transport penalties and vehicle delays with different weights to calculate a new reward. Finally, the control of multi-agent is achieved by transfer planning and max-plus coordination algorithms. This approach reduces the problem of large spaces for single agents to some extent, but it is still precarious and sometimes underperforms due to the use of deeper networks.
Recent research has proposed the use of independent advantage actor-critic (A2C) for traffic signal control instead of Q-learning <cit.>. Although they expanded the state representation by including observations and fingerprints of neighbouring agents in each agent's state and used a spatial discount factor to adjust the global reward for each agent, they did not consider the higher-order relationships of the agents. Others have used the more robust DDPG instead of the A2C method. However, in the past DDPG-based traffic control frameworks <cit.> focused only on single intersections and could not be applied to large-scale traffic networks.
§ METHODOLOGY
To implement more robust adaptive traffic signal control, we propose the decentralized Friend-DQN algorithm in the framework of reinforcement learning. Since the theory of Friend-learning is an enhancement of Nash-learning, we firstly briefly review fundamentals of MDP and Nash-Q before introducing Friend-learning.
§.§ Model
It can be observed from the literature <cit.> that the traffic signal control problem can be described as a Markov Decision Process (MDP). An MDP captures, in a simple form, how an agent perceives the state of the environment, selects actions, and pursues goals associated with that state. The definition of the MDP contains the state space S, action set A, transition probability P and reward R. The agent reacts to an environmental state s_t∈ S by taking a possible action a_t∈ A. It ends up in state s_t+1 with some transition probability p(s_t+1|s_t,a_t)∈ P and receives a reward signal r(s_t,a_t,s_t+1)∈ R <cit.>. The process is shown in Figure <ref>.
State S: Figure <ref>a shows a four-intersection traffic signal control problem, where the matrix represents an image in SUMO to represent the state of the vehicles around the intersections. In figure <ref>b, referring to the definition as Tobias <cit.>, we use a matrix to represent the vehicle position information in the traffic signal controlled lane as the state. The whole two-dimensional space shown in figure <ref>a is split into several squares with the same length and width, each of which is filled with one or zero representing the existence of vehicles.
Action A: At each step, agents can choose a traffic signal duration as the action which will change the states. In our design, there are six actions per junction, which are six different phases in five-second intervals from 10 to 30 seconds and can be selected as the phase of the junction traffic signal at each update.
Transition probability P: Transition probability p(s_t+1|s_t,a_t) defines the probability of state transition from the current s_t to the next state s_t+1 when the agent takes action a_t.
Reward R: Reward is defined as the difference the average waiting time of vehicles between the next state and the current state.
The reward used to define the traffic signal control problem is not standardized. Here we take the reduction of vehicle waiting time at intersections as the metric. However, in a real traffic road network the average waiting time can only be computed once a vehicle finishes its journey, which leads to severe latency problems. We therefore select the action at time t, and compute the reward and perform learning at the next phase, i.e., time t+1.
The final reward r_t for each time step is as follows:
r_t = ∑_{n=1}^{N} w_{t,n} - ∑_{n=1}^{N} w_{t+1,n}
where N represents the number of vehicles on the lanes, and w is the waiting time.
The objective of the agent is to maximize the accumulation of rewards. By formalizing the reward, it is passed from the environment to the agent. The agent is rewarded with the sum of the rewards:
G_t = r_{t+1} + r_{t+2} + r_{t+3} + ... + r_T
To make the agent more "farsighted", i.e., to take future rewards into account, we introduce a discount factor γ; the agent then chooses action A_t at time t to maximize the expected discounted return:
G_t = r_{t+1} + γ r_{t+2} + γ^2 r_{t+3} + ... = ∑_{k=0}^{∞} γ^k r_{t+k+1}
If γ equals 0, then the agent only considers the immediate reward, and its objective is to learn how to choose an action A_t that maximizes r_{t+1}.
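As a small illustration of these definitions, the following sketch (with our own function names) computes the step reward from per-vehicle waiting times and the discounted return.

# Illustrative sketch: the step reward is the drop in total waiting time
# between consecutive decision steps, and the return is the discounted sum
# of future rewards.
def step_reward(wait_times_t, wait_times_t1):
    # wait_times_*: lists of per-vehicle waiting times on the controlled lanes
    return sum(wait_times_t) - sum(wait_times_t1)

def discounted_return(rewards, gamma):
    # rewards: [r_{t+1}, r_{t+2}, ...]
    return sum(gamma ** k * r for k, r in enumerate(rewards))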
§.§ Nash-Q and Friend-Q algorithm
The multi-agent Nash-Q algorithm considers other agents when selecting actions, i.e., it selects the actions that maximize the joint system reward.
A Nash equilibrium locks in each agent's strategy because no agent can increase its payoff by unilaterally changing its own strategy <cit.>. The learning agent, indexed by i, initializes its Q-values with arbitrary guesses. At moment t, agent i takes an action after observing the current state. Afterward, it observes its own reward, the actions taken by all other agents, their rewards, and the new state s'. A Nash equilibrium is then calculated for the current stage game and the Q-value is updated according to:
Q^i_{t+1}(s, a^1, ..., a^n) = (1 - α_t) Q^i_t(s, a^1, ..., a^n)
+ α_t [r^i_t + β NashQ^i_t(s')]
Where
NashQ_t^i( s') = π^1( s')...π^n( s')Q_t^i( s')
Its updates are asynchronous, that is, only actions relating to the current state are updated.
The Nash Q-learning algorithm is guaranteed to converge in a cooperative or adversarial equilibrium setting only if a global optimum or saddle point can be found in the stage game at every state s.
Nevertheless, in a transport network the relationship between different agents is not purely competitive: we want multiple agents to work together to maximize the reward of the whole transport system. We therefore draw on the Friend-or-Foe Q-learning (FFQ) proposed by Littman <cit.>. Friend-Q treats the other player as a friend who helps maximize everyone's payoff, so the action space of player B is added to Q:
FriendQ_t^i(s, Q_1, Q_2) = max_{a_1 ∈ A_1, a_2 ∈ A_2} Q[s, a_1, a_2]
By appropriately replacing NashQ with FriendQ, distributed agent systems can achieve a balanced strategy through cooperative learning.
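For the fully cooperative two-agent case above, the equilibrium value reduces to a maximum over joint actions; the following is a minimal sketch with a hypothetical Q-table layout of our own choosing.

import itertools

def friend_q_value(q_table, state, action_spaces):
    # Hypothetical helper: Friend-Q assumes all agents cooperate, so the value
    # of a state is the maximum of Q over all joint actions of the agents.
    # q_table[state] is assumed to be indexed by joint-action tuples (a_1, ..., a_k).
    return max(q_table[state][joint]
               for joint in itertools.product(*action_spaces))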
§.§ Multi-agent Friend-DQN algorithm
However, this is not enough; we also need to extend Friend-Q so that it can be adapted to larger systems of agents. The Friend-Q value becomes as follows:
Friend_i(s, Q_1, …, Q_n) =
max_{π ∈ Π(X_1 × ⋯ × X_k)} ∑_{x_1,...,x_k ∈ X_1 × ⋯ × X_k} π(x_1) ⋯ π(x_k) Q[s, x_1, ..., x_k]
Friend-Q is a strategy that improves Q-learning to find an equilibrium in multi-agent systems. Q-learning needs to build Q-tables at runtime, so when processing traffic the large state space can require a colossal Q-table. We examined ATSC with DQN and Q-learning at two junctions, and Figure <ref> shows that DQN converges much faster than Q-learning thanks to the neural network it introduces. Therefore, it is necessary to combine Friend-Q with DQN to increase the convergence speed further.
DQN uses a neural network, the Q-network, to represent Q-values. We use the target Q-value as a label so that the predicted Q-value converges to the target Q-value <cit.>.
Therefore, the loss function for Q-network training is:
L(w) = E[(r + γ max_{a'} Q(s', a', w) - Q(s, a, w))^2]
where r + γ max_{a'} Q(s', a', w) is the target.
With the loss function (i.e., the cost) and the way of obtaining samples determined, the whole DQN algorithm is shaped.
where X represents the set of all agents.
The hyperparameters of Friend-DQN are shown in Table I. Algorithm <ref> illustrates our proposed algorithm.
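The sketch below illustrates the temporal-difference loss of the equation above in PyTorch; the use of a separate frozen target network and all variable names are our own assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma):
    # Minimal sketch: the target is r + gamma * max_a' Q(s', a'), computed here
    # with a frozen target network as in standard DQN (an assumption of ours).
    s, a, r, s_next = batch  # tensors sampled from a replay buffer; a is int64
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max(dim=1).values
    return F.mse_loss(q_sa, target)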
§ EXPERIMENTAL RESULTS
Here we choose the SUMO platform for the simulation. Taking four traffic lights as an example (see Figure <ref>), vehicles depart on all eight incoming roads at a specific frequency. The system configuration parameters are shown in Table II.
We compared Friend-DQN with a traditional centralized DQN, independent DQN, and a fixed-time ATSC on SUMO. Because the traffic network is asymmetric when there are three agents, the independent method cannot be directly applied to that network; we therefore compared Friend-DQN with independent DQN in the two-agent and four-agent settings. The results are shown in Figure <ref>, which shows that the fixed-time controller is not optimized for traffic. Although the traditional DQN can optimize ATSC, its convergence is much slower than that of the decentralized Friend-DQN algorithm. Moreover, as more junctions are added, the gap between Friend-DQN and centralized DQN grows. With two agents, the convergence speeds of independent DQN and Friend-DQN are similar, because the action spaces of the two methods are the same. However, Figure <ref>(c) shows that the independent DQN method keeps oscillating and is unable to converge to a policy when there are four agents; since there is no communication between agents, they cannot converge to a stable joint policy.
In addition, Figure <ref>(c) shows that after 500 training epochs at four traffic junctions, the centralized DQN still does not converge. Analyzing the complexity of the action spaces of the two algorithms shows why Friend-DQN outperforms centralized RL to such a large extent. The action space represents the set of joint actions in the system. The action space of centralized RL grows exponentially with the number of junctions, while the action space of Friend-DQN grows only linearly. So when there are four junctions, the centralized approach has 1296 joint action choices compared to 24 for Friend-DQN. This is why there is such a big difference in convergence speed between the two methods.
Figure <ref>(d) shows that as the number of traffic junctions increases, the time and space required by centralized algorithms become prohibitive. That is why we develop a decentralized multi-agent approach to implement ATSC.
§ CONCLUSIONS
In this paper, we have demonstrated that Friend-DQN is a promising approach to adaptive traffic signal control. In an entire traffic network, traffic flows are dynamic and change over time. Our proposed approach is based on the information collected within each phase and on learning the optimal joint action of multiple agents in different situations. Vehicle location information and queue length are used as the collected information. Many states and joint actions are encountered during training, so our algorithm can be extended to more extensive traffic networks.
At the same time, the decentralized Friend-DQN algorithm is a scalable multi-agent approach to deep reinforcement learning. Using cooperation to achieve equilibrium avoids the problems of single-agent reinforcement learning and allows for faster and more scalable learning. Its action space complexity is linear, so as more traffic intersections are added, the algorithm performs significantly better than earlier single-agent traffic signal control efforts.
We conducted simulation experiments on SUMO for four junctions and proved the performance and stability of the algorithm. Simulation results also illustrate that our approach outperforms the fixed-time and DQN algorithms in different traffic conditions.
Our current system only considers cooperation between traffic junctions. This can lead to some inequities, i.e., excessive waiting times at certain junctions. Future work includes combining the Nash equilibrium with Friend-DQN so that agents can cooperate while also securing their own interests.
§ ACKNOWLEDGMENT
This work was supported in part by XJTLU Research Development Funding RDF-21-02-015. Any comments should be addressed to Dr. Wang Shangbo.
ieeetr
|
http://arxiv.org/abs/2306.03508v1
|
20230606085353
|
Semantic Segmentation on VSPW Dataset through Contrastive Loss and Multi-dataset Training Approach
|
[
"Min Yan",
"Qianxiong Ning",
"Qian Wang"
] |
cs.CV
|
[
"cs.CV"
] |
Semantic Segmentation on VSPW Dataset through Contrastive Loss and Multi-dataset Training Approach
Min Yan^1 † Qianxiong Ning^1,2 † Qian Wang^1
^1 China Mobile Research Institute,
^2 Xi'an Jiaotong University
[email protected]; [email protected]; [email protected]
===========================================================================================================================================================================================================
^†Equal contribution.
Video scene parsing incorporates temporal information, which can enhance the consistency and accuracy of predictions compared to image scene parsing. The added temporal dimension enables a more comprehensive understanding of the scene, leading to more reliable results. This paper presents the winning solution of the CVPR 2023 workshop challenge on video semantic segmentation, focusing on enhancing spatial-temporal correlations with a contrastive loss. In addition, we explore the influence of multi-dataset training by utilizing a label-mapping technique. The final result is obtained by aggregating the outputs of the above two models.
Our approach achieves 65.95% mIoU on the VSPW dataset, ranking 1st place in the VSPW challenge at CVPR 2023.
§ INTRODUCTION
The Video Scene Parsing in the Wild (VSPW) <cit.> is a recently introduced video semantic segmentation dataset comprising 3536 videos with an average duration of approximately 5 seconds and a frame rate of 15. This dataset encompasses annotations for 124 categories. The main objective of the challenge is to perform semantic segmentation of the test set videos in the VSPW dataset by assigning predefined semantic labels to pixels across all frames. The prominent evaluation metric for the challenge is the mean Intersection over Union (mIoU).
After conducting a thorough investigation of the dataset, we made numerous attempts during the competition, and we will describe our experimental process in the subsequent section to inspire future researchers. We first compared several open-source models, considering both model size and performance. As a result, we selected the existing open-source model with decent performance on the ADE20k<cit.> dataset, which is Vit-Adapter + Mask2former<cit.>, as our baseline model.
Furthermore, we recognized that Transformer-based models have powerful learning capabilities and that additional training data can improve the model further. Therefore, during the training phase we adopted joint training on multiple datasets, which significantly enlarged the training data. However, due to distribution differences with the VSPW validation and test sets, introducing the ADE20k dataset did not achieve the desired results. In the final solution, we only used the COCO-Stuff 160k<cit.> and VSPW datasets for joint training. Specifically, when processing the COCO-Stuff dataset<cit.>, we used a label-mapping method to map the semantic labels of the COCO-Stuff training and validation sets to the VSPW categories with the closest semantic meaning. Then, we set a threshold and retained only images whose valid mapped labels covered more than 80% of the pixels as the final extra training data.
To enhance the correlation between temporal and spatial features in video semantic segmentation, we considered that the predicted results should have strong frame-to-frame correlation, meaning that for the same object, the predicted results in adjacent frames of the same video should be as consistent as possible, while the semantic segmentation predicted results between different object categories should be as different as possible. Therefore, we introduced a contrastive loss function<cit.>. Specifically, when reading the data, we selected two adjacent frames as input for the training set. We calculated the difference between the semantic pixel features of different categories using the contrastive loss function to increase their differences while reducing the differences within the same category.
In the end, we combined the above two models for the model aggregation in the testing phase and achieved a mIoU of 65.95% in the final test set.
§ METHOD
§.§ Baseline Model
Transformer-based models have shown excellent performance in semantic segmentation tasks due to their powerful learning capabilities. Therefore, we utilize the ViT-Adapter + Mask2Former model, a transformer-based model with strong performance on the ADE20k leaderboard, as our baseline. BEiT v2<cit.>, pre-trained on ImageNet-21k<cit.>, is chosen as the backbone of our baselines. The entire model has been fine-tuned on the COCO-Stuff and ADE20k datasets.
§.§ Multi-dataset training
To enhance the performance of our model, we have included additional training data beyond the VSPW training set, namely ADE20k and COCO datasets.
We attempted to train our model on multiple datasets through a label-mapping technique, in which we processed the COCO-Stuff and ADE20k datasets by associating their labels with those VSPW labels that have similar semantics. After processing, we filtered the resulting dataset to remove images with a large fraction of irrelevant pixels, resulting in a joint dataset. We regard the two datasets as a whole to train the network. However, in our final experiments we observed a significant performance improvement from the COCO dataset, while ADE20k did not yield one. We believe the reason is the significant difference in data distribution and scale between the ADE20k and VSPW datasets. Therefore, we did not use the ADE20k dataset for joint training in the final results.
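A minimal sketch of the label-mapping and filtering step is given below; the hand-built lookup table, the ignore index, and the exact handling of the 80% coverage threshold are our own assumptions about how the described procedure could be implemented.

import numpy as np

def remap_and_filter(coco_label_map, vspw_lookup, ignore_index=255, keep_ratio=0.8):
    # COCO-Stuff class ids are mapped to the semantically closest VSPW class via
    # a lookup table (vspw_lookup, assumed to be built by hand); unmatched
    # classes become ignore_index. Images whose valid (mapped) pixels fall
    # below keep_ratio are discarded from the extra training data.
    mapped = np.full_like(coco_label_map, ignore_index)
    for coco_id, vspw_id in vspw_lookup.items():
        mapped[coco_label_map == coco_id] = vspw_id
    valid_fraction = (mapped != ignore_index).mean()
    return mapped, valid_fraction >= keep_ratio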
§.§ Contrastive Loss
We argue that pixel features of the same semantic class should be as consistent as possible for consecutive frames in a video sequence. In contrast, those of different semantic classes should be as distinct as possible. To establish temporal connections between the sequence features, we introduced a spatial-temporal contrastive loss function to establish continuity between adjacent video frames and increase the contrast between different semantic classes within the same image. Formally, our spatial-temporal contrastive loss is defined as:
ℒ_i = 1/|ϕ| ∑_{x_i^+ ∈ ϕ} log [ exp((x_i · x_i^+)/τ) / ( exp((x_i · x_i^+)/τ) + ∑_{j=1}^{K} exp((x_i · x_j^-)/τ) ) ]
ℒ_Nce = -1/M∑_i=1^Mℒ_i
where x_i is the input sample, consisting of two consecutive frames passed through the encoder. The positive samples x_i^+ carry semantic labels that belong to the same category as x_i, while the negative samples x_j^- carry semantic labels that belong to a different category. The temperature coefficient τ is used to control the smoothness of the probability distribution. ϕ denotes the set of positive samples for pixel i. The parameter K denotes the number of negative samples. The variable M represents the number of patches obtained from the two frames after passing through the backbone and encoder in the NCE loss.
By minimizing this loss, we can establish spatial-temporal consistency between adjacent frames. However, we could only conduct experiments on cropped images with a crop size of 480×480 due to memory limitations.
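The following is a simplified per-anchor sketch of the loss above; the temperature value, the single-anchor formulation, and the tensor shapes are our own assumptions rather than the authors' exact implementation.

import torch

def spatial_temporal_nce(anchor, positives, negatives, tau=0.1):
    # anchor: (D,) feature of one pixel/patch; positives: (P, D) features of the
    # same class gathered from the two consecutive frames; negatives: (K, D)
    # features of other classes. tau is an assumed temperature value.
    pos_logits = positives @ anchor / tau              # (P,)
    neg_logits = negatives @ anchor / tau              # (K,)
    denom = pos_logits.exp() + neg_logits.exp().sum()  # shared negative set
    loss_i = torch.log(pos_logits.exp() / denom).mean()
    return -loss_i  # the outer average over all M anchors is taken elsewhere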
§.§ Model aggregation
Aggregating models is a widely recognized technique for improving model performance. Training two models under different conditions can produce different results. Consequently, a combination of the outputs of these models through a weighted summation approach can yield a substantial improvement in the test sets of VSPW. This technique leverages the complementary strengths of each constituent model, resulting in a more robust and accurate prediction. The weighted summation approach is a simple and effective means of model aggregation, allowing for the integration of multiple models with minimal computational overhead. The specific results and other attempts will be described in detail in section <ref>.
§.§ Loss function
We train the model with the following loss function
ℒ = λ_1ℒ_seg + λ_2ℒ_Nce
ℒ_seg = λ_3ℒ_Dice + λ_4ℒ_CE
where ℒ_seg consists of the Dice loss<cit.> and the cross-entropy loss, and λ_1, λ_2, λ_3, λ_4 are set to 1, 0.1, 5, and 1, respectively. We used ℒ_seg alone during multi-dataset joint training. After adding the contrastive loss, we used the above-mentioned total loss ℒ for the model.
§ EXPERIMENTAL RESULTS
§.§ Dataset and Evaluation Metrics
VSPW dataset is a large-scale video semantic segmentation dataset. It includes 3536 videos and 251633 frames with 124 classes. The training, validation, and testing sets contain 2806/343/387 videos with 198244,24502,28887 frames, respectively.
COCO-Stuff dataset is an image semantic segmentation dataset that contains about 123k images with 182 classes. We include some of the COCO-Stuff dataset in training. The usage of this dataset can be found in section <ref>
We adopt mean Interaction over Union(mIoU) as evaluation metrics in this paper consistent with the leaderboard metric of the competition.
§.§ Implementation details
Our final model is the ensemble of a model incorporating extra-data mapping and another model utilizing the contrastive loss. The first model is trained with crop size 896 on 6 GPUs with one image per GPU for 60k iterations, and the second model is trained with crop size 480 on 2 GPUs with 2 images per GPU for 40k iterations. We adopt the AdamW optimizer in our training process with a learning rate of 2e-5 and a linear warmup over 1500 iterations with ratio 1e-6. For data augmentation, random horizontal flipping, random cropping, and random resizing with a ratio range of [0.5, 2.0] are used. In the testing phase, random horizontal flipping and a sliding window are applied for test-time augmentation. The backbone is pre-trained on ImageNet, and the whole model is pre-trained on ADE20K and COCO-Stuff. The experiments in this work are conducted with the support of the CMCC (China Mobile Communications Corporation) Jiutian Deep Learning Platform.
§.§ Ablations Studies
Baseline We explored some Transformer-based networks with different backbones and conducted a series of experiments, and results are shown in Table <ref>. Mask2former framework showed better performance compared to Upernet. Besides, Mask2former with beit2-adapter as backbone shows extraordinary performance among all the settings.
Multi-dataset model
In this model, we adopted several additional components on top of the baseline model and obtained improvements. Firstly, enlarging the crop size during the training phase boosts performance; a possible reason is that the model obtains more detailed information, and that the model was pre-trained with a crop size of 896 with positional embeddings, which conflict when the number of patches changes. Secondly, we utilize the mapped extra dataset in the training process; this operation greatly increases the diversity of the data. The details of the extra-data mapping are in <ref>. Finally, we set the dropout rate to 0.5, which greatly improves the generalization performance of the model. We added the above settings step by step, and the results are shown in Table <ref>. We can see that enlarging the crop size brings about 1.8 points of improvement, using extra data adds 0.5 points, and the dropout trick adds about 0.6 points.
Contrastive-loss model
In this study, we aimed to improve the continuity of predicted results between adjacent frames in video sequences. To accomplish this, we introduced a contrastive loss function into the baseline model. The experimental results, as presented in Table <ref>, demonstrate that the performance of the final model was significantly enhanced by using the contrastive loss function. In particular, we found that using two frames as input for the loss function yielded the best performance. However, it should be noted that we could not conduct experiments on images with a crop size of 896 or include more adjacent frames due to memory limitations. As a result, the following experimental results are based on images with a crop size of 480 and only two consecutive frames.
Overall, our findings suggest that the use of a contrastive loss function can greatly improve the performance of predicted results in video sequences.
where NCE_4 and NCE_2 denote feeding four or two consecutive frames from one video, respectively, into the contrastive loss function.
Model aggregation
We conducted experiments on two model aggregation methods, including soft ensemble and voting.
As for soft ensemble, we adopted two ways of ensemble. In the first way,
we applied a weighted summation on the soft result of the two chosen models
P = τP_1+(1-τ) P_2
where τ is the ensemble coefficient.
The two model ensemble results are shown in Table <ref>.
In the other way, we average all soft model results.
P = 1/N ∑_{i=1}^{N} P_i
where P_i represents the i-th soft result and N is the total number of models. In our experiments, we chose the two best multi-dataset models, trained with crop sizes 480 and 896 respectively, and a contrastive-loss model.
Voting follows the principle of majority rule. We randomly chose several models with competitive performance, obtained their results, and then for every pixel in the image selected the class with the highest number of votes as the final prediction.
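A minimal sketch of the two aggregation schemes (with our own function names and a simple tie-breaking rule) is given below.

import numpy as np

def soft_ensemble(probs_a, probs_b, tau=0.5):
    # weighted summation of two models' soft (per-class probability) outputs
    return tau * probs_a + (1.0 - tau) * probs_b

def average_ensemble(prob_list):
    # average of all soft results, P = (1/N) * sum_i P_i
    return np.mean(np.stack(prob_list, axis=0), axis=0)

def pixel_voting(pred_list):
    # majority vote over hard predictions of shape (H, W) from several models;
    # ties are broken by the lowest class id, an implementation choice of ours
    stacked = np.stack(pred_list, axis=0)  # (N, H, W)
    n_classes = int(stacked.max()) + 1
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)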
All the methods mentioned above have achieved expressive results in the final phase. The results are shown in Table <ref>
§.§ Comparisons
Our solution achieves 65.95% mIoU on the final testing set and obtains the 1st place on the VSPW challenge at CVPR 2023. The results are shown in Table <ref>
§ CONCLUSION
In this study, we started by selecting a strong baseline model that is well-suited to the task of multi-class semantic segmentation. To improve the performance of the model, we mapped an additional dataset to the VSPW dataset and employed a contrastive loss function to constrain temporal variations in the data. Additionally, we leveraged an effective model aggregation method to further enhance the overall performance of the system. All of these techniques were combined to create a comprehensive solution that achieved first place in the VSPW challenge at the CVPR 2023 conference. Our results demonstrate the effectiveness and versatility of our approach for addressing multi-class semantic segmentation problems, thereby offering the potential for improved performance in a wide range of scenarios. It is noteworthy that we made many other interesting attempts, such as post-processing the model results with optical flow, training individual categories separately, and designing imbalanced weights to address class imbalance issues, but their performance on the test set was ultimately not stable. Therefore, we did not use those strategies in the final results.
ieee_fullname
|
http://arxiv.org/abs/2306.10837v1
|
20230619103737
|
On the curvature of the blowup of a point
|
[
"Gunnar Þór Magnússon"
] |
math.DG
|
[
"math.DG",
"math.AG",
"math.CV",
"32Q15, 32J27"
] |
We consider the blowup of a point of a compact Kähler manifold and a metric
of the form μ^*h + t b on it, where h is a Kähler metric on the
original manifold and b is a Hermitian form that looks like the
Fubini–Study metric near the exceptional divisor.
We calculate the curvature tensor of this metric on the exceptional divisor
and show that its holomorphic sectional curvature is negative in some
directions for all small enough t, which torpedos a natural approach to
showing that blowups of manifolds of positive holomorphic sectional curvature
have positive curvature.
§ INTRODUCTION
Let X be a compact Kähler manifold of dimension n.
Let h be a Kähler metric on X.
We write D for its Chern connection and
R(α,β,γ,δ) = h(i/2
D^2_α,βγ, δ) for its curvature tensor. The
holomorphic sectional curvature of h is
H(ξ)
= R(ξ, ξ, ξ, ξ)/|ξ|^4,
where ξ is a nonzero tangent field.
We say that h has positive holomorphic sectional curvature if H > 0 for all
tangent fields.
Examples of manifolds that carry such metrics are the complex projective space
with its Fubini–Study metric, the standard metric on a Grassmannian manifold,
and projectivized bundles over bases of positive holomorphic sectional curvature
<cit.>.
Tsukamoto <cit.> showed that manifolds that admit such
metrics are simply connected, and recently Xiaokui Yang <cit.>
proved that such manifolds are projective and rationally connected, answering a
question of Yau <cit.>.
A related question of Yau (again Problem 67) is whether the blowup of a compact
Kähler manifold of positive holomorphic sectional curvature along a smooth submanifold again has
positive holomorphic sectional curvature.
This is open; even for complex surfaces it is not known whether del Pezzo
surfaces admit positively curved metrics (aside from the del Pezzo surfaces of
degrees 8 and 9, where the nontrivial one admits such a metric because it's
also a Hirzebruch surface and thus a projectivized bundle). In this note we
explain that the brute force approach to this problem does not work.
Prior art in this direction has focused on showing that projective bundles
over manifolds with positive holomorphic sectional curvature are also positively curved, along with more
general fibrations with positively curved fibers and base.
The common thread in the papers of
Hitchin <cit.>,
Sung <cit.>,
Álvarez <cit.>,
Álvarez,
Heier and Zheng <cit.> or Chaturvedi and
Heier <cit.>
is to consider metrics of the form μ^* h + t b, where h is a metric
on the base and b a fiberwise metric on the fibers, and show they have
positive curvature for small enough t.
In this note we try this approach for the blowup of a point.
We calculate the full curvature tensor of h_t on a point on the exceptional
divisor and show that the metric
does not have positive holomorphic sectional curvature there, no matter what metric h we started with.
This of course doesn't mean the blowup doesn't have positive holomorphic sectional curvature, only that
the most straightforward method of constructing a metric on it fails to show that.
§ CURVATURE
Let X be a compact Kähler manifold of dimension _ X = n
and let h be a Kähler metric on it.
Let μ : X̃ → X be the blowup of X at a point p.
There exists a closed (1,1)-form β on X̃ whose restriction to
the exceptional divisor is the Fubini–Study metric.
This is proved for blowups of smooth submanifolds in
Voisin's textbook <cit.>.
We can simplify that proof a little since we're only blowing up points.
In that case the statement goes back at least to Kodaira's proof of the
embedding theorem.
Locally around p, which may assume is the origin in a complex vector space
V, the blowup is
V
= { (v,[w]) ∈ V × P(V) | v ∈ w }
and μ is the projection onto the first factor.
Pick an inner product on V and let B(r) ⊂ V be a ball of radius r
centered at 0.
We pick r so that B(r) fits into the coordinate chart that implicitly lurks
in the background.
Let ψ be a bump function supported on that chart that is identically 1
on B(r).
The (1,1)-form i/2∂∂̅log |v|^2 on V ∖{0} descends to P(V) and defines the Fubini–Study metric.
If p_j : V × V ∖{0}→ V for j = 1,2 are the projections
onto the first and second factors then
i/2∂∂̅(p_1^*ψlog |p_2^*v|^2)
defines a closed (1,1)-form on V × P(V) that restricts to the
pullback of the Fubini–Study metric by p_2 on B(r) × P(V).
It also extends to the rest of X by zero.
Its restriction to V is the form we want.
Let p ∈ X be a point and blow it up to obtain μ : X → X.
Let b be the Hermitian form associated to the (1,1)-form β on X̃
we constructed in Proposition <ref>.
Then h_t = μ^*h + t b is a Kähler metric on X̃ for all t small
enough.
We're going to calculate its curvature tensor at a point on the exceptional
divisor.
Recall that locally around p the blowup is
V
= { (v,[w]) ∈ V × P(V) | v ∈ w }.
If f ∈ V then f acts on the blowup by f(v, [w]) = (f(v), [f(w)]).
This is an isomorphism that maps the exceptional divisor to itself, and we can
map any point on the divisor to any other point on it.
Let (0, [w]) be a point on E.
Let's choose normal coordinates (x_1,…,x_n) centered at p.
There exists f ∈ U(n) so that f(w) = (0 …, 0, 1), and the
coordinates obtained by applying f to the old ones are still centered at p
and are normal there because f ∈ U(n).
Picking the chart {(y_1, …, y_n) ∈^n | y_n ≠ 0 } for
P(V)
we realize the blowup as
X
= { (x,y) ∈^n ×^n-1| x_j y_k = x_k y_j for j,k = 1,…,n, where y_n = 1}.
In these coordinates the point at which we want to calculate the curvature is (0,0)
and we have D_h,ξ∂ / ∂ x_j = 0 at 0 for j = 1, …, n.
We also note that close to the exceptional divisor, h_t = μ^* h + t b is
just the restriction of the product metric p_1^* h ⊕ t p_2^* b on
V × P(V) to X̃, where p_j are the projections onto the
factors and b is the Fubini–Study metric.
The only equations that give us any information at (0,0) are
x_j - y_j x_n = 0, j = 1, …, n-1.
Their differentials are
dx_j - y_j dx_n - x_n dy_j = 0, j=1,…,n-1
and so the tangent fields
ξ_j = x_n e_j + f_j,
j=1,…,n-1,
ξ_n = ∑_j=1^n-1 y_j e_j + e_n
are a basis for
the intersection of all the kernels of the differentials, that is, of T_.
Here e_j is the tangent field corresponding to the coordinate x_j
and f_k the one corresponding to y_k.
Note that the orthogonal complement of T_ at (0,0) is spanned by
(e_1, …, e_n-1).
Any holomorphic tangent field ξ on near (0,0) can be written as
ξ = ∑_j=1^n a_j ξ_j
= ∑_j=1^n-1 (a_j x_n + a_n y_j) e_j
+ ∑_j=1^n-1 a_j f_j
+ a_n e_n,
where the a_j are holomorphic functions.
While writing out the curvature tensor it will be useful to have the
semipositive Hermitian form τ(a, b) = ⟨a, b⟩ -
a_n b̄_n at hand. This is the pullback of the standard inner product
by the projection onto the subspace spanned by the first n-1 basis elements.
Let
α = ∑_j a_j ξ_j,
β = ∑_k b_k ξ_k,
γ = ∑_l c_l ξ_l
and δ = ∑_m d_m ξ_m
be holomorphic tangent fields close to (0,0).
The curvature of h_t at the origin is
R_h_t(α, β, γ, δ)
= H_h(e_n) a_n b̄_n c_n d̄_n
+ t (
τ(a, b) τ(c, d) + τ(a, d) τ(c, b)
)
- a_n b̄_n τ(c, d)
- a_n d̄_n τ(c, b)
- c_n b̄_n τ(a, d)
- c_n d̄_n τ(a, b).
Recall that the curvature of h_t is
R_h_t(α, β, γ, δ)
= p_1^* R_h(α, β, γ, δ)
+ t p_2^* R_b(α, β, γ, δ)
- ⟨σ(α, γ), σ(β, δ)⟩,
where σ(α, β) = π_N(D_{h ⊕ t b, α} β) is
the second fundamental form, and the inner product is on the orthogonal complement
of the tangent bundle.
First,
p_1^*R_h(α, β, γ, δ)
= H_h(e_n) a_n b̄_n c_n d̄_n
at the origin. Second,
p_2^*R_b(α, β, γ, δ)
= ⟨p_2(α), p_2(β)⟩_b
⟨p_2(γ), p_2(δ)⟩_b
+ ⟨p_2(α), p_2(δ)⟩_b
⟨p_2(γ), p_2(β)⟩_b
= τ(a, b) τ(c, d) + τ(a, d) τ(c, b)
at the origin
because the Fubini–Study metric has constant holomorphic sectional curvature 2
and τ is just the inner product on ^n-1 extended to ^n.
Let's now write π_N for the projection onto the orthogonal complement of
T_ in T_^n ×^n-1|.
At the origin it is just the projection onto the first n-1 coordinates.
Third, then,
π_N(D_h_t,αγ)
= π_N(
∑_j=1^n (∂_α c_j) ξ_j
+ ∑_j=1^n c_j D_αξ_j
).
Note that π_N(ξ_j) = 0 for j=1,…,n so the first term from that sum
is zero. The second term is
π_N(D_αξ_j)
= π_N(D_α(x_n e_j + f_j))
= a_n e_j + x_n D_α e_j
= a_n e_j
for j=1,…,n-1 at the origin because D_α e_j = 0 there.
We also have
π_N(D_αξ_n)
= π_N(
∑_k=1^n-1 (∂_α y_k) e_k + y_k D_α e_k
+ D_α e_n
)
= ∑_k=1^n-1 a_k e_k
at the origin.
Together we get
∑_j=1^n c_j π_N(D_αξ_j)
= ∑_j=1^n-1 c_j a_n e_j + c_n ∑_j=1^n-1 a_j e_j
= ∑_j=1^n-1 (a_n c_j + c_n a_j) e_j.
And fourth,
π_N(p_2^*D_g,αγ)
= π_N (
∑_j=1^n-1 (∂_α c_j) f_j
+ c_j p_2^*D_g,α f_j ) = 0
at the origin because there the argument to π_N takes values in the bundle
spanned by the f_j.
The second fundamental form then contributes
⟨σ(α, γ), σ(β, δ)⟩ =
∑_j=1^n-1 (a_n c_j + c_n a_j) (b̄_n d̄_j + d̄_n b̄_j)
= a_n b̄_n τ(c, d)
+ a_n d̄_n τ(c, b)
+ c_n b̄_n τ(a, d)
+ c_n d̄_n τ(a, b)
to the curvature. Putting all of this together we get the result.
A fairly natural strategy for proving that the blowup of a point of a manifold
that has a metric with positive holomorphic sectional curvature is also positively curved is to
show that h_t has positive holomorphic sectional curvature on a neighborhood around the exceptional
divisor for all small enough t. Then we could conclude that the blowup has
positive holomorphic sectional curvature by using Wu's <cit.> characterization of the holomorphic sectional curvature
to show positivity outside of that neighborhood for t small enough.
The following corollary shows this strategy cannot work.
The holomorphic sectional curvature of h_t is negative in
some tangent directions at the origin for all small enough t.
Let ξ = ∑_j a_j ξ_j be a holomorphic tangent field.
Note that |ξ_j|_h_t^2 = t for j = 1, …, n-1 and |ξ_n|_h_t^2 = 1
at the origin.
We have
H_h_t(ξ)
= ( |a_n|^4 H_h(e_n)
+ 2t τ(a, a)^2
- 4 |a_n|^2 τ(a, a) ) / (t τ(a, a) + |a_n|^2)^2.
First suppose that H_h(e_n) < 0. Picking a with |a_n| = 1 we have
τ(a, a) = 0 so H_h_t(ξ) = H_h(e_n) < 0.
Then suppose that H_h(e_n) ≥ 0.
We may assume that |a| = 1 when checking for positivity and focus only on
the numerator. We're then left considering the positivity of the polynomial
p_t(x)
= H_h(e_n) x^2 + 2t(1-x)^2 - 4x(1-x)
= (H_h(e_n) + 2t + 4) x^2 - 4(1 - t) x + 2t
for 0 ≤ x ≤ 1.
Its critical point is
x_t = 2(1-t)/H_h(e_n) + 2t + 4
where its value is
p_t(x_t) = -4(1-t)^2/H_h(e_n) + 2t + 4 + 2t,
which is negative for all small t.
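As a quick independent check of the expansion and minimization above (our own addition — a small Python/SymPy sketch, not part of the original argument):

import sympy as sp

H, t, x = sp.symbols('H t x', real=True, nonnegative=True)

# Numerator of H_{h_t}(xi) with |a| = 1, writing x = |a_n|^2 so that tau(a, a) = 1 - x.
p = H * x**2 + 2*t*(1 - x)**2 - 4*x*(1 - x)

print(sp.collect(sp.expand(p), x))    # equals (H + 2*t + 4)*x**2 - 4*(1 + t)*x + 2*t
x_t = sp.solve(sp.diff(p, x), x)[0]   # equals 2*(1 + t)/(H + 2*t + 4)
p_min = sp.simplify(p.subs(x, x_t))   # equals 2*t - 4*(1 + t)**2/(H + 2*t + 4)
print(sp.simplify(p_min.subs(t, 0)))  # -4/(H + 4), negative for H >= 0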
This corollary shows that no metric of the form μ^* h + tb can have
positive holomorphic sectional curvature when h does, so we can't prove that the blowup carries such
a metric in this way.
Note however that the blowup of the projective plane at a single point
is a Hirzebruch surface, thus a projective bundle, and thus carries
a metric of positive holomorphic sectional curvature.
Let
α = ∑_j a_j ξ_j,
β = ∑_k b_k ξ_k
be holomorphic tangent fields close to (0,0).
The Ricci tensor of h_t is
r_t(α, β)
= H_h(e_n) a_n b̄_n
+ (n-1) τ(a, b̄)
- (n-1) a_n b̄_n / t
at the origin.
The tangent fields that correspond to
v_j = (δ_1j/√(t), …, δ_n-1,j/√(t), δ_nj) are
orthonormal at the origin.
If ξ and η are the fields corresponding to (a_j) and (b_j) we
first have
∑_j=1^n H_h(e_n) a_n b̄_n δ_jn = H_h(e_n) a_n b̄_n,
and then
∑_j=1^n
τ(a, b̄) τ(v_j, v̄_j) + τ(a, v̄_j) τ(v_j, b̄)
= n τ(a, b̄) / t.
The second fundamental form contributes
∑_j=1^n
a_n b̄_n τ(v_j, v̄_j)
+ a_n δ_jn τ(v_j, b̄)
+ δ_jn b̄_n τ(a, v̄_j)
+ δ_jn^2 τ(a, b̄)
= (n-1) a_n b̄_n / t + τ(a, b̄)
because τ(a, v̄_n) = 0.
Together we get
r_t(α, β)
= H_h(e_n) a_n b̄_n
+ (n-1) τ(a, b̄)
- (n-1) a_n b̄_n / t.
We see that the Ricci tensor can also be negative in some directions for t
small enough, but is positive in other directions for all t.
The scalar curvature of h_t is
s_t
= H_h(e_n)
+ (n-1)(n-2) / t
at the origin.
This is the same as Hitchin gets for the scalar curvature if we take his
expressions for R_0 and R_1 in <cit.> and evaluate them at r = 0, which is
a nice sanity check.
With more work I think we could recover Hitchin's full results.
We've made some attempts in that direction to see if we could extend his
result on positive scalar curvature to Hermitian metrics, but seem to run
into obstructions due to the nonvanishing of the connection at the origin.
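As one more programmatic sanity check (again our own Python/SymPy sketch, using only the Ricci formula at the origin and the orthonormal frame from the previous proof), the scalar curvature value stated above is recovered by summing r_t over the frame:

import sympy as sp

n, H, t = sp.symbols('n H t', positive=True)

# r_t(alpha, beta) = H*a_n*conj(b_n) + (n-1)*tau(a, b) - (n-1)*a_n*conj(b_n)/t at the origin.
# For the frame v_j: if j <= n-1 then a_n = 0 and tau(v_j, v_j) = 1/t, so r_t(v_j, v_j) = (n-1)/t;
# for j = n, a_n = 1 and tau(v_n, v_n) = 0, so r_t(v_n, v_n) = H - (n-1)/t.
s = (n - 1) * (n - 1) / t + (H - (n - 1) / t)

print(sp.simplify(s - (H + (n - 1) * (n - 2) / t)))  # 0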
|
http://arxiv.org/abs/2306.04720v1
|
20230607183410
|
Intrinsic antiferromagnetic multimeronic Néel spin-textures in ultrathin films
|
[
"Amal Aldarawsheh",
"Moritz Sallermann",
"Muayad Abusaa",
"Samir Lounis"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci"
] |
Intrinsic antiferromagnetic multimeronic Néel spin-textures in ultrathin films
Amal Aldarawsheh, Moritz Sallermann, Muayad Abusaa, Samir Lounis
July 31, 2023
==========================================================
* Peter Grünberg Institute and Institute for Advanced Simulation, Forschungszentrum Jülich and JARA, D-52425 Jülich, Germany
* Faculty of Physics, University of Duisburg-Essen and CENIDE, 47053 Duisburg, Germany
* RWTH Aachen University, 52056 Aachen, Germany
* Science Institute and Faculty of Physical Sciences, University of Iceland, VR-III, 107 Reykjavík, Iceland
* Department of Physics, Arab American University, Jenin, Palestine
* [email protected]; [email protected]
§ ABSTRACT
The realization of topological antiferromagnetic (AFM) solitons in real materials is a major goal towards their use in information technology. While they bear various advantages with respect to their ferromagnetic cousins, their observation is scarce. Utilizing first-principles simulations, here we predict new chiral particles in the realm of AFM topological magnetism, frustrated multimeronic spin-textures hosted by a Néel magnetic state, arising in single Mn layers directly grown on Ir(111) surface or interfaced with Pd-based films. These topological structures are intrinsic, i.e. they form in a single AFM material, can carry distinct topological charges and can combine in various multimeronic sequences with enhanced stability against external magnetic fields. We envision the frustrated Néel AFM multimerons as exciting highly-sought after AFM solitons having the potential to be utilized in novel spintronic devices hinging on non-synthetic AFM quantum materials.
§ INTRODUCTION
Recent experimental breakthroughs promoted antiferromagnetic (AFM) materials into the realm of information technological applications <cit.> and triggered state-of-the-art activities in the world of topological magnetism <cit.>.
The antiparallel spin sublattices present in AFM materials result in zero dipolar fields, making them insensitive to magnetic field perturbations and enhancing the stabilization of nanoscale topological structures <cit.>. Moreover, AFM materials possess faster spin dynamics than ferromagnets by orders of magnitude <cit.>, which is an appealing characteristic for THz magnetic memory and logic devices.
The race in identifying AFM non-trivial spin-swirling objects is strongly motivated by their particle-like nature, potentially realizing ideal magnetic bits, augmented by the low power consumption <cit.> involved in their manipulation, the possibility of controlling their current-driven motion <cit.>, and the avoidance of the skyrmion Hall effect that plagues their ferromagnetic (FM) cousins <cit.>.
This led to the recent discovery of synthetic AFM skyrmions, which consist of two FM skyrmions that are realized in two distinct magnetic layers and antiferromagnetically coupled through a non-magnetic spacer layer <cit.>. Here, utilizing first-principles simulations in conjunction with atomistic spin dynamics, we unveil multimeronic textures, a new type of topological AFM particles, which are non-synthetic and emerge in magnetically frustrated thin films (see Fig. <ref>).
Regular FM merons are in-plane magnetized textures with magnetization that curls around a stable core pointing out-of-plane, and are topologically equivalent to one half of a skyrmion <cit.>. The meronic topological charge N = (1/4π) ∫ n · (∂n/∂x × ∂n/∂y) dx dy equals ±1/2,
where n is the direction vector of magnetization. While they have been observed experimentally in bulk <cit.> and thin films <cit.>, they emerge in AFM synthetic <cit.> and intrinsic bulk (thick films) phases <cit.>, following a large body of phenomenology-based simulations <cit.>. However, a pristine ultrathin film material that hosts AFM merons remains elusive.
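To make the topological charge N concrete for discretized spin textures such as those discussed below, the following minimal sketch (our own, in Python/NumPy; not the analysis code used for the figures in this paper) evaluates the lattice version of the integral for one FM sublattice interpolated onto a regular grid, using the standard Berg–Lüscher solid-angle construction:

import numpy as np

def solid_angle(n1, n2, n3):
    """Signed solid angle of the spherical triangle (n1, n2, n3) (Berg-Luscher formula)."""
    num = np.dot(n1, np.cross(n2, n3))
    den = 1.0 + np.dot(n1, n2) + np.dot(n2, n3) + np.dot(n3, n1)
    return 2.0 * np.arctan2(num, den)

def topological_charge(spins):
    """Approximate N = (1/4pi) * int n . (dn/dx x dn/dy) dx dy on a regular grid.

    `spins` is an (H, W, 3) array of unit vectors sampled on one FM sublattice."""
    H, W, _ = spins.shape
    omega = 0.0
    for i in range(H - 1):
        for j in range(W - 1):
            n00, n10 = spins[i, j], spins[i + 1, j]
            n01, n11 = spins[i, j + 1], spins[i + 1, j + 1]
            # Split each plaquette into two triangles and accumulate their solid angles.
            omega += solid_angle(n00, n10, n11) + solid_angle(n00, n11, n01)
    return omega / (4.0 * np.pi)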
Our multi-meronic textures are distinct from current predictions since they form, in a realistic material, a rich set of combinations materializing in a frustrated in-plane Néel ground state, shown in Fig. <ref>j, which can be decomposed into three FM sublattices with an opening angle of 120^∘ between their respective magnetic moments. We predict a single Mn layer as a universal hosting material once interfaced in different fashions with the Ir(111) surface, with and without Pd or Fe monolayers, a PdFe bilayer or a Pd_2Fe trilayer (see Fig. <ref>a-d). These substrates form a family typically known to host FM <cit.> and AFM skyrmions <cit.>. The in-plane AFM Néel state is the ground state of the Mn layer in all magnetic systems, formed as a result of magnetic frustration caused by strong AFM exchange coupling among the first nearest neighbors, as illustrated in Supplementary Fig. 1. The in-plane orientation is dictated by the z component of the antisymmetric exchange interaction (Dzyaloshinskii-Moriya interaction – DMI) and is further reinforced by the in-plane magnetic anisotropy energy (MAE) K.
§ RESULTS
§.§ Topological magnetic states in frustrated Mn layer.
The Ir(111) substrate forms a triangular lattice, on which we deposit layers of Mn, PdMn, MnPdFe and MnPd_2Fe and perform atomistic spin dynamics <cit.>, minimizing the Heisenberg Hamiltonian (Eq. <ref> in the Methods section) equipped with the magnetic interactions derived from first principles (see Methods section). We identify a plethora of metastable AFM Néel meronic states in the Mn layer, as depicted in Fig. <ref>a, Fig. <ref>g, h, and Supplementary Fig. 2.
The Néel ordering of the spins is the ground state of the Mn layer in all of the aforementioned magnetic systems. The associated critical temperatures range from 130 K for PdMn bilayers to about 600 K or more for the rest of the explored Mn-based films. The spins forming the AFM Néel order are segmented into three sublattices L1, L2 and L3, each hosting FM spin alignment (Fig. <ref>j). At each sublattice, an FM meronic pair can be stabilized, so in total, in the case of a single AFM Néel meronic pair (Fig. <ref>a), we have six FM merons (antimerons), as shown in Fig. <ref>d-i, which we refer to as a hexameronic state.
By zooming into the two spin-swirling extremities of the hexameron (Figs. <ref>b-c) and their respective sublattice decomposition (Figs. <ref>d-i), we identify a vortex (Fig. <ref>d) and an antivortex (Fig. <ref>h) whose cores reside on an Mn lattice site, around which the spins of the remaining meronic textures precess, as dictated by the magnetic frustration induced by the underlying AFM magnetic interactions.
Each of the FM building blocks of our explored AFM solitons holds a topological charge (N) defined as N = wp/2 <cit.>, where w is the winding number describing the rotational direction of the in-plane magnetization, equal to +1 (-1) for the vortex (antivortex), and p is the polarity, which defines the out-of-plane magnetization of the center, being +1 (-1) when pointing up (down) <cit.>.
Since the merons and antimerons carry a topological charge of -1/2 and +1/2, respectively <cit.>, the sublattice charge N_L is either -1 (+1) for a meron-meron (antimeron-antimeron) pair, as in the case of L3 (Fig. <ref> f, i), or 0 for a hybrid meron-antimeron pair (see L1 and L2 in Fig. <ref> d, e, g, h). Summing up to the total charge N_t of a hexameron, one ends up with three possible values, -1, 0 and +1 (see Fig. <ref>k), which interestingly are energetically degenerate in the absence of an external magnetic field.
Besides the hexameronic frustrated AFM Néel state, we identified a rich set of other meronic textures, such as the dodecameron, hosting 12 merons, shown in Fig. <ref>g. Further examples of complex multimerons are presented in Supplementary Fig. 2. Similarly to the purely FM counterparts, in confined geometries (See Supplementary Fig. 2b-c) a "single" AFM Néel meronic state can be stabilized. This object is a trimeron resulting from three frustrated merons with overlapping cores, carrying in total a half integer topological charge.
§.§ Stability against external magnetic fields.
The investigation of the response of AFM Néel meronic pairs to magnetic fields is important to inspect stability aspects and to fingerprint potential subsequent non-trivial topological transitions.
The frustrated meronic textures survive to extremely high in-plane magnetic fields (>200 Tesla). The case of an out-of-plane (OOP) magnetic field shows a rather rich impact on the explored spin-textures. Therefore, here, we scrutinize in detail the latter scenario by focusing on three different AFM Néel meronic states (see Figs. <ref> f-h).
As a prototypical chiral magnetic object, we consider the hexameron emerging either in the AFM Néel (Fig. <ref>f) or in the spiraling AFM Néel states (Fig. <ref>h) as well as the dodecameron (Fig. <ref>g). For interfaces hosting the Fe layer, MnPdFe/Ir(111) and MnPd_2Fe/Ir(111), we examined both cases: switching off (solid bars in Fig. <ref>e) and on (dashed bars in Fig. <ref>e) the Mn-Fe magnetic interactions. A snapshot of the Mn-hexameron interfaced with ferromagnetic Fe spirals and a skyrmion is illustrated in Fig. <ref>i.
While we expected the unveiled meronic textures to be robust against external magnetic fields, we were intrigued by the annihilation of some hexamerons emerging in an AFM Néel background at experimentally accessible OOP fields, e.g. 10 Tesla, in contrast to dodecamerons and hexamerons arising in a Néel spiraling state (red and green bars in Fig. <ref>e).
To get insight into the origin of the sensitivity of these magnetic states, the hexamerons forming in an AFM Néel background, we scrutinize the sublattice topological distribution along with the spin orientation at each sublattice of the different hexameronic states shown in Fig. <ref> (see Supplementary Fig. 3 for snapshots of the different hexamerons). As introduced earlier, there is a quadruple degeneracy for each hexameron in the absence of a magnetic field. The four states, denoted Hexa A–D and illustrated in Fig. <ref>, can be distinguished by the vortex nature of their core constituents and the orientation of the core spins (see Fig. <ref>k). A finite OOP field partially lifts the degeneracy and favors the hexameron, here Hexa D, with most spins pointing along the field direction (see also Supplementary Fig. 3). Among the four hexamerons, Hexa D will be the most robust to the applied field and therefore survives gigantic fields. The remaining hexamerons experience at some point a magnetization switching to reach the optimal sublattice topological distribution defined by Hexa D. This requires a flip of the spins of at least one meron (antimeron) and implies going through a topological charge transition, a non-trivial process during which the AFM meronic structure might encounter an unstable spin distribution. In that case the AFM vortex and antivortex start rolling towards each other and collapse at a rather low magnetic field, annihilating the AFM meronic structure. If the transition occurs, however, the new magnetic state is capable of surviving large magnetic fields similar to Hexa D.
However, the presence of Néel spirals in the background or of additional pairs of AFM meronic textures (leading, for example, to dodecamerons) prevents the formation of unstable states within the field-induced topological transition, which would otherwise lead to the collapse of the frustrated soliton. Effectively, a barrier is provided by enabling the rearrangement of the spins to acquire the desired topological state, which can then withstand immense magnetic fields.
§.§ Emergence mechanism.
We have identified that the formation of our frustrated AFM Néel meronic spin textures requires a strong AFM exchange coupling among the first nearest-neighbor atoms J_1 (see Supplementary Fig. 1a-d). This coupling is responsible for the AFM Néel order of the spins, and it is through magnetic frustration that these solitons may arise. Additionally, another magnetic interaction is required to align the spins in the in-plane direction. This interaction can be provided by the in-plane MAE, K < 0, as observed in Mn/Ir(111), while for the other three magnetic systems studied, K prefers an out-of-plane orientation of spins (Supplementary Fig.1e). However, the z component of the DMI vector (D_z) plays a crucial role in aligning the spins in-plane, ultimately leading to the emergence of the AFM Néel meronic textures. In conclusion, to obtain our AFM solitons on a triangular lattice, an AFM J_1 is required, along with either a finite D_z or an in-plane K.
To explore the fundamental mechanisms defining the stability of the spin-textures, we built a minimal spin model that contains only an AFM J_1 and D_z, since the latter played the main role in stabilizing the meronic textures in the four investigated Mn-based interfaces. The resulting phase diagram is shown in Fig. <ref>a. While the ground state would have been a pure Néel state without D_z, the latter quickly enables the formation of frustrated merons. Increasing D_z enforces a stronger in-plane alignment of the spins, which reduces the size of the meronic constituents (Fig. <ref>b and Supplementary Fig. 4). Clearly, the size of the merons is dictated by a competition between magnetic exchange and DMI. Keeping D_z fixed while increasing the AFM J_1 counteracts the effect of the DMI and enlarges the meron core (Fig. <ref>c).
Fig. <ref>d presents the critical OOP magnetic field at which the meronic texture, here Hexa D similar to that shown in Fig. <ref>, is annihilated, as a function of the OOP DMI component, both normalized by the nearest-neighbor AFM exchange interaction. The obtained curve follows a quadratic dependence, highlighting that the DMI enhances the stability of the frustrated merons. In fact, the application of an OOP magnetic field counteracts the influence of the OOP DMI component by tilting the spins towards the OOP direction, disrupting the in-plane alignment of the spins imposed by the OOP DMI component throughout the surrounding area, including the region spanning between the extremities of the hexameron, ultimately leading to its collapse. Consequently, the larger the OOP DMI component (the smaller the meronic cores), the larger the critical field required to destroy the AFM spin-swirling textures.
§ DISCUSSION
Our ab-initio simulations uncovered non-synthetic Néel-frustrated AFM meronic textures emerging in a realistic set of materials and interfaces. The newly unveiled magnetic objects are hosted by a triangular Mn layer interfaced with an Ir(111) surface, either with a Pd overlayer or separated from Ir by a PdFe bilayer or a Pd_2Fe trilayer, which all represent substrates that can readily be grown experimentally. The frustrated AFM states form hexamerons, composed of three FM meronic pairs, each located at one of the three FM sublattices building up the AFM Néel background. Other solitons can emerge, such as dodecamerons (12 merons), while confined geometries enable the stabilization of a frustrated trimeron.
We have observed that these AFM Néel meronic solitons survive high values of magnetic fields if the majority spins align in the direction of the OOP magnetic field. Otherwise, a transition of the sublattice topological charge occurs, leading to the potential annihilation of the AFM solitons at experimentally accessible values of magnetic fields. To gain a better understanding of the characteristics of these AFM solitons, we provided a spin model that outlines the minimum set of magnetic interactions necessary to generate the detected AFM solitons.
Identifying new AFM solitons with a realistic existence scenario is at the heart of AFM topological magnetism. Our predictions can initiate the experimental discovery of the intriguing intrinsic frustrated multimeronic textures, which can combine in various topological sequences. It remains to be explored how such spin states can be implemented and designed in AFM spintronic devices. Certainly, the proposed thin films provide a solid platform for AFM meronic textures with a potential impact on information technology.
§ METHODS
In this study, we conducted a systematic investigation to explore the magnetic structures that can be hosted by the magnetic layers of our four layered systems. Our approach involved a three-fold procedure, combining ab-initio calculations with atomistic spin dynamics. The details of this procedure are outlined below.
§.§ Ab-initio calculations.
To simulate the magnetic properties of our magnetic layers, we utilized in a first step the Quantum-Espresso computational package <cit.>. The calculations employed projector augmented wave pseudo potentials sourced from the PS Library <cit.> and the self-consistent calculations were performed with
a k-mesh of 28×28×1 points for the unit cell. The layers were arranged in an fcc-stacked configuration along the [111] direction (See to Fig. <ref>a-d). The relaxation parameters were then extracted, revealing the relaxation percentages of the different layers in relation to the ideal interlayer distance in the Ir-based systems. Specifically, for Mn/Ir(111), the relaxation percentages were 2.3% and -3.4%; for PdMn/Ir(111), they were 8.6%, 10.3%, and -2.3%; for MnPdFe/Ir(111), the percentages were 4%, 5.2%, 8.1%, and -1%; and for MnPd_2Fe/Ir(111), they were 5.9%, -4%, 8.2%, 8.2%, and -0.7%, for each layer respectively. Here, positive (negative) values indicate atomic relaxations towards (away from) the Ir surface.
After establishing the geometries of the various magnetic systems, we conducted in a second step a detailed investigation of their magnetic properties and interactions using the all-electron full-potential relativistic Korringa-Kohn-Rostoker (KKR) Green function method, implemented in the JuKKR computational package <cit.>,in the local spin density approximation. Each of the four magnetic systems consists of a slab with 30 layers. In the case of Mn/Ir(111), the slab consists of 5 vacuum + 1 Mn + 20 Ir layers + 4 vacuum layers. For PdMn/Ir(111), the slab comprises 4 vacuum + 1 Pd + 1 Mn layer+ 20 Ir layers + 4 vacuum . In the case of MnPdFe/Ir(111), the slab includes 3 vacuum + 1 Mn layer + 1 Pd + 1 Fe + 20 Ir + 4 vacuum layers. Lastly, for MnPd_2Fe/Ir(111), the slab is composed of 2 vacuum + 1 Mn + 2 Pd + 1 Fe + 20 Ir + 4 vacuum. To perform the calculations, the momentum expansion of the Green function was truncated at ℓ_max = 3. Self-consistent calculations were conducted using a k-mesh of 30×30×1 points. The energy contour consisted of 23 complex energy points in the upper complex plane, and it incorporated 9 Matsubara poles.
To extract the Heisenberg exchange interactions and Dzyaloshinskii-Moriya (DM) vectors, we employed the infinitesimal rotation method <cit.>. For this extraction, we used a finer k-mesh of 200×200×1 points.
§.§ Hamiltonian Model and atomistic spin dynamics.
After extracting the magnetic parameters for our magnetic atoms from first-principles, we employ the Landau-Lifshitz equation (LLG) implemented in the Spirit code <cit.> to explore the magnetic properties and complex states. This exploration involves minimizing the two-dimensional Heisenberg Hamiltonian on a triangular lattice. The Hamiltonian comprises several terms, including Heisenberg exchange coupling, Dzyaloshinskii-Moriya interaction (DMI), magnetic anisotropy energy, and Zeeman term. The energy functional of the system can be described as follows:
H = H_Exchange + H_DMI + H_Anisotropy + H_Zeeman,
with
H_Exchange= -∑_<i,j> J^Mn-Mn_ij S_i·S_j -∑_<i,j> J^Fe-Mn_ij S_i·S_j -∑_<i,j> J^Fe-Fe_ij S_i·S_j,
H_DMI= -∑_<i,j>D^Mn-Mn_ij· [S_i×S_j]-∑_<i,j>D^Fe-Mn_ij· [S_i×S_j]-∑_<i,j>D^Fe-Fe_ij· [S_i×S_j],
H_Anisotropy=- K^Mn∑_i (S_i·e_i)^2 - K^Fe∑_i (S_i·e_i)^2,
H_Zeeman =- ∑_iμ_iB·S_i,
where we assign indices i and j to denote specific sites, each associated with a magnetic moment. The magnetic moment is represented by the unit vector S. The Heisenberg exchange coupling strength J^X-Y_ij describes the interaction between an atom X on site i and an atom Y on site j, where a negative value indicates AFM interaction. Similarly, we use the notation D for the Dzyaloshinskii-Moriya interaction vector, K for the magnetic anisotropy energy, and μ_iB to represent the Zeeman coupling to the atomic spin moment μ at site i. It is important to note that the Fe-Mn and Fe-Fe interactions are only considered in the MnPdFe/Ir(111) and MnPd_2Fe/Ir(111) systems. For our spin atomistic simulations, we adopt both periodic and finite boundary conditions to model the extended and confined two-dimensional system, respectively, with cells containing 249^2, 300^2, 390^2 sites.
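For illustration, the energy functional above can be evaluated directly once the interaction parameters are tabulated. The following is a minimal sketch (our own, in Python/NumPy, independent of the Spirit code used for the actual simulations) for a single magnetic species with one exchange constant and a precomputed neighbor list; all array names are illustrative assumptions:

import numpy as np

def heisenberg_energy(spins, neighbors, J, D_vecs, K, easy_axis, mu, B):
    """Evaluate H = H_exchange + H_DMI + H_anisotropy + H_Zeeman for one species.

    spins     : (N, 3) unit vectors S_i
    neighbors : list of (i, j) pairs, each counted once
    J         : scalar exchange constant for the listed pairs (negative = AFM)
    D_vecs    : (len(neighbors), 3) DM vectors D_ij
    K         : anisotropy constant; easy_axis : (3,) unit vector
    mu, B     : moment magnitude and (3,) external field
    """
    E_ex = E_dmi = 0.0
    for k, (i, j) in enumerate(neighbors):
        E_ex  += -J * np.dot(spins[i], spins[j])
        E_dmi += -np.dot(D_vecs[k], np.cross(spins[i], spins[j]))
    E_ani = -K * np.sum(np.dot(spins, easy_axis) ** 2)
    E_zee = -mu * np.sum(spins @ B)
    return E_ex + E_dmi + E_ani + E_zee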
§.§ Data availability
The data needed to evaluate the conclusions in the paper are present in the paper and the Supplementary Information.
§.§ Code availability
We used the following codes:
Quantum ESPRESSO which can be found at <https:/www.quantum-espresso.org/download>, SPIRIT can be found at <https://github.com/spirit-code/spirit>, and
the KKR code is a rather complex ab-initio DFT-based code, which is in general impossible to use without proper training on the theory behind it and on the practical utilization of the code. We are happy to provide the latter code upon request.
§ ACKNOWLEDGEMENTS
We acknowledge fruitful discussions with Nihad Abuawwad. This work was supported by the Federal Ministry of Education and Research of Germany
in the framework of the Palestinian-German Science Bridge (BMBF grant number
01DH16027) and the Deutsche Forschungsgemeinschaft (DFG) through SPP 2137 “Skyrmionics” (Project LO 1659/8-1).
The authors gratefully acknowledge
the computing time granted through JARA on the supercomputer JURECA
at Forschungszentrum Jülich.
Author contributions
S.L. initiated, designed and supervised the project. A.A. performed the simulations and post-processed the data. A.A., M.S., M.A., and S.L. discussed the results. A.A. and S.L. wrote the manuscript, to which all co-authors contributed.
Competing Interests
The authors declare no competing interests.
Correspondence and requests for materials should be addressed to A.A. (email: [email protected]) or to S.L. (email: [email protected]).
§ REFERENCES
10
url<#>1urlprefixURL
Jungwirth2016
authorJungwirth, T., authorMarti, X.,
authorWadley, P. & authorWunderlich, J.
titleAntiferromagnetic spintronics.
journalNat. Nanotechnol.
volume11, pages231–241
(year2016).
gomonay2017concepts
authorGomonay, O., authorJungwirth, T. &
authorSinova, J.
titleConcepts of antiferromagnetic spintronics.
journalPhys. Stat. Sol. RRL.
volume11, pages1700022
(year2017).
baltz2018antiferromagnetic
authorBaltz, V. et al.
titleAntiferromagnetic spintronics.
journalRev. Mod. Phys.
volume90, pages015005
(year2018).
gomonay2018antiferromagnetic
authorGomonay, O., authorBaltz, V.,
authorBrataas, A. & authorTserkovnyak, Y.
titleAntiferromagnetic spin textures and dynamics.
journalNat. Phys. volume14,
pages213 (year2018).
vsmejkal2018topological
authorŠmejkal, L., authorMokrousov, Y.,
authorYan, B. & authorMacDonald, A. H.
titleTopological antiferromagnetic spintronics.
journalNat. Phys. volume14,
pages242–251 (year2018).
nvemec2018antiferromagnetic
authorNěmec, P., authorFiebig, M.,
authorKampfrath, T. & authorKimel, A. V.
titleAntiferromagnetic opto-spintronics.
journalNat. Phys. volume14,
pages229–241 (year2018).
legrand2020room
authorLegrand, W. et al.
titleRoom-temperature stabilization of antiferromagnetic
skyrmions in synthetic antiferromagnets.
journalNat. Mater. volume19,
pages34 (year2020).
Jani2021
authorJani, H. et al.
titleAntiferromagnetic half-skyrmions and bimerons at room
temperature.
journalNature volume590,
pages74 (year2021).
aldarawsheh2022emergence
authorAldarawsheh, A. et al.
titleEmergence of zero-field non-synthetic single and
interchained antiferromagnetic skyrmions in thin films.
journalNat. Commun. volume13,
pages7369 (year2022).
dohi2019formation
authorDohi, T., authorDuttaGupta, S.,
authorFukami, S. & authorOhno, H.
titleFormation and current-induced motion of synthetic
antiferromagnetic skyrmion bubbles.
journalNat. Commun. volume10,
pages5153 (year2019).
finco2021imaging
authorFinco, A. et al.
titleImaging non-collinear antiferromagnetic textures via
single spin relaxometry.
journalNat. Commun. volume12,
pages767 (year2021).
juge2022skyrmions
authorJuge, R. et al.
titleSkyrmions in synthetic antiferromagnets and their
nucleation via electrical current and ultra-fast laser illumination.
journalNat. Commun. volume13,
pages4807 (year2022).
chen2022controllable
authorChen, R. et al.
titleControllable generation of antiferromagnetic
skyrmions in synthetic antiferromagnets with thermal effect.
journalAdv. Funct. Mater.
volume32, pages2111906
(year2022).
barker2016static
authorBarker, J. & authorTretiakov, O. A.
titleStatic and dynamical properties of antiferromagnetic
skyrmions in the presence of applied current and temperature.
journalPhys. Rev. Lett.
volume116, pages147203
(year2016).
olejnik2018terahertz
authorOlejník, K. et al.
titleTerahertz electrical writing speed in an
antiferromagnetic memory.
journalSci. Adv. volume4,
pageseaar3566 (year2018).
aldarawsheh2023
authorAldarawsheh, A., authorSallermann, M.,
authorAbusaa, M. & authorLounis, S.
titleA spin model for intrinsic antiferromagnetic
skyrmions on a triangular lattice.
journalFront. Phys. volume11
(year2023).
kampfrath2011coherent
authorKampfrath, T. et al.
titleCoherent terahertz control of antiferromagnetic spin
waves.
journalNat. Photonics
volume5, pages31–34 (year2011).
gomonay2014spintronics
authorGomonay, E. & authorLoktev, V.
titleSpintronics of antiferromagnetic systems.
journalLow Temp. Phys.
volume40, pages17–35
(year2014).
baierl2016terahertz
authorBaierl, S. et al.
titleTerahertz-driven nonlinear spin response of
antiferromagnetic nickel oxide.
journalPhys. Rev. Lett.
volume117, pages197201
(year2016).
bhattacharjee2018neel
authorBhattacharjee, N. et al.
titleNéel spin-orbit torque driven antiferromagnetic
resonance in mn 2 au probed by time-domain thz spectroscopy.
journalPhys. Rev. Lett.
volume120, pages237201
(year2018).
rosales2015three
authorRosales, H. D., authorCabra, D. C. &
authorPujol, P.
titleThree-sublattice skyrmion crystal in the
antiferromagnetic triangular lattice.
journalPhys. Rev. B volume92,
pages214439 (year2015).
zhang2016antiferromagnetic
authorZhang, X., authorZhou, Y. &
authorEzawa, M.
titleAntiferromagnetic skyrmion: stability, creation and
manipulation.
journalSci. Rep. volume6,
pages24795 (year2016).
velkov2016phenomenology
authorVelkov, H. et al.
titlePhenomenology of current-induced skyrmion motion in
antiferromagnets.
journalNew J. Phys. volume18,
pages075016 (year2016).
keesman2016skyrmions
authorKeesman, R., authorRaaijmakers, M.,
authorBaerends, A., authorBarkema, G. &
authorDuine, R.
titleSkyrmions in square-lattice antiferromagnets.
journalPhys. Rev. B volume94,
pages054402 (year2016).
jin2016dynamics
authorJin, C., authorSong, C., authorWang,
J. & authorLiu, Q.
titleDynamics of antiferromagnetic skyrmion driven by the
spin hall effect.
journalAppl. Phys. Lett.
volume109, pages182404
(year2016).
gobel2017antiferromagnetic
authorGöbel, B., authorMook, A.,
authorHenk, J. & authorMertig, I.
titleAntiferromagnetic skyrmion crystals: Generation,
topological hall, and topological spin hall effect.
journalPhys. Rev. B volume96,
pages060406 (year2017).
tomasello2017performance
authorTomasello, R. et al.
titlePerformance of synthetic antiferromagnetic racetrack
memory: domain wall versus skyrmion.
journalJ. Phys. D volume50,
pages325302 (year2017).
akosa2018theory
authorAkosa, C. A., authorTretiakov, O.,
authorTatara, G. & authorManchon, A.
titleTheory of the topological spin hall effect in
antiferromagnetic skyrmions: Impact on current-induced motion.
journalPhys. Rev. Lett.
volume121, pages097204
(year2018).
silva2019antiferromagnetic
authorSilva, R., authorSilva, R.,
authorPereira, A. & authorMoura-Melo, W.
titleAntiferromagnetic skyrmions overcoming obstacles in a
racetrack.
journalJ. Phys.: Condens. Matter
volume31, pages225802
(year2019).
fernandes2019skyrmions
authorFernandes, R., authorLopes, R. &
authorPereira, A.
titleSkyrmions and merons in two-dimensional
antiferromagnetic systems.
journalSolid State Commun.
volume290, pages55–59
(year2019).
gao2020fractional
authorGao, S. et al.
titleFractional antiferromagnetic skyrmion lattice induced
by anisotropic couplings.
journalNature volume586,
pages37 (year2020).
li2020bimeron
authorLi, X. et al.
titleBimeron clusters in chiral antiferromagnets.
journalnpj Comp. Mater.
volume6, pages169 (year2020).
shen2020current
authorShen, L. et al.
titleCurrent-induced dynamics and chaos of
antiferromagnetic bimerons.
journalPhys. Rev. Lett.
volume124, pages037202
(year2020).
silva2021antiferromagnetic
authorSilva, R.
titleAntiferromagnetic-bimeron dynamics driven by a
spin-polarized current at an inhomogeneous racetrack.
journalPhys. Lett. A
volume403, pages127399
(year2021).
Lin2013
authorLin, S.-Z., authorReichhardt, C.,
authorBatista, C. D. & authorSaxena, A.
titleParticle model for skyrmions in metallic chiral
magnets: Dynamics, pinning, and creep.
journalPhys. Rev. B volume87,
pages214419 (year2013).
Nagaosa2013
authorNagaosa, N. & authorTokura, Y.
titleTopological properties and dynamics of magnetic
skyrmions.
journalNat. Nanotechnol.
volume8, pages899 (year2013).
Jiang2016
authorJiang, W. et al.
titleDirect observation of the skyrmion Hall effect.
journalNat. Phys. volume13,
pages162 (year2017).
woo2016observation
authorWoo, S. et al.
titleObservation of room-temperature magnetic skyrmions
and their current-driven dynamics in ultrathin metallic ferromagnets.
journalNat. Mater. volume15,
pages501–506 (year2016).
Litzius2017
authorLitzius, K. et al.
titleSkyrmion hall effect revealed by direct time-resolved
x-ray microscopy.
journalNat. Phys. volume13,
pages170–175 (year2017).
tretiakov2007vortices
authorTretiakov, O. & authorTchernyshyov, O.
titleVortices in thin ferromagnetic films and the skyrmion
number.
journalPhys. Rev. B volume75,
pages012408 (year2007).
ezawa2011compact
authorEzawa, M.
titleCompact merons and skyrmions in thin chiral magnetic
films.
journalPhys. Rev. B volume83,
pages100408 (year2011).
phatak2012direct
authorPhatak, C., authorPetford-Long, A. &
authorHeinonen, O.
titleDirect observation of unconventional topological spin
structure in coupled magnetic discs.
journalPhys. Rev. Lett.
volume108, pages067205
(year2012).
lin2015skyrmion
authorLin, S.-Z., authorSaxena, A. &
authorBatista, C. D.
titleSkyrmion fractionalization and merons in chiral
magnets with easy-plane anisotropy.
journalPhys. Rev. B volume91,
pages224407 (year2015).
tan2016topology
authorTan, A. et al.
titleTopology of spin meron pairs in coupled Ni/Fe/Co/Cu
(001) disks.
journalPhys. Rev. B volume94,
pages014433 (year2016).
yu2018transformation
authorYu, X. et al.
titleTransformation between meron and skyrmion topological
spin textures in a chiral magnet.
journalNature volume564,
pages95–98 (year2018).
lu2020meron
authorLu, X., authorFei, R., authorZhu, L.
& authorYang, L.
titleMeron-like topological spin defects in monolayer
CrCl_3.
journalNat. Commun. volume11,
pages4724 (year2020).
augustin2021properties
authorAugustin, M., authorJenkins, S.,
authorEvans, R. F., authorNovoselov, K. S. &
authorSantos, E. J.
titleProperties and dynamics of meron topological spin
textures in the two-dimensional magnet CrCl_3.
journalNat. Commun. volume12,
pages185 (year2021).
hayami2021meron
authorHayami, S. & authorYambe, R.
titleMeron-antimeron crystals in noncentrosymmetric
itinerant magnets on a triangular lattice.
journalPhys. Rev. B
volume104, pages094425
(year2021).
xia2022qubits
authorXia, J., authorZhang, X., authorLiu,
X., authorZhou, Y. & authorEzawa, M.
titleQubits based on merons in magnetic nanodisks.
journalCommun. Mater.
volume3, pages88 (year2022).
donnelly2021experimental
authorDonnelly, C. et al.
titleExperimental observation of vortex rings in a bulk
magnet.
journalNat. Phys. volume17,
pages316–321 (year2021).
gao2019creation
authorGao, N. et al.
titleCreation and annihilation of topological meron pairs
in in-plane magnetized films.
journalNat. Commun. volume10,
pages5603 (year2019).
kolesnikov2018composite
authorKolesnikov, A. et al.
titleComposite topological structure of domain walls in
synthetic antiferromagnets.
journalSci. Rep. volume8,
pages15794 (year2018).
amin2023antiferromagnetic
authorAmin, O. et al.
titleAntiferromagnetic half-skyrmions electrically
generated and controlled at room temperature.
journalNat. Nanotechnol. pages1–5
(year2023).
chmiel2018observation
authorChmiel, F. P. et al.
titleObservation of magnetic vortex pairs at room
temperature in a planar α-Fe_2O_3/Co heterostructure.
journalNat. Mater. volume17,
pages581–585 (year2018).
jani2021antiferromagnetic
authorJani, H. et al.
titleAntiferromagnetic half-skyrmions and bimerons at room
temperature.
journalNature volume590,
pages74 (year2021).
radaelli2020micromagnetic
authorRadaelli, P., authorRadaelli, J.,
authorWaterfield-Price, N. & authorJohnson, R.
titleMicromagnetic modeling and imaging of vortex| meron
structures in an oxide| metal heterostructure.
journalPhys. Rev. B
volume101, pages144420
(year2020).
romming2013writing
authorRomming, N. et al.
titleWriting and deleting single magnetic skyrmions.
journalScience volume341,
pages636 (year2013).
dupe2014tailoring
authorDupé, B., authorHoffmann, M.,
authorPaillard, C. & authorHeinze, S.
titleTailoring magnetic skyrmions in ultra-thin transition
metal films.
journalNat. Commun. volume5,
pages4030 (year2014).
Simon2014
authorSimon, E., authorPalotás, K.,
authorRózsa, L., authorUdvardi, L. &
authorSzunyogh, L.
title Formation of magnetic skyrmions with tunable
properties in PdFe bilayer deposited on Ir (111).
journalPhys. Rev. B volume90,
pages094410 (year2014).
crum2015perpendicular
authorCrum, D. M. et al.
titlePerpendicular reading of single confined magnetic
skyrmions.
journalNat. Commun. volume6,
pages8541 (year2015).
romming2015field
authorRomming, N., authorKubetzka, A.,
authorHanneken, C., authorvon Bergmann, K. &
authorWiesendanger, R.
titleField-dependent size and shape of single magnetic
skyrmions.
journalPhys. Rev. Lett.
volume114, pages177203
(year2015).
dos2016chirality
authordos Santos Dias, M., authorBouaziz, J.,
authorBouhassoune, M., authorBlügel, S. &
authorLounis, S.
titleChirality-driven orbital magnetic moments as a new
probe for topological magnetic structures.
journalNat. Commun. volume7,
pages13613 (year2016).
fernandes2018universality
authorFernandes, I. L., authorBouaziz, J.,
authorBlügel, S. & authorLounis, S.
titleUniversality of defect-skyrmion interaction
profiles.
journalNat. Commun. volume9,
pages4395 (year2018).
fernandes2020defect
authorFernandes, I. L., authorBouhassoune, M. &
authorLounis, S.
titleDefect-implantation for the all-electrical detection
of non-collinear spin-textures.
journalNat. Commun. volume11,
pages1602 (year2020).
Arjana2020
authorArjana, I. G., authorLima Fernandes, I.,
authorChico, J. & authorLounis, S.
titleSub-nanoscale atom-by-atom crafting of
skyrmion-defect interaction profiles.
journalSci. Rep. volume10,
pages14655 (year2020).
bouhassoune2021friedel
authorBouhassoune, M. & authorLounis, S.
titleFriedel oscillations induced by magnetic skyrmions:
From scattering properties to all-electrical detection.
journalNanomaterials
volume11, pages194 (year2021).
lima2022spin
authorLima Fernandes, I., authorBlügel, S. &
authorLounis, S.
titleSpin-orbit enabled all-electrical readout of chiral
spin-textures.
journalNat. Commun. volume13,
pages1576 (year2022).
muller2019spirit
authorMüller, G. P. et al.
titleSpirit: Multifunctional framework for atomistic spin
simulations.
journalPhys. Rev. B volume99,
pages224414 (year2019).
shinjo2000magnetic
authorShinjo, T., authorOkuno, T.,
authorHassdorf, R., authorShigeto, K. &
authorOno, T.
titleMagnetic vortex core observation in circular dots of
permalloy.
journalscience volume289,
pages930–932 (year2000).
giannozzi2009quantum
authorGiannozzi, P. et al.
titleQuantum espresso: a modular and open-source software
project for quantum simulations of materials.
journalJ. Phys.: Condens. Matter
volume21, pages395502
(year2009).
dal2014pseudopotentials
authorDal Corso, A.
titlePseudopotentials periodic table: From H to
Pu.
journalComput. Mater. Sci.
volume95, pages337–350
(year2014).
Papanikolaou2002
authorPapanikolaou, N., authorZeller, R. &
authorDederichs, P. H.
titleConceptual improvements of the KKR method.
journalJ. Phys.: Condens. Matter
volume14, pages2799 (year2002).
Bauer2014
authorBauer, D. S. G.
titleDevelopment of a relativistic full-potential
first-principles multiple scattering Green function method applied to complex
magnetic textures of nano structures at surfaces
(publisherForschungszentrum Jülich Jülich,
year2014).
Liechtenstein1987
authorLiechtenstein, A., authorKatsnelson, M.,
authorAntropov, V. & authorGubanov, V.
titleLocal spin density functional approach to the theory
of exchange interactions in ferromagnetic metals and alloys.
journalJ. Magn. Magn. Mater.
volume67, pages65 – 74
(year1987).
Ebert2009
authorEbert, H. & authorMankovsky, S.
titleAnisotropic exchange coupling in diluted magnetic
semiconductors: Ab initio spin-density functional theory.
journalPhys. Rev. B volume79,
pages045209 (year2009).
|
http://arxiv.org/abs/2306.08341v2
|
20230614081835
|
Ground-VIO: Monocular Visual-Inertial Odometry with Online Calibration of Camera-Ground Geometric Parameters
|
[
"Yuxuan Zhou",
"Xingxing Li",
"Shengyu Li",
"Xuanbin Wang",
"Zhiheng Shen"
] |
cs.RO
|
[
"cs.RO"
] |
Ground-VIO: Monocular Visual-Inertial Odometry with Online Calibration of Camera-Ground Geometric Parameters
Yuxuan Zhou, Xingxing Li, Shengyu Li, Xuanbin Wang, Zhiheng Shen
This work was supported in part by the National Key Research and Development Program of China (2021YFB2501102), the National Natural Science Foundation of China (Grant 41974027), and the Sino-German mobility programme (Grant No. M-0054).(Corresponding author: Xingxing Li.)
The authors are with School of Geodesy and Geomatics, Wuhan University, China (e-mail: [email protected]).
July 31, 2023
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Monocular visual-inertial odometry (VIO) is a low-cost solution to provide high-accuracy, low-drifting pose estimation. However, it has been meeting challenges in vehicular scenarios due to limited dynamics and lack of stable features. In this paper, we propose Ground-VIO, which utilizes ground features and the specific camera-ground geometry to enhance monocular VIO performance in realistic road environments. In the method, the camera-ground geometry is modeled with vehicle-centered parameters and integrated into an optimization-based VIO framework. These parameters could be calibrated online and simultaneously improve the odometry accuracy by providing stable scale-awareness. Besides, a specially designed visual front-end is developed to stably extract and track ground features via the inverse perspective mapping (IPM) technique. Both simulation tests and real-world experiments are conducted to verify the effectiveness of the proposed method. The results show that our implementation could dramatically improve monocular VIO accuracy in vehicular scenarios, achieving comparable or even better performance than state-of-the-art stereo VIO solutions. The system could also be used for the auto-calibration of IPM, which is widely used in vehicle perception. A toolkit for ground feature processing, together with the experimental datasets, will be made open-source[1].
[1]https://github.com/GREAT-WHU/gv_tools
Visual-inertial odometry, autonomous vehicle navigation, camera-ground geometry, inverse perspective mapping.
§ INTRODUCTION
Vision-based solutions have been pivotal in the development of intelligent vehicle applications <cit.><cit.>. The low-cost camera could provide high-resolution texture information of the environment, enabling high-level perception such as object detection <cit.> and scene parsing <cit.>. On the other hand, visual simultaneous localization and mapping (VSLAM) provides a feasible approach for accurate vehicle pose estimation, which could be used for navigation tasks <cit.><cit.>. Such aspect of vision-based navigation is later enhanced with the introduction of inertial measurement unit (IMU), bringing about better stability and accuracy with very limited additional expenses <cit.>. Besides, IMU could resolve the scale ambiguity in monocular VSLAM, facilitating more practical use cases. The outstanding performance of visual-inertial odometry (VIO) and visual-inertial navigation system (VINS) has been demonstrated in unmanned aerial vehicle (UAV) applications <cit.><cit.>.
However, as a possible low-cost navigation solution, monocular VIO has been meeting challenges when applied to ground vehicles.
Unlike UAVs, it is impractical for a ground vehicle to sufficiently excite the IMU during regular maneuvers. This would significantly affect the pose estimation performance of VIO due to lack of observability<cit.><cit.>, especially for the scale which is unresolvable in monocular vision. Although stereo camera setups can mitigate this issue to some extent<cit.>, they entail additional expenses and computation cost. Other schemes turn to a higher integration level, introducing other sensors (e. g. wheel encoder, GNSS and LiDAR) to achieve better navigation performance<cit.>.
It is noted that the ground itself provides a natural and powerful constraint that could be utilized to enhance VSLAM/VIO performance. Considering the fact that the vehicle moves on the ground, the vehicle-mounted sensor and the local ground plane have a relatively fixed geometric relationship, depending mainly on the sensor installation and the vehicle size. For VSLAM/VIO, the local ground plane could be expressed as a specific plane in the camera frame, as shown in Fig. <ref>. We term this fixed relationship the camera-ground geometry, which could be used to constrain the landmark depths, thus deriving metric-scale geometric information that is essential for high-accuracy pose estimation. However, few studies have given an in-depth insight into the application of camera-ground geometry in VIO.
In fact, such camera-ground geometry has been widely, sometimes implicitly, applied in vehicle perception. Typically, the well-known inverse perspective mapping (IPM) technique<cit.><cit.> uses the pre-calibrated camera-ground geometry to generate bird-eye-view (BEV) images, also known as around-view monitoring (AVM)<cit.>, to efficiently perceive the surrounding road environment<cit.>. From this aspect, the online auto-calibration of the camera-ground geometry is also a meaningful issue and still remains unsolved.
In the proposed Ground-VIO, we introduce the online estimation of the camera-ground geometric parameters, termed as C-G parameters, into a monocular VIO, which could not only improve the odometry performance but also provide an approach for the auto-calibration of IPM. The contributions of this work are as follows:
* A vehicle-centered model is proposed to parameterize the camera-ground geometry, which is integrated into the monocular VIO for online calibration and to improve the navigation performance.
* A novel visual front-end is developed to precisely track features on the ground by making use of the camera-ground geometry and IPM.
* Both simulation tests and real-world experiments were conducted to validate different aspects of the system, including the estimation of C-G parameters, the odometry accuracy and the IPM calibration performance.
* We make the ground feature processing module and the test data sequences open-source.
The rest of the paper is organized as follows. In Section II,
related works are discussed. In Section III, the system overview is presented. In Section IV, the core idea of camera-ground geometric model is explained. Section V presents the implementation details of Ground-VIO. The system performance is evaluated in Section VI and Section VII through simulation tests and real-world experiments respectively. The conclusion is finally given in Section VIII.
§ RELATED WORK
§.§ Visual-Inertial Odometry
VIO/VINS has been extensively investigated in the past decades and applied in both UAV and ground vehicle applications. Generally, the implementations could be divided into filter-based and optimization-based methods. For filter-based methods, the representative framework is the multi-state constraint Kalman filter (MSCKF)<cit.>, which maintains historical IMU poses in the state vector and uses common-view feature observations to construct geometric constraints among the poses. Variants of MSCKF have been developed to improve the framework by introducing observability constraints, extrinsic calibration<cit.>, multi-IMU/camera configurations<cit.>, landmark states<cit.> and so on. For optimization-based implementations, the mainstream methods use a factor graph which jointly optimizes IMU preintegration factors <cit.> and visual re-projection factors to estimate the navigation states and landmark positions<cit.>. These methods are expected to achieve better performance through iterations and relinearizations, at the expense of higher computation cost. The sliding window mechanism is a usual way to ensure real-time processing<cit.><cit.>, while some other implementations employ a local map to limit the problem size<cit.>.
Despite the advantages of low cost and high accuracy in ideal conditions, VIO/VINS has been meeting challenges when employed for vehicular applications due to limited dynamics, fast motion, and lack of stable features. To improve the practicality of such methods, it is necessary to introduce other sensors or fully utilize the inherent constraint of a ground vehicle.
§.§ Vehicle Navigation Utilizing the Ground Constraint
It is a natural idea to utilize the ground constraint when designing a navigation system for ground vehicles. Most of these researches model the ground as a local plane (or manifold) and constrain the vehicle motion on it, which is sometimes implicitly comprised in a vehicle-centered non-holonomic constraint (NHC). As pointed out in<cit.>, these implementations could be roughly divided into deterministic and stochastic SE(2) constraints. Deterministic SE(2) constraints strictly constrain the vehicle poses via parameterization or a deterministic model<cit.><cit.>, while stochastic SE(2) constraints are applied to SE(3) pose estimation with a time-variant and probabilistic constraint<cit.>. Generally, the former is more suitable for indoor or small-scale environments, while the latter shows superiority in outdoor environments with better resistance to outliers. Although some of these methods use a vision-based sensor setup, most of them don't directly associate the visual observations with the ground. Differently, in<cit.>, stereo visual features are utilized to estimate the ground manifold representation, which is later used to constrain the vehicle pose.
Compared to the mentioned methods, our work focuses on the ground observed by the vehicle-mounted camera rather than the vehicle motion constrained on the ground. Actually, there is a significant difference between “the vehicle maneuvers on the ground” and “the observed features are on the ground”. For VSLAM/VIO, the latter statement could be used to constrain the landmark depths based on the relatively stable camera-ground geometry, thus providing instantaneous scale-awareness to the system. This characteristic has been pointed out in <cit.>, all of which use the observed ground structure to realize scale-aware VSLAM based on a monocular camera. However, the camera height should be pre-calibrated in these methods, which limits the usability. Literature <cit.> takes the camera-ground geometry into the state vector of VIO, but discusses little about its mechanism. In this work, we demonstrate that the camera-ground geometry could be calibrated online in a monocular VIO without other infrastructure and could greatly improve the odometry performance.
§ CAMERA-GROUND GEOMETRIC MODEL
In this section, the camera-ground geometric model utilized in the proposed system is firstly introduced.
For a camera mounted on a ground vehicle, it has a specific geometric relationship with the ground. Assuming the local ground is flat and the vehicle is a rigid body (temporarily ignoring the suspension system), the ground plane in the camera frame c is a specific plane that could be unambiguously determined by its normal vector and distance<cit.>.
Here, for convenience, we parameterize the local ground plane using the height h from the camera center to the ground and a two-step rotation which makes the X-Z plane of the camera frame parallel with the ground plane, as illustrated in Fig. <ref>. Specifically, we firstly rotate the real camera frame c around the Z-axis to make its X-axis parallel with the ground plane. Secondly, we rotate the obtained frame around the X-axis to get the expected virtual camera frame c_v, which could also be seen as the reference frame of IPM<cit.>.
Thus, the ground plane in the camera frame could be expressed by the following one-row equation
(𝐑^c_c_v(α,θ)^⊤ 𝐩^c_f)_y - h = 0
with
𝐑^c_c_v(α,θ) =
[ cosα  -sinα  0
  sinα   cosα  0
  0      0     1 ]
[ 1  0     0
  0  cosθ  -sinθ
  0  sinθ   cosθ ]
where (·)_x/y/z denotes the first/second/third row of a three-row vector/matrix, and α and θ are the magnitudes of the two-step rotation, corresponding to the roll and pitch angles of the camera. The triplet (h, θ, α) is defined as the C-G parameters in this paper, which is similar to the parameterization in <cit.>.
Such parameterization makes the estimation of camera-ground geometry straight-forward (i.e., one height and two angles), and it becomes easy to use IMU attitude to compensate the geometry (see Sect. IV-E). It is reasonable to expect the C-G parameters, which indicate the local ground plane, are statistically stable in common road environments without notable change of the sensor alignment or vehicle load. The proposed Ground-VIO fully utilizes this aspect, and several techniques would be given later to deal with complex conditions.
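To illustrate the parameterization, the following minimal sketch (Python/NumPy, our own illustration rather than the authors' implementation; the sample values of (h, θ, α) are made up) constructs the two-step rotation above and verifies that a ground point expressed in the camera frame satisfies the one-row plane equation:

import numpy as np

def R_c_cv(alpha, theta):
    """Two-step rotation: Rz(alpha) followed by Rx(theta)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, ct, -st], [0.0, st, ct]])
    return Rz @ Rx

def ground_plane_residual(p_c, h, theta, alpha):
    """Left-hand side of the plane equation; zero for points on the local ground plane."""
    return (R_c_cv(alpha, theta).T @ p_c)[1] - h

# Example: a ground point expressed in the virtual frame (Y equals the camera height),
# then rotated into the camera frame; h = 1.8 m with small roll/pitch.
h, theta, alpha = 1.8, np.deg2rad(2.0), np.deg2rad(1.0)
p_cv = np.array([0.5, h, 8.0])
p_c = R_c_cv(alpha, theta) @ p_cv
print(ground_plane_residual(p_c, h, theta, alpha))  # ~0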
Given that a landmark f in the environment is observed by the camera, we could get
𝐩^c_f = 𝐮_f / λ_f ,   𝐮_f = [x  y  1]^⊤ = π_c^-1 [u  v  1]^⊤
where [u v]^⊤ is the pixel coordinates of f on the image, 𝐮_f = [x y 1]^⊤ is the normalized image coordinates, π_c is the camera projection matrix, λ_f is the inverse depth of f.
Equation (<ref>) and (<ref>) reveal that, with known camera-ground geometry, the metric-scale (inverse) depth of a ground feature on the image could be instantaneously recovered
λ_f = (1/h) (𝐑^c_c_v(α,θ)^⊤ 𝐮_f)_y
which yields great significance for monocular VSLAM/VIO.
By combining (<ref>) and (<ref>), the camera-ground geometry can be applied in VSLAM/VIO to constrain the landmark depths. Here, a simplified 1-D model is used to qualitatively analyze how it makes sense in VSLAM/VIO. As shown in Fig. <ref>, the odometry system needs to estimate the traveled distance d_ij through the tracking of a landmark f. Assuming the C-G parameters are known and lead to a comprehensive camera height ĥ, the traveled distance d_ij could be obtained
d_ij = ĥ·(1/(𝐮^i_f)_y-1/(𝐮^j_f)_y)
where 𝐮^i_f and 𝐮^j_f are observations of f at epoch i and j.
Assuming the visual observations are noiseless, the estimation of d_ij only depends on the accuracy of ĥ
d_ij∝ĥ
Given a typical case of vehicular scenario where the comprehensive camera height ĥ is 2 m with 2 cm error, the estimation of d_ij would have 1% relative error. In other words, the tracking of just one ground feature could derive geometric information about the translation with 1% relative error, which is exceedingly meaningful for a monocular visual-inertial system.
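As a small numerical illustration of this 1-D model (hypothetical numbers, not taken from the paper's experiments), a 2 cm error in a 2 m comprehensive camera height indeed maps to a 1% relative error in the recovered travel distance:

# Same ground feature observed at epochs i and j (normalized image y-coordinates).
u_y_i, u_y_j = 0.25, 0.40   # the feature is farther at epoch i, closer at epoch j

h_true = 2.00   # true comprehensive camera height [m]
h_hat = 2.02    # estimated height with a 2 cm error [m]

d_true = h_true * (1.0 / u_y_i - 1.0 / u_y_j)
d_est = h_hat * (1.0 / u_y_i - 1.0 / u_y_j)

print(d_true, d_est, abs(d_est - d_true) / d_true)  # 3.0 m, 3.03 m, relative error 0.01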
It comes to the problems that, 1) how the camera-ground geometry could be integrated into the common VIO, and 2) how the C-G parameters could be obtained or estimated online. These problems would be explained in the following section.
§ SYSTEM IMPLEMENTATION
In this section, the overview and implementation details of the proposed Ground-VIO will be presented.
§.§ System Overview
The overall structure of Ground-VIO is shown in Fig. <ref>. The basic structure of the system follows the classic pipeline of optimization-based VIO <cit.> but with additional camera-ground-related mechanisms.
Basically, the collected images and IMU data are processed for common VIO initialization and optimization routines.
On this basis, an additional front-end is designed for ground feature processing and works in parallel with the common feature processor. The ground feature processor extracts and tracks features on the BEV images generated by IPM, which enables efficient and accurate tracking. A semantic segmentation module could be employed for ground segmentation but is not necessary, which would be discussed later.
In factor graph optimization, the ground features are treated as a subset of visual features with additional camera-ground geometric constraints. These constraints could significantly improve the VIO performance and enable the online estimation of C-G parameters.
Under the situation that the C-G parameters are completely unknown at the beginning, the C-G initialization module would be called every time after common factor graph optimization. Once initialized, the camera-ground-related mechanisms in feature processing and factor graph optimization would be switched on. The C-G parameters would then be continuously refined during VIO running.
§.§ Ground Feature Processing
In typical implementations of VIO, feature points in the images are continuously tracked to construct visual measurements.
The proposed system follows the typical VIO routines<cit.> to detect and track environmental features in the camera view. To be specific, the feature detection method in<cit.> and KLT optical flow algorithm<cit.> are employed, and a fundamental matrix-based RANSAC is used to detect outliers.
As to the ground features, their unique distribution and fast motions on the perspective image make them hard to track. The near-to-far ground plane is highly “warped” on the image, and the near points move drastically despite their better observation geometry.
Instead of the common method, we develop a special module for more precise data association of the ground features.
It is noted that, with the camera-ground geometry, the 3-D position of every pixel on the perspective image corresponding to the ground could be instantaneously obtained, referring to (<ref>). From another perspective, we could efficiently generate a BEV image using IPM, and every pixel on the image is directly related to a 3-D position.
The following mapping relationship exists between the metric-scale 3-D point, the point on the perspective image and the point on the BEV image:
𝐩^c_f=
1/λ_f·[
x
y
1
] = h·𝐑^c_c_(α,θ)[
x_
1
-y_]
where 𝐮_=[x_ y_ 1]^⊤ is the normalized image coordinates of f on the BEV image, the inverse depth λ_f refers to (<ref>). The generation of BEV images through IPM refers to<cit.>.
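To make this perspective-BEV-3-D mapping concrete, the following numpy sketch implements both directions of the relation above for a single ground pixel. The Euler-angle convention used for 𝐑^c_c_(α,θ) is an illustrative assumption (the paper defines the rotation through its own frame setup), and all numeric values are placeholders; this is not the system's C++ implementation.

import numpy as np
from scipy.spatial.transform import Rotation

h, theta, alpha = 1.78, -1.15, -0.15              # C-G parameters (m, deg, deg)
R = Rotation.from_euler("xz", [theta, alpha], degrees=True).as_matrix()  # assumed convention

def persp_to_point(u):                            # u = [x, y, 1], normalized coordinates
    lam = (R.T @ u)[1] / h                        # inverse depth from the C-G geometry
    return u / lam                                # metric 3-D point in the camera frame

def point_to_bev(p_c):                            # camera-frame point -> BEV coordinates
    g = R.T @ p_c                                 # ground-aligned frame, g[1] equals h
    return g[0] / h, -g[2] / h                    # (x_bev, y_bev)

def bev_to_point(x_bev, y_bev):                   # right-hand side of the equation above
    return h * (R @ np.array([x_bev, 1.0, -y_bev]))

u = np.array([0.05, 0.35, 1.0])
p = persp_to_point(u)
print(p, bev_to_point(*point_to_bev(p)))          # both expressions give the same 3-D point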
The knowledge of camera-ground geometry makes the accurate prediction of ground feature tracking possible. Every time a new image comes, we could predict the position of an existing ground feature with the help of the IMU-predicted relative pose
𝐩^c_k+1_f = 𝐑̂^c_k+1_c_k𝐩^c_k_f + 𝐩̂ ^c_k+1_c_k
where (𝐑̂^c_k+1_c_k, 𝐩̂ ^c_k+1_c_k) is the relative pose estimated by IMU integration. Combining (<ref>) with (<ref>), the prediction of ground features could be performed on either the perspective image or the BEV image, which could limit the search region of optical flow tracking to several pixels, thereby greatly improving the tracking performance.
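The prediction step can be sketched in a few lines of numpy: the feature's metric position in frame k (known from the C-G geometry) is propagated with the IMU-predicted relative pose and re-projected to seed the search window in frame k+1. The poses and coordinates below are placeholders, not values from the paper.

import numpy as np
from scipy.spatial.transform import Rotation

p_ck_f = np.array([0.5, 1.78, 8.0])            # ground feature in camera frame k [m]
R_ck1_ck = Rotation.from_euler("y", 1.0, degrees=True).as_matrix()   # IMU-predicted rotation
p_ck1_ck = np.array([0.0, 0.0, -1.2])          # IMU-predicted translation (frame-k origin in frame k+1)

p_ck1_f = R_ck1_ck @ p_ck_f + p_ck1_ck         # propagate the 3-D point to frame k+1
u_pred = p_ck1_f[:2] / p_ck1_f[2]              # predicted normalized image coordinates
print(u_pred)                                  # seeds the optical-flow search in frame k+1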
In Ground-VIO, we choose to extract and track features on the BEV image, for the reason that the BEV image is less “warped” and has better tracking consistency. In fact, the KLT optical flow tracking doesn't guarantee scale and rotation invariance, with a failure case illustrated in Fig. <ref>. Fortunately, the IPM could recover the metric-scale geometry of ground features and eliminate most of the scaling effect during fast motion, thus contributing to better tracking precision. Fig. 6 illustrates the tracking of ground features on the BEV image with IMU-aided feature prediction. In addition, a homography matrix-based RANSAC method is used to efficiently detect outlier feature trackings<cit.>. In our implementation, we mainly focus on the rectangle area (15 m far and ± 3 m wide with 0.015 m spatial resolution) in front of the vehicle-mounted camera, which facilitates multi-frame tracking of the ground features during regular vehicle maneuvers and guarantees good tracking precision.
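A possible OpenCV realization of this front-end step is sketched below: KLT optical flow on consecutive BEV images, seeded by the IMU-aided prediction, followed by homography-based RANSAC to reject outliers. The function and parameter choices (window size, pyramid levels, RANSAC threshold) are illustrative assumptions rather than the values used in Ground-VIO; bev_prev/bev_curr are 8-bit grayscale BEV images and the point arrays are Nx1x2 float32, as OpenCV expects.

import cv2
import numpy as np

def track_ground_features(bev_prev, bev_curr, pts_prev, pts_pred):
    # KLT tracking on the BEV image, starting from the IMU-aided prediction.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(
        bev_prev, bev_curr, pts_prev, pts_pred.copy(),
        winSize=(21, 21), maxLevel=2, flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    good = status.ravel() == 1
    p0, p1 = pts_prev[good], pts_curr[good]
    # Homography-based RANSAC to reject trackings that violate the planar-ground assumption.
    if len(p0) >= 4:
        _, inliers = cv2.findHomography(p0, p1, cv2.RANSAC, 3.0)
        if inliers is not None:
            keep = inliers.ravel() == 1
            p0, p1 = p0[keep], p1[keep]
    return p0, p1                               # inlier correspondences on the BEV image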
After the feature processing on the BEV image, the obtained ground features are re-mapped to the perspective image through the inverse process of IPM, thus these features could be processed consistently with the common features. By doing so, the tracking on the BEV image helps improve the tracking precision without introducing systematic errors related to the C-G parameters. During the operation of VIO, the C-G parameters used for ground feature processing could be continuously updated.
Intuitively, to extract and track the ground features, the ground region of the image needs to be identified. This is not a hard task when applying deep learning-based semantic segmentation<cit.><cit.>, which performs well in vehicular scenarios. In the proposed method, however, semantic segmentation is optional.
The IPM processing itself excludes most objects that are not on the ground surface, and the accurate feature prediction based on the C-G parameters, together with the RANSAC method, rejects outliers on the BEV image (e.g., vehicles, guardrails). Later, in factor graph optimization, the influence of gross errors is further mitigated through outlier detection methods. Therefore, although semantic segmentation contributes to the best performance of the system, it is not necessary.
In factor graph optimization, the extracted ground features are treated as a subset of visual features to construct the visual re-projection factors, while additional camera-ground constraints would be applied to them.
§.§ Optimization-based Visual-Inertial Odometry
Adhering to<cit.>, we maintain a sliding-window factor graph to simultaneously estimate the navigation states, landmarks and, additionally, the C-G parameters by optimizing different kinds of measurements.
The state vector of Ground-VIO is defined as follows
𝒳 = (𝐱_0 , 𝐱_1 , ⋯ , 𝐱_n , 𝐱_cg , λ_0 , λ_1 , ⋯ , λ_m )
𝐱_k = (𝐩^w_b_k , 𝐪^w_b_k , 𝐯^w_b_k , 𝐛_a,b_k, 𝐛_g,b_k), k ∈[0, n]
𝐱_cg = ( h, θ, α)
where 𝐩^w_b_k, 𝐪^w_b_k, 𝐯^w_b_k are the position, attitude and velocity of the k-th frame expressed in the world frame, 𝐛_a,b_k and 𝐛_g,b_k are the accelerometer bias vector and the gyroscope drift vector, λ_0, λ_1, ⋯, λ_m are the inverse depths of the landmarks. Each landmark is anchored in the first observation frame within the sliding window.
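For readability, the sliding-window state above can be pictured as the following plain-Python container (a schematic, not the authors' C++ data structures): n+1 per-frame IMU states, the C-G parameter block, and one inverse depth per landmark anchored in its first observation frame.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class FrameState:
    p_wb: np.ndarray        # position of the body in the world frame (3,)
    q_wb: np.ndarray        # attitude quaternion [x, y, z, w]
    v_wb: np.ndarray        # velocity in the world frame (3,)
    b_a: np.ndarray         # accelerometer bias (3,)
    b_g: np.ndarray         # gyroscope bias (3,)

@dataclass
class SlidingWindowState:
    frames: list                                    # n + 1 FrameState objects
    cg: tuple                                       # C-G parameters (h, theta, alpha)
    inv_depths: dict = field(default_factory=dict)  # landmark id -> inverse depth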
The following factors are considered in the optimization:
1) IMU preintegration factor:
The IMU data between frames are preintegrated and used to construct the IMU preintegration factors.
The residual could be expressed as
𝐫_IMU(α̂ ^b_k_b_k+1,β̂ ^b_k_b_k+1,γ̂ ^b_k_b_k+1,𝐱_k,𝐱_k+1)=
[𝐑^w_b_k^⊤( 𝐩^w_b_k+1 - 𝐩^w_b_k+ 1/2𝐠^wΔt^2_k - 𝐯^w_b_kΔ t_k - α̂^b_k_b_k+1)
𝐑^w_b_k^⊤( 𝐯^w_b_k+1 + 𝐠^w Δ t_k - 𝐯^w_b_k) - β̂ ^b_k_b_k+1
2[ 𝐪^w_b_k^-1⊗𝐪^w_b_k+1⊗(γ̂ ^b_k_b_k+1)^-1]_xyz
𝐛_a,b_k+1 - 𝐛_a,b_k
𝐛_g,b_k+1 - 𝐛_g,b_k]
where k and k+1 are the epochs of adjacent frames, Δ t_k is the time interval, α̂ ^b_k_b_k+1, β̂ ^b_k_b_k+1, γ̂ ^b_k_b_k+1 are the IMU preintegration terms<cit.>.
The IMU measurements provide stable relative pose information based on the navigation state estimation, but they alone couldn't measure the absolute values of the translation and the velocity. When combined with visual measurements in VIO, metric-scale translation/velocity could be derived as long as the IMU is sufficiently excited<cit.>.
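The structure of this factor can be sketched with numpy and scipy as below, keeping only the position, velocity and attitude rows (the bias rows are simple differences). This is a didactic sketch under an assumed gravity convention, not a replacement for an on-manifold implementation.

import numpy as np
from scipy.spatial.transform import Rotation as Rot

g_w = np.array([0.0, 0.0, 9.81])          # gravity in the world frame (sign convention assumed)

def imu_residual(x_k, x_k1, alpha_hat, beta_hat, gamma_hat, dt):
    """x_* = (p, q, v) with q as [x, y, z, w]; *_hat are the preintegration terms."""
    p_k, q_k, v_k = x_k
    p_k1, q_k1, v_k1 = x_k1
    R_k = Rot.from_quat(q_k).as_matrix()
    r_p = R_k.T @ (p_k1 - p_k + 0.5 * g_w * dt**2 - v_k * dt) - alpha_hat
    r_v = R_k.T @ (v_k1 + g_w * dt - v_k) - beta_hat
    dq = Rot.from_quat(q_k).inv() * Rot.from_quat(q_k1) * Rot.from_quat(gamma_hat).inv()
    r_q = 2.0 * dq.as_quat()[:3]          # imaginary part of the error quaternion
    return np.concatenate([r_p, r_v, r_q])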
2) Visual re-projection factor:
The visual features maintained in the sliding window, including the ground features, are used to construct the visual re-projection factors. The residual could be expressed as
𝐫_cam(𝐮^i_f, 𝐮^j_f, 𝐱_i, 𝐱_j, λ_f) = [ (𝐩^c_j_f)_x/(𝐩^c_j_f)_z   (𝐩^c_j_f)_y/(𝐩^c_j_f)_z ]^⊤ - 𝐮^j_f
with
𝐩^c_j_f = 𝐑^b_c^⊤(𝐑^w_b_j^⊤( 𝐑^w_b_i(𝐑^b_c(𝐮^i_f/λ_f) + 𝐩^b_c) + 𝐩^w_b_i - 𝐩^w_b_j) - 𝐩^b_c)
where 𝐮^i_f and 𝐮^j_f are visual observations of f at epoch i and j, (𝐑^b_c, 𝐩^b_c) are the IMU-camera extrinsic parameters.
The visual measurements are used to strongly constrain the vehicle poses and landmark positions through a bundle-adjustment (BA)-like model.
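A numpy sketch of this residual is given below: the feature anchored in frame i with inverse depth λ_f is lifted to 3-D, transformed through the world frame into camera frame j, and compared with its observation there. Quaternions are in [x, y, z, w] order and the camera-IMU extrinsics (R_bc, p_bc) are inputs; the sketch only illustrates the factor and is not the system's implementation.

import numpy as np
from scipy.spatial.transform import Rotation as Rot

def reprojection_residual(u_i, u_j, pose_i, pose_j, lam, R_bc, p_bc):
    p_wi, q_wi = pose_i
    p_wj, q_wj = pose_j
    R_wi = Rot.from_quat(q_wi).as_matrix()
    R_wj = Rot.from_quat(q_wj).as_matrix()
    p_w = R_wi @ (R_bc @ (u_i / lam) + p_bc) + p_wi     # feature in the world frame
    p_bj = R_wj.T @ (p_w - p_wj)                        # into body frame j
    p_cj = R_bc.T @ (p_bj - p_bc)                       # into camera frame j
    return p_cj[:2] / p_cj[2] - u_j[:2]                 # 2-D re-projection residual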
3) Camera-ground constraint factor:
Camera-ground constraints are applied to the ground features maintained in the sliding window, based on the model in Sect. III.
In our implementation, there are two kinds of camera-ground constraint factors, depending on the anchor frame of the ground feature and the frame that the camera-ground constraint is applied, termed as the target frame.
If the anchor frame and the target frame are the same, the residual could be expressed as
r_C-G(𝐮^i_f,λ_f,𝐱_cg)= h -( 𝐑^c_c_(α,θ)^⊤𝐮^i_f/λ_f)_y
If the anchor frame and the target frame are different, the residual could be expressed as
r_C-G(𝐮^i_f,λ_f,𝐱_i,𝐱_j,𝐱_cg)= h -(𝐑^c_c_(α,θ)^⊤𝐩^c_j_f)_y
where the i-th frame is the anchor frame, the j-th frame is the target frame, and 𝐩^c_j_f refers to (<ref>).
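Both variants of the constraint reduce to a scalar residual and can be sketched as follows, where R_cg(α, θ) stands for 𝐑^c_c_(α,θ); its Euler-angle convention here is an illustrative assumption rather than the paper's exact definition.

import numpy as np
from scipy.spatial.transform import Rotation

def R_cg(alpha, theta):
    # Assumed axis order for the pitch/roll parameterization of R^c_c'(alpha, theta).
    return Rotation.from_euler("xz", [theta, alpha], degrees=True).as_matrix()

def cg_residual_same_frame(u_i, lam, h, theta, alpha):
    # Anchor frame and target frame coincide.
    return h - (R_cg(alpha, theta).T @ (np.asarray(u_i, float) / lam))[1]

def cg_residual_cross_frame(p_cj_f, h, theta, alpha):
    # Anchor and target frames differ; p_cj_f is the feature expressed in camera frame j.
    return h - (R_cg(alpha, theta).T @ np.asarray(p_cj_f, float))[1]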
By introducing the camera-ground geometric constraints into the estimator, the C-G parameters can be estimated and refined based on the information gained by the VIO. Once the C-G parameters have converged, the constraints reciprocally provide driftless, metric-scale geometric information to the VIO.
The mechanism of “letting some parameters converge and then using them to maintain the estimation performance” is similar to the treatment of the IMU biases.
However, the converged C-G parameters are expected to have a more sustained influence, because: 1) the IMU biases are time-variant whereas the C-G parameters are relatively stable, and 2) the C-G parameters act on the pose estimates directly, without requiring integration as the IMU measurements do.
The optimization problem can then be expressed as minimizing the above residuals together with the prior term:
min_𝒳 { ‖𝐫_p - 𝐇_p𝒳‖^2
+ ∑_k∈[0,n) ‖𝐫_IMU(α̂^b_k_b_k+1, β̂^b_k_b_k+1, γ̂^b_k_b_k+1, 𝐱_k, 𝐱_k+1)‖^2_𝐏_IMU
+ ∑_i<j∈[0,n], f∈ℱ ρ_H(‖𝐫_cam(𝐮^i_f, 𝐮^j_f, 𝐱_i, 𝐱_j, λ_f)‖^2_𝐏_cam)
+ ∑_i∈[0,n], f∈ℱ_g ρ_C(‖r_C-G(𝐮^i_f, λ_f, 𝐱_cg)‖^2_P_C-G)
+ ∑_i<j∈[0,n], f∈ℱ_g ρ_C(‖r_C-G(𝐮^i_f, λ_f, 𝐱_i, 𝐱_j, 𝐱_cg)‖^2_P_C-G) }
where (𝐫_p, 𝐇_p) is the prior information obtained from marginalization<cit.>,
ℱ is the set of landmarks maintained in the sliding window, ℱ_g is the set of ground landmarks, which is a subset of ℱ,
ρ_H(·) and ρ_C(·) are Huber and Cauchy kernel functions<cit.> respectively,
𝐏_IMU, 𝐏_cam, P_C-G are covariances/variances of the residuals. The ceres-solver<cit.> is employed to solve the optimization problem.
§.§ Initialization of Camera-Ground Parameters
If the C-G parameters are completely unknown at the beginning, the ground feature processing module cannot work properly and it is hard to construct accurate camera-ground constraint factors. In this case, the system needs to initialize the C-G parameters online.
It is recognized that monocular VIO has the capability to perceive metric-scale environmental structure given enough IMU excitation<cit.>. On this basis, it is entirely possible to initialize the C-G parameters online without auxiliary information from other sensors. The specific initialization procedure is presented as follows.
After the VIO initialization, the common VIO routines start to work. So far, without the knowledge of C-G parameters, the ground features could only be tracked on the perspective image (without IPM processing). To achieve this, a conservative region of interest (ROI) on the image is used, which is determined by the IMU-camera extrinsics and a rough vehicle height. For the initialization of C-G parameters, the uncertainties (i. e., variances) of the ground landmarks in the sliding window are periodically checked. Once enough ground landmarks below the uncertainty threshold are obtained, observations of these landmarks are stacked together to estimate the C-G parameters, following
(ĥ, θ̂, α̂) = argmin_(h,θ,α) ∑_i≤j∈[0,n], f∈ℱ_g ‖ h - [𝐑^c_c_(α,θ)^⊤(𝐑^c_j_c_i𝐮^c_i_f/λ_f + 𝐩^c_j_c_i)]_y ‖^2
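Operationally, this initialization is a small nonlinear least-squares problem. The sketch below solves it with scipy once a set of low-uncertainty ground landmarks is available; `points` holds the bracketed terms of the equation above (the landmarks expressed in the frames where the constraint is evaluated), the Euler convention is assumed, and all numeric values are placeholders.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, points):
    h, theta, alpha = x
    R = Rotation.from_euler("xz", [theta, alpha], degrees=True).as_matrix()  # assumed convention
    return np.array([h - (R.T @ p)[1] for p in points])

points = [np.array([0.3, 1.80, 6.0]),          # ground landmarks reconstructed by the VIO
          np.array([-0.4, 1.76, 9.0]),
          np.array([0.1, 1.79, 12.0]),
          np.array([0.6, 1.82, 4.5])]
x0 = np.array([1.5, 0.0, 0.0])                 # rough initial guess for (h, theta, alpha)
sol = least_squares(residuals, x0, args=(points,))
print("h, theta, alpha =", sol.x)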
After the initialization of the C-G parameters, the ground feature processing module is switched on for better tracking accuracy. At the same time, the camera-ground constraint factors are added to the factor graph to enhance the VIO, and the C-G parameters are further refined.
§.§ Dealing with Complex Road Conditions
In real-world scenarios, the road conditions could be relatively complex and don't conform to the ideal camera-ground geometric model depicted in Sect. III. Such conditions could be mainly covered by the following two cases: 1) attitude vibration of the vehicle caused by road irregularity or vehicle dynamics, 2) change of the road slope. These two cases are illustrated in Fig. <ref>, which would lead to systematic errors and affect the system performance if not carefully considered.
In the proposed system, several tricks are employed to mitigate the effect of these problems. To deal with high-frequency attitude vibration of the vehicle (Fig. <ref>(a)), we use the local IMU attitude estimation to compensate the C-G parameters temporarily, as shown in Fig. <ref>.
In our implementation, only the pitch component θ is compensated, which is more sensitive considering the ground region of interest (± 3 m wide, 15 m far).
To be specific, we use a 4-second window of historical pitch estimation to fit a quadratic curve and to calculate the pitch compensation of the current epoch, following
θ_comp = θ^w_b_k - θ̂^w_b_k
where θ^w_b_k is the current IMU pitch estimation, θ̂^w_b_k is the pitch predicted by curve fitting.
And when applying the camera-ground constraint factors in this frame, the C-G parameter θ is compensated temporarily
θ_k = θ + θ_comp
where θ_k is taken as the temporary C-G pitch parameter at epoch k. In this way, the noise of the camera-ground constraint caused by attitude vibration is significantly mitigated.
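The compensation itself is a short curve-fitting step; a possible numpy version is sketched below with synthetic pitch values (the 4-second window follows the text, everything else is illustrative).

import numpy as np

t_hist = np.linspace(-4.0, -0.1, 40)                 # timestamps of past pitch estimates [s]
pitch_hist = 0.02 * t_hist**2 + 0.10 * t_hist        # synthetic low-frequency pitch trend [deg]
pitch_now = 0.35                                     # current IMU pitch estimate [deg]

coeffs = np.polyfit(t_hist, pitch_hist, deg=2)       # fit a quadratic to the 4-s window
pitch_pred = np.polyval(coeffs, 0.0)                 # predicted pitch at the current epoch
theta_comp = pitch_now - pitch_pred                  # compensation term

theta = -1.15                                        # converged C-G pitch parameter [deg]
theta_k = theta + theta_comp                         # temporary value used for this frame
print(theta_k)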
To deal with the change of the road slope (Fig. <ref>(b)), firstly the ground feature processing could abandon some of the feature observations that don't conform to the planar ground assumption. Secondly, when constructing the camera-ground factors, we use a relatively strict outlier removal strategy, with a cut-off threshold plus a Cauchy kernel function, in order to counter gross errors caused by drastic slope changes.
§ SIMULATION TESTS
Simulation tests are conducted to evaluate the system performance in relatively ideal conditions. The advantage of simulation is that the vehicle-sensor alignment and the environmental geometry are precisely known, which facilitates more in-depth analysis.
The CARLA simulator<cit.>, which provides detailed 3-D scenes and realistic vehicle dynamics, is used to generate the vehicle poses and images. The IMU data are separately simulated based on B-spline fitting<cit.> of the 10 Hz ground-truth poses, with custom biases and noises added. The settings of the simulation are listed in TABLE <ref>.
The vehicle trajectories and the captured images in the simulation tests are shown in Fig. <ref>. The simulation consists of two sequences, namely S-A and S-B, corresponding to urban and highway environments respectively. The vehicle dynamics caused by the suspension system are considered, leading to up to ± 0.5^∘ vibration of the vehicle attitude.
Different schemes of VIO are tested on the simulated data sequences, including: 1) VINS-Fusion (monocular), 2) VINS-Fusion (stereo), 3) VINS-Fusion (monocular) with ground features, 4) OpenVINS (monocular), 5) ORB-SLAM3 (monocular, with IMU) and 6) the proposed Ground-VIO. For VINS-Fusion (monocular) with ground features, the ground feature processing module is employed, in which ground-truth C-G parameters are applied for comparison. For Ground-VIO, the C-G parameters are unknown and would be estimated online.
For VINS-Fusion-based solutions and Ground-VIO, a 50-ms maximum optimization time limit (single thread, Intel i7-6700K) is set to guarantee real-time processing and provide a more equitable comparison. The maximum feature number of front-end common feature processing is 250, while the maximum number of ground features is set to 40. For an ideal analysis, the semantic images are used to determine the ground region in the simulation tests.
First, the focus is put on the estimation of the C-G parameters. The convergence of the C-G parameters on Seq. S-A and Seq. S-B is shown in Fig. <ref>. In both sequences, the initialization of the C-G parameters is completed within 10 seconds with <0.1 m error in h and <1^∘ error in θ and α, as long as enough geometric information is derived by the VIO system. After the initialization, the camera-ground geometric constraints are enabled in the factor graph, and the C-G parameters continue to be refined. With moderate vehicle dynamics and ideal planar grounds in the simulation, the C-G parameters converge in a very short time (10 s for Seq. S-A and 20 s for Seq. S-B) and achieve good accuracy (0.01 m for h, 0.1^∘ for θ and α). The convergence on Seq. S-A is better, which can be attributed to more available ground features. It is noted that the estimation accuracy of α (roll) is worse than that of θ (pitch), which is reasonable considering the region of interest (15 m far and ± 3 m wide) and the fact that only the pitch vibration is compensated (Sect. IV-E).
Secondly, we check the consistency between the landmark depths estimated by VIO and the ground-truth C-G parameters to analyze the model accuracy. To be specific, the residuals of single-frame camera-ground geometric constraints (<ref>) are calculated using ground-truth C-G parameters and estimated landmark depths. As shown in Fig. <ref>, if the constraints are not applied (VINS-Fusion), the estimated landmark depths don't fit the camera-ground geometry well. The distribution of the residuals reflect the error of scale estimation, which could be over the level of 0.1/h ≈ 5%. Once the camera-ground geometric constraints are taken into account (Ground-VIO), the residuals are consequently kept to around 0, which indicate an unbiased estimation of the scale. Furthermore, with the local compensation of the vehicle pitch, the noise level of the residuals is significantly lowered. This indicates better accuracy of the compensated model, as it compensates much of the model errors caused by vehicle dynamics.
Finally, the pose estimation accuracy of the different solutions is investigated. The estimated vehicle trajectories are shown in Fig. <ref>, and the distributions of the relative translation errors are shown in Fig. <ref>. Considering that different VIO solutions need different amounts of time (several seconds) to initialize, we align the estimated vehicle poses at t = 10 s when plotting the trajectories in Fig. <ref>.
It could be found that, the attitude estimations of different solutions are comparable, yet almost all the monocular VIO solutions without C-G constraints show significant translation errors. As the scale observability is closely related to the dynamics in a monocular VIO, the long-time straight motions lead to inevitable scale drifting. The solution of VINS-Fusion (monocular) with ground features slightly improves the translation accuracy by introducing more stable features, but still suffers significant drift of the scale. For Ground-VIO, after the C-G parameters get converged in the beginning dynamic period (with acceleration and rotation), the camera-ground geometry could then provide unbiased information of the metric scale and helps VIO maintain accurate translation estimation. Consequently, the monocular Ground-VIO achieves superior translation estimation performance (relative error < 0.5%) without introducing any other sensors, which is incredible considering the insufficient dynamics of a ground vehicle in the road environment.
The statistics of the navigation performance of different VIO solutions are listed in TABLE <ref>. The relative translation and rotation errors are calculated by averaging all possible subsequences of length (100, ..., 800) meters, referring to<cit.>. The absolute trajectory error is calculated referring to<cit.>.
§ REAL-WORLD EXPERIMENTS
Real-world experiments were conducted on Oct. 12, 2022 to evaluate the performance of the proposed system under typical vehicular scenarios, including urban roads and highways. The appearance of the experimental vehicle is shown in Fig. <ref>. The experimental platform is equipped with two Flir BFS-PGE-31S4C cameras, a low-cost ADIS16470 MEMS IMU, a tactical grade XW-GI7660 IMU and a Septentrio AsteRx4 GNSS receiver. The data from the tactical grade IMU and the GNSS receiver (with base station availability) are post-processed to generate the reference trajectory. The specifications of the used IMUs are listed in TABLE <ref>.
The reason for using self-collected datasets is better representativeness of the evaluation, i.e., a low-cost visual-inertial sensor scheme under realistic vehicular scenarios, with a particular focus on feature-lacking highway scenarios with limited dynamics. To make our work accessible and reproducible, we will make the experimental data publicly available.
Notice that in the real-world experiments, we don't apply a semantic segmentation module but rely on the system itself to distinguish the ground features and resist possible outliers.
§.§ VIO with Unknown C-G Parameters
In this part, we evaluate the proposed system under the condition that the C-G parameters are completely unknown. In this case, the C-G parameters would be initialized online and continuously estimated during the data periods. Four 180-sec data sequences with moderate vehicle dynamics, namely Seq. R-A, R-B, R-C and R-D, are used for the evaluation, as shown in Fig. <ref>. Different solutions of VIO are tested on the data sequences. Compared to the simulation test, state-of-art VIO implementations with stereo camera setups are considered in this part to investigate the best achievable VIO performance in these real-world road environments.
The convergence of the C-G parameters is shown in Fig. <ref>. For the four sequences, the final estimation results of the C-G parameters show good consistency. Later in this part, the average value of the four sets of obtained C-G parameters is taken as the reference, which is (1.7803 m, -1.151^∘, -0.153^∘) for (h, θ, α). It is found from Fig. <ref> that, the initialization of the parameters could be finished in a few seconds, and the initial accuracy is similar to the simulation tests (0.1 m for h, 1^∘ for θ, α). Yet differently, the convergence of the C-G parameters is slower than the simulation tests. To be specific, around 30∼60 seconds are needed to obtain ideal accuracy of the C-G parameters (0.01 m, 0.1^∘, 0.2^∘ for h, θ, α). This could be attributed to more complex road conditions and smaller IMU excitation in the real-world experiments, which affect both the camera-ground geometric constraint and the monocular VIO itself. Roughly speaking, better observability of the VIO system, sufficient ground features and smooth road surface could contribute to faster convergence of the C-G parameters.
The focus is then put on the pose estimation performance. Fig. <ref> and Fig. <ref> show the estimated vehicle trajectories and relative translation errors of the different VIO schemes. The detailed statistics of the VIO performance are listed in TABLE <ref>. Similar to the simulation tests, the monocular VIOs (except Ground-VIO) obtain good attitude estimation but perform poorly in translation due to scale drift, which is significant during long straight-line motions.
Comparatively, the stereo VIOs perform much better in terms of relative translation error, but they still suffer from significant pose errors, as can be seen in Fig. <ref>. To be specific, the filter-based OpenVINS (stereo) exhibits relatively large heading errors on the four sequences, while the optimization-based VINS-Fusion (stereo) performs poorly on Seq. R-A and R-D. The phenomenon that monocular VIO can outperform stereo VIO in attitude estimation can also be found in<cit.>. The ORB-SLAM3 (stereo, with IMU) scheme, although maintaining good heading estimation, undergoes non-negligible position drift. Its good attitude estimation performance can be attributed to the map-centered design of ORB-SLAM3, whose superiority is verified in<cit.>. However, the road environment, with few stable features and many moving objects, makes it difficult for ORB-SLAM3 to achieve ideal translation estimation.
In contrast, the proposed Ground-VIO shows good translation estimation performance with the help of the camera-ground geometric constraints. Although the C-G parameters are unknown at the beginning, the vehicle dynamics are sufficient to make them converge, after which they continuously take effect for the remaining period. This verifies that the camera-ground geometry, like stereo vision but in a different way, helps maintain precise and unbiased scale estimation in realistic vehicular scenarios. Overall, Ground-VIO achieves a relative translation error (0.5%∼1.0%) comparable to state-of-the-art stereo VIO schemes, and its attitude estimation performance is even better. Thus, Ground-VIO achieves the smallest position drift on almost all of the four sequences.
In all, with moderate vehicle dynamics, the proposed Ground-VIO is able to online calibrate the C-G parameters and obtain good pose estimation accuracy simultaneously.
§.§ Pre-Calibrated VIO under Challenging Scenarios
It has been mentioned that, the online calibration of the C-G parameters relys on the vehicle dynamics, since it needs the observability of VIO to extract metric-scale environmental structure. Fortunately, with pre-calibration of the C-G parameters, the VIO performance could also be greatly improved even under dynamic-insufficient scenarios. Actually, the pre-calibration is not hard, since it could be automatically finished when moderate vehicle dynamics are available, as verified in Sect. VI-A.
In this section, two highway data sequences, namely Seq. R-E and Seq. R-F, are used to test the system performance with pre-calibrated C-G parameters. The vehicle trajectories and the representative images are shown in Fig. <ref>. These two data sequences are extremely challenging for VIO, with limited dynamics, insufficient environmental features and high vehicle speed. These conditions could cause difficulty in both feature tracking and the observability of the VIO system. For Ground-VIO, the pre-calibrated C-G parameters are obtained from the online estimation results in Sect. VI-A.
The estimated vehicle trajectories and relative translation error distributions are shown in Fig. <ref> and Fig. <ref>. Despite our best efforts, some schemes can't work properly on the two sequences. To be specific, the OpenVINS (stereo) scheme can't successfully initialize on both sequences and the ORB-SLAM3 (stereo) scheme fails on Seq. R-F because of the difficulty in ORB feature matching, as the environmental textures are either weak or highly repetitive (e.g. building windows, guardrails).
The pose estimation results are presented in Fig. <ref> and Fig. <ref>. As shown in Fig. <ref>, the monocular VIOs perform poorly in translation estimation on Seq. R-E, reaching a relative error over 10%. Unexpectedly, state-of-the-art stereo VIO schemes are also unable to achieve good pose estimation on these sequences, even though stereo vision can provide accurate scale information in principle. This is mainly due to the lack of high-quality visual features in the highway environment, and the stereo matching even increases the risk of introducing gross errors. In contrast, with the pre-calibrated C-G parameters, the proposed Ground-VIO achieves a remarkable 1% relative translation error (on average). The camera-ground geometry not only provides unbiased scale information to the monocular VIO system, but also provides stable features via the specially designed ground feature processing module. In the highway environment, most visual features are far away and cannot provide accurate translation information. The ground feature processing module makes it possible to fully utilize the landmarks on the road (e.g., road markings, shadows) to mitigate this ill-conditioning, and the depths of these landmarks are effectively known. As a result, the estimation of the vehicle velocity is effectively constrained, which contributes to accurate pose estimation.
Similar results are found on the more challenging Seq. R-F. As shown in Fig. <ref>, Ground-VIO greatly outperforms state-of-the-art monocular and stereo VIO schemes, achieving a 1% relative translation error (on average).
The statistics of the pose estimation error on Seq. R-E and R-F are listed in TABLE <ref>.
§.§ IPM Calibration Performance
To some extent, the online estimation of C-G parameters is equivalent to online calibrating IPM, which is widely used in vehicle perception.
In this part, apart from the odometry performance, the effectiveness of the online IPM calibration is investigated qualitatively.
To be specific, the C-G parameters estimated in Sect. VI-A are used for IPM processing of an image sequence. Through IPM, the images are transformed into metric-scale point clouds with color information. Based on the camera poses obtained by Ground-VIO, the point clouds are merged in a common reference frame. The consistency of the merged point cloud directly reflects the accuracy of the IPM.
Fig. <ref> shows the merged point clouds based on different C-G parameters. It could be seen that, with residual error on either attitude or height component of the C-G parameters, the merged point cloud is fuzzy and has stitching errors. This reflects the inconsistency of multiple point clouds, which further indicates that the generated IPM point clouds are not geometrically accurate. Roughly speaking, the longitudinal errors could reach over meter-level. In contrast, with calibrated C-G parameters, the merged point cloud is much more consistent and less fuzzy. Furthermore, once the IMU pitch compensation is applied, the details of the point cloud become clearer, which verifies the accuracy of the camera-ground geometric model. After the IPM calibration, a 10∼15 meter effective perception range, with decimeter to centimeter level accuracy, could be expected.
In all, the proposed algorithm provides an approach to calibrate the IPM parameters of vehicle-mounted cameras online, based only on monocular visual-inertial data and without the need for extra infrastructure.
§ CONCLUSION
In this work, we presented Ground-VIO, which introduces the camera-ground geometry into monocular VIO to improve the odometry performance. The proposed system works well with either unknown or pre-calibrated C-G parameters, achieving odometry accuracy comparable to or even better than state-of-the-art stereo VIOs in vehicular scenarios. The method is expected to significantly improve the practicality of VIO in intelligent vehicle applications and can serve as an effective supplement to existing vehicle navigation schemes.
Besides, the proposed method provides an efficient way to perform online IPM calibration based only on monocular visual-inertial data. The auto-calibration does not need extra infrastructure and can handle long-term changes in the sensor alignment, which is meaningful for better vehicle perception. In future work, we plan to apply this technique in vision-based crowd-sourced mapping applications.
§ ACKNOWLEDGMENTS
The implemented Ground-VIO is part of the GREAT (GNSS+ REsearch, Application and Teaching) software developed by the GREAT Group, School of Geodesy and Geomatics, Wuhan University.
ref_kitti
A. Geiger, P. Lenz and R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 2012, pp. 3354-3361.
ref_autonomous_driving
J. Levinson et al., “Towards fully autonomous driving: Systems and algorithms,” 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 2011, pp. 163-168.
ref_od
X. Chen, K. Kundu, Z. Zhang, H. Ma, S. Fidler and R. Urtasun, “Monocular 3D Object Detection for Autonomous Driving,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 2016, pp. 2147-2156.
ref_sp
K. Muhammad et al., “Vision-Based Semantic Segmentation in Scene Understanding for Autonomous Driving: Recent Achievements, Challenges, and Outlooks,” in IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 12, pp. 22694-22715, Dec. 2022.
ref_vslam1
C. Cadena et al., “Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age,” in IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309-1332, Dec. 2016.
ref_vslam2
H. Lategahn, A. Geiger and B. Kitt, “Visual SLAM for autonomous ground vehicles,” 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 2011, pp. 1732-1737.
ref_vio
G. Huang, “Visual-Inertial Navigation: A Concise Review,” 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 2019, pp. 9572-9582.
ref_vinsmono
T. Qin, P. Li and S. Shen, “VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator,” in IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004-1020, Aug. 2018.
ref_smsckf
K. Sun et al., “Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 965-972, Apr. 2018.
ref_observability1
A. Martinelli, “Visual-inertial structure from motion: Observability and resolvability,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 4235-4242, Nov. 2013.
ref_observability2
J. Hernandez, K. Tsotsos and S. Soatto, “Observability identifiability and sensitivity of vision-aided inertial navigation,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pp. 2319-2325, May 2015.
ref_observability3
Y. Yang and G. Huang, “Observability Analysis of Aided INS With Heterogeneous Features of Points, Lines, and Planes,” in IEEE Transactions on Robotics, vol. 35, no. 6, pp. 1399-1418, Dec. 2019.
ref_msf1
W. Lee, Y. Yang and G. Huang, “Efficient Multi-sensor Aided Inertial Navigation with Online Calibration,” 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China, 2021, pp. 5706-5712.
ref_msf2
J. H. Jung et al., “Monocular Visual-Inertial-Wheel Odometry Using Low-Grade IMU in Urban Areas,” in IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 2, pp. 925-938, Feb. 2022.
ref_msf3
S. Li, X. Li, H. Wang, Y. Zhou and Z. Shen,
“Multi-GNSS PPP/INS/Vision/LiDAR tightly integrated system for precise navigation in urban environments,” Information Fusion, vol. 90, 2023, pp. 218-232.
ref_IPM1
J. Jeong and A. Kim, “Adaptive Inverse Perspective Mapping for lane map generation with SLAM,” 2016 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Xi'an, China, 2016, pp. 38-41.
ref_IPM2
J. Wang, T. Mei, B. Kong and H. Wei, “An approach of lane detection based on Inverse Perspective Mapping,” 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), Qingdao, 2014, pp. 35-38.
ref_avm
Y. -L. Chang, L. -Y. Hsu and O. T. . -C. Chen, “Auto-Calibration Around-View Monitoring System,” 2013 IEEE 78th Vehicular Technology Conference (VTC Fall), Las Vegas, NV, USA, 2013, pp. 1-5.
ref_ipm_map1
J. Jeong, Y. Cho and A. Kim, “Road-SLAM : Road marking based SLAM with lane-level accuracy,” 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 2017, pp. 1736-1473.
ref_ipm_map2
Y. Zhou, X. Li, S. Li and X. Wang, “Visual Mapping and Localization System Based on Compact Instance-Level Road Markings With Spatial Uncertainty,” in IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 10802-10809, Oct. 2022.
ref_ipm_perception
N. Gosala and A. Valada, “Bird’s-Eye-View Panoptic Segmentation Using Monocular Frontal View Images,” in IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 1968-1975, April 2022.
ref_msckf
A. I. Mourikis and S. I. Roumeliotis, “A Multi-State Constraint Kalman Filter for Vision-aided Inertial Navigation,” Proceedings 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, 2007, pp. 3565-3572.
ref_msckf2
M. Li and A. I. Mourikis, “Improving the accuracy of EKF-based visual-inertial odometry,” 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 2012, pp. 828-835.
ref_mimc
K. Eckenhoff, P. Geneva and G. Huang, “MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera Visual-Inertial Navigation System,” in IEEE Transactions on Robotics, vol. 37, no. 5, pp. 1360-1380, Oct. 2021.
ref_openvins
P. Geneva, K. Eckenhoff, W. Lee, Y. Yang and G. Huang, “OpenVINS: A Research Platform for Visual-Inertial Estimation,” 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020, pp. 4666-4672.
ref_preintegration
C. Forster, L. Carlone, F. Dellaert and D. Scaramuzza, “On-Manifold Preintegration for Real-Time Visual–Inertial Odometry,” in IEEE Transactions on Robotics, vol. 33, no. 1, pp. 1-21, Feb. 2017.
ref_okvis
S. Leutenegger, S. Lynen, M. Bosse, R. Siegwart, P. Furgale, “Keyframe-based visual-inertial odometry using nonlinear optimization”, International Journal of Robotics Research (IJRR), 2014.
ref_vinsfusion
Tong Qin, Jie Pan, Shaozu Cao, Shaojie Shen, “A General Optimization-based Framework for Local Odometry Estimation with Multiple Sensors”, arXiv:1901.03638 [cs.CV], Jan. 2019.
ref_orbslam3
C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. M. Montiel and J. D. Tardós, “ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial, and Multimap SLAM,” in IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1874-1890, Dec. 2021.
ref_se2_gf
P. Zhou, Y. Liu, P. Gu, J. Liu and Z. Meng, “Visual Localization and Mapping Leveraging the Constraints of Local Ground Manifolds,” in IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 4196-4203, April 2022.
ref_lidar_se2
K. Konolige, G. Grisetti, R. Kümmerle, W. Burgard, B. Limketkai and R. Vincent, “Efficient Sparse Pose Adjustment for 2D mapping,” 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 2010, pp. 22-29.
ref_nhc1
D. Scaramuzza, F. Fraundorfer and R. Siegwart, “Real-time monocular visual odometry for on-road vehicles with 1-point RANSAC,” 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 2009, pp. 4293-4299.
ref_se2_1
F. Zheng and Y. -H. Liu, “SE(2)-Constrained Visual Inertial Fusion for Ground Vehicles,” in IEEE Sensors Journal, vol. 18, no. 23, pp. 9699-9707, 1 Dec.1, 2018.
ref_se2_2
F. Zheng and Y. -H. Liu, “Visual-Odometric Localization and Mapping for Ground Vehicles Using SE(2)-XYZ Constraints,” 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 2019, pp. 3556-3562.
ref_se2_3
R. Kang, L. Xiong, M. Xu, J. Zhao and P. Zhang, “VINS-Vehicle: A Tightly-Coupled Vehicle Dynamics Extension to Visual-Inertial State Estimator,” 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 2019, pp. 3593-3600.
ref_se2_4
M. Zhang, X. Zuo, Y. Chen, Y. Liu and M. Li, “Pose Estimation for Ground Robots: On Manifold Representation, Integration, Reparameterization, and Optimization,” in IEEE Transactions on Robotics, vol. 37, no. 4, pp. 1081-1099, Aug. 2021.
ref_gf
M. Ouyang, Z. Cao, P. Guan, Z. Li, C. Zhou and J. Yu, “Visual-Gyroscope-Wheel Odometry With Ground Plane Constraint for Indoor Robots in Dynamic Environment,” in IEEE Sensors Letters, vol. 5, no. 3, pp. 1-4, March 2021, Art no. 6000504.
ref_gc1
S. Song, M. Chandraker and C. C. Guest, “High Accuracy Monocular SFM and Scale Correction for Autonomous Driving,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 4, pp. 730-743, 1 April 2016.
ref_gc2
B. Lee, K. Daniilidis and D. D. Lee, “Online self-supervised monocular visual odometry for ground vehicles,” 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 2015, pp. 5232-5238.
ref_gc3
R. Tian, Y. Zhang, D. Zhu, S. Liang, S. Coleman and D. Kerr, “Accurate and Robust Scale Recovery for Monocular Visual Odometry Based on Plane Geometry,” 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China, 2021, pp. 5296-5302.
ref_goodfeature
J. Shi and C. Tomasi, “Good features to track,” in Proc. IEEE Int. Conf. Pattern Recog., pp. 593-600, 1994.
ref_klt
B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proc. Int. Joint Conf. Artif. Intell., pp. 24-28, Aug. 1981.
ref_cv_calib
G. Bradski, “The OpenCV Library.” [Online]. Available: https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html
ref_ss1
Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam, “Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation”, in Proc. Eur. Conf. Comput. Vis., 2018, pp. 801-818
ref_ss2
X. Li et al., “Semantic flow for fast and accurate scene parsing,” in Proc. Eur. Conf. Comput. Vis., 2020, pp. 775-793.
ref_ceres
S. Agarwal and K. Mierle, “Ceres solver.” [Online]. Available: http://ceres-solver.org
ref_carla
A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez and V. Koltun, “CARLA: An Open Urban Driving Simulator,” Proceedings of the 1st Annual Conference on Robot Learning ser. Proceedings of Machine Learning Research, vol. 78, pp. 1-16, Nov. 2017.
|
http://arxiv.org/abs/2306.11832v1
|
20230620184324
|
QuOTeS: Query-Oriented Technical Summarization
|
[
"Juan Ramirez-Orta",
"Eduardo Xamena",
"Ana Maguitman",
"Axel J. Soto",
"Flavia P. Zanoto",
"Evangelos Milios"
] |
cs.IR
|
[
"cs.IR",
"cs.CL"
] |
Department of Computer Science, Dalhousie University Institute of Research in Social Sciences and Humanities (ICSOH) Universidad Nacional de Salta - CONICET Escrever Ciência Institute for Computer Science and Engineering, UNS - CONICET Department of Computer Science and Engineering, Universidad Nacional del Sur
QuOTeS: Query-Oriented Technical Summarization
Juan Ramirez-Orta1 Please send correspondence to <[email protected]> Eduardo Xamena2,3 Ana Maguitman5,6 Axel J. Soto5,6 Flavia P. Zanoto4 Evangelos Milios1
July 31, 2023
============================================================================================================================================================================
When writing an academic paper, researchers often spend considerable time reviewing and summarizing papers to extract relevant citations and data to compose the Introduction and Related Work sections. To address this problem, we propose QuOTeS, an interactive system designed to retrieve sentences related to a summary of the research from a collection of potential references and hence assist in the composition of new papers. QuOTeS integrates techniques from Query-Focused Extractive Summarization and High-Recall Information Retrieval to provide Interactive Query-Focused Summarization of scientific documents. To measure the performance of our system, we carried out a comprehensive user study where participants uploaded papers related to their research and evaluated the system in terms of its usability and the quality of the summaries it produces. The results show that QuOTeS provides a positive user experience and consistently provides query-focused summaries that are relevant, concise, and complete.
We share the code of our system and the novel Query-Focused Summarization dataset collected during our experiments at <https://github.com/jarobyte91/quotes>.
§ INTRODUCTION
When writing an academic paper, researchers often spend substantial time reviewing and summarizing papers to shape the Introduction and Related Work sections of their upcoming research. Given the ever-increasing number of academic publications available every year, this task has become very difficult and time-consuming, even for experienced researchers. A solution to this problem is to use Automatic Summarization systems, which take a long document or a collection of documents as input and produce a shorter text that conveys the same information.
The summaries produced by such systems are evaluated by measuring their fluency, coherence, conciseness, and completeness. To this end, Automatic Summarization systems can be divided into two categories, depending on their output. In Extractive Summarization, the purpose of the system is to highlight or extract passages present in the original text, so the summaries are usually more coherent and complete. On the other hand, in Abstractive Summarization, the system generates the summary by introducing words that are not necessarily in the original text. Hence, the summaries are usually more fluent and concise. Although there have been significant advances recently <cit.>, these complementary approaches share the same weakness: it is very hard for users to evaluate the quality of an automatic summary because it means that they have to go back to the original documents and verify that the system extracted the correct information.
Since evaluating summarization systems by hand is very difficult, several automatic metrics have been created with this purpose: BLEU <cit.>, ROUGE <cit.>, and METEOR <cit.> all aim to measure the quality of the summary produced by the system by comparing it with a reference summary via the distribution of its word n-grams. Despite being very convenient and popular, all these automatic metrics have a significant drawback: since they only look at the differences in the distribution of words between the system's summary and the reference summary, they are not useful when the two summaries are worded differently, which is not necessarily a sign that the system is performing poorly.
Therefore, although Automatic Summarization systems display high performance when evaluated on benchmark datasets <cit.>, they often cannot satisfy their users' needs, given the inherent difficulty and ambiguity of the task <cit.>. An alternative approach to make systems more user-centric is Query-Focused Summarization <cit.>, in which the users submit a query into the system to guide the summarization process and tailor it to their needs. Another alternative approach to this end is Interactive Summarization <cit.>, in which the system produces an iteratively improved summary. Both of these approaches, and several others, take into account that the correct summary given a document collection depends on both the users and what they are looking for.
In this paper, we introduce QuOTeS, an interactive system designed to retrieve sentences relevant to a paragraph from a collection of academic articles to assist in the composition of new papers. QuOTeS integrates techniques from Query-Focused Extractive Summarization <cit.> and High-Recall Information Retrieval <cit.> to provide Interactive Query-Focused Summarization of scientific documents. An overview of how QuOTeS works and its components is shown in Fig. <ref>.
The main difficulty when creating a system like QuOTeS in a supervised manner is the lack of training data: gathering enough training examples would require having expert scientists carefully read several academic papers and manually label each one of their sentences concerning their relevance to the query, which would take substantial human effort. Therefore, we propose QuOTeS as a self-service tool: the users supply their academic papers (usually as PDFs), and QuOTeS provides an end-to-end service to aid them in the retrieval process. This paper includes the following contributions:
* A novel Interactive Query-Focused Summarization system that receives a short paragraph (called query) and a collection of academic documents as input and returns the sentences related to the query from the documents in the collection. The system extracts the text directly from the academic documents provided by the user at runtime, minimizing the effort needed to perform complex queries on the text present in the documents. Finally, the system features techniques from High-Recall Information Retrieval to maximize the number of relevant sentences retrieved.
* A novel dataset composed of (Query, Document Collection) pairs for the task of Query-Focused Summarization of Scientific Documents, each one with five documents and hundreds of sentences, along with the relevance labels produced by real users.
* A comprehensive analysis of the data collected during a user study of the system, where the system was evaluated using the System Usability Scale <cit.> and custom questionnaires to measure its usability and the quality of the summaries it produces.
§ RELATED WORK
§.§ Query-Focused Summarization
The task of Query-Focused Summarization (QFS) was introduced in the 2005 Document Understanding Conference (DUC 2005) <cit.>. The focus of the conference was to develop new evaluation methods that take into account the variation of summaries produced by humans. Therefore, DUC 2005 had a single, user-oriented, question-focused summarization task that allowed the community to put some time and effort into helping with the new evaluation framework. The summarization task was to synthesize a well-organized and fluent answer to a complex question from a set of 25 to 50 documents. The relatively generous allowance of 250 words for each answer revealed how difficult it was for the systems to produce good multi-document summaries. The two subsequent editions of the conference (DUC 2006 <cit.> and DUC 2007 <cit.>) further enhanced the dataset produced in the first conference and have become the reference benchmark in the field.
Surprisingly, state-of-the-art algorithms designed for QFS do not significantly improve upon generic summarization methods when evaluated on traditional QFS datasets, as was shown in <cit.>. The authors hypothesized that this lack of success stems from the nature of the datasets, so they defined a novel method to quantify their Topic Concentration. Using their method, which is based on the ratio of sentences within the dataset that are already related to the query, they observed that the DUC datasets suffer from very high Topic Concentration. Therefore, they introduced TD-QFS, a new QFS dataset with controlled levels of Topic Concentration, and compared competitive baseline algorithms on it, reporting a solid improvement in performance for algorithms that model query relevance instead of generic summarizers. Finally, they presented three novel QFS algorithms (RelSum, ThresholdSum, and TFIDF-KLSum) that outperform, by a large margin, state-of-the-art QFS algorithms on the TD-QFS dataset.
A novel, unsupervised query-focused summarization method based on random walks over the graph of sentences in a document was introduced in <cit.>.
First, word importance scores for each target document are computed using a word-level random walk. Next, they use a siamese neural network to optimize localized sentence representations obtained as the weighted average of word embeddings, where the word importance scores determine the weights. Finally, they conducted a sentence-level query-biased random walk to select a sentence to be used as a summary. In their experiments, they constructed a small evaluation dataset for QFS of scientific documents and showed that their method achieves competitive performance compared to other embeddings.
§.§ High-Recall Information Retrieval
A novel evaluation toolkit that simulates a human reviewer in the loop was introduced in <cit.>. The work compared the effectiveness of three Machine Learning protocols for Technology-Assisted Review (TAR) used in document review for legal proceedings. It also addressed a central question in the deployment of TAR: should the initial training documents be selected randomly, or should they be selected using one or more deterministic methods, such as Keyword Search? To answer this question, they measured Recall as a function of human review effort on eight tasks.
Their results showed that the best strategy to minimize the human effort is to use keywords to select the initial documents in conjunction with deterministic methods to train the classifier.
Continuous Active Learning achieves high Recall for TAR, not only for an overall information need but also for various facets of that information, whether explicit or implicit, as shown in <cit.>. Through simulations using Cormack and Grossman’s Technology-Assisted Review Evaluation Toolkit <cit.>, the authors showed that Continuous Active Learning, applied to a multi-faceted topic, efficiently achieves high Recall for each facet of the topic. Their results also showed that Continuous Active Learning may achieve high overall Recall without sacrificing identifiable categories of relevant information.
A scalable version of the Continuous Active Learning protocol (S-CAL) was introduced in <cit.>. This novel variation requires O(log(N)) labeling effort and O(N log(N) ) computational effort — where N is the number of unlabeled training examples — to construct a classifier whose effectiveness for a given labeling cost compares favorably with previously reported methods. At the same time, S-CAL offers calibrated estimates of Class Prevalence, Recall, and Precision, facilitating both threshold setting and determination of the adequacy of the classifier.
§.§ Interactive Query-Focused Summarization
A novel system that provides summaries for Computer Science publications was introduced in <cit.>. Through a qualitative user study, the authors identified the most valuable scenarios for discovering, exploring, and understanding scientific documents. Based on these findings, they built a system that retrieves and summarizes scientific documents for a given information need, either in the form of a free-text query or by choosing categorized values such as scientific tasks, datasets, and more. The system processed 270,000 papers to train its summarization module, which aims to generate concise yet detailed summaries. Finally, they validated their approach with human experts.
A novel framework to incorporate users' feedback using a social robotics platform was introduced in <cit.>. Using the Nao robot (a programmable humanoid robot) as the interacting agent, they captured the user's expressions and eye movements and used it to train their system via Reinforcement Learning. The whole approach was then evaluated in terms of its adaptability and interactivity.
A novel approach that exploits the user's opinion in two stages was introduced in <cit.>. First, the query is refined by user-selected keywords, key phrases, and sentences extracted from the document collection. Then, it expands the query using a Genetic Algorithm, which ranks the final set of sentences using Maximal Marginal Relevance. To assess the performance of the proposed system, 45 graduate students in the field of Artificial Intelligence filled out a questionnaire after using the system on papers retrieved from the Artificial Intelligence category of The Web of Science. Finally, the quality of the final summaries was measured in terms of the user's perspective and redundancy, obtaining favorable results.
§ DESIGN GOALS
As shown in the previous section, there is a clear research gap in the literature: on the one hand, there exist effective systems for QFS, but on the other hand, none of them includes the user's feedback about the relevance of each sentence present in the summary. On top of that, the task of QFS of scientific documents remains a fairly unexplored discipline, given the difficulty of extracting the text present in academic documents and the human effort required to evaluate such systems, as shown by <cit.>. Considering these limitations and the guidelines obtained from an expert consultant in scientific writing from our team, we state the following design goals behind the development of QuOTeS:
* Receive a paragraph query and a collection of academic documents as input and return the sentences relevant to the query from the documents in the collection. Unlike previous works, QuOTeS is designed as an assistant in the task of writing Introduction and Related Work sections of papers in the making. To this end, the query inputted into the system is a short paragraph describing the upcoming work, which is a much more complex query than the one used in previous systems.
* Include the user in the retrieval loop. As shown by previous works, summarization systems benefit from being interactive. Since it is difficult to express all the information need in a single query, the system needs to have some form of adaptation to the user, either by requiring more information about the user's need (by some form of query expansion) or by incorporating the relevance labeling in the retrieval process.
* Provide a full end-to-end user experience in the sentence extraction process. So far, query-focused summarization systems have been mainly evaluated on data from the DUC conferences. A usable system should be able to extract the text from various documents provided by the user, which can only be determined at runtime. Since the main form to distribute academic documents is PDF files, the system needs to be well adapted to extract the text in the different layouts in academic publications.
* Maximize Recall in the retrieval process. Since the purpose of the system is to help the user retrieve the (possibly very) few relevant sentences from the hundreds of sentences in the collection, Recall is the most critical metric when using a system like QuOTeS, as users can always refine the output summary to adapt it to their needs. Therefore, we use Continuous Active Learning <cit.> as the training procedure for the classifier inside QuOTeS.
§ SYSTEM DESIGN
QuOTeS is a browser-based interactive system built with Python, mainly using the Dash package <cit.>. The methodology of the system is organized into seven steps that allow the users to upload, search and explore their documents. An overview of how the steps relate to each other is shown in Fig. <ref>.
§.§ Tutorial
In this step, the user can watch a 5-minute video[The video can be watched here: <https://www.youtube.com/watch?v=zR9XisDFQ7w>] explaining the task that QuOTeS was made for and an overview of how to use the system. The main part of the video explains the different parts of the system and how they are linked together. It also explains the effect of the different retrieval options and how to download the results from the system to keep analyzing them. Since users will not necessarily need to watch the video every time they use the system, the first step they see when they access the website is the Upload, described below.
§.§ Upload
In this step, the users can upload their documents and get the system ready to start interacting with them via a file upload form. Once the text from all the documents has been extracted, they can click on Process Documents to prepare the system for the retrieval process. After that, they can select the options for the system in the Settings screen, which contains two drop-down menus. In the Embeddings menu, the user can choose how the system represents the query and the documents from three options: TFIDF embeddings based on word unigrams, TFIDF embeddings based on character trigrams and Sentence-BERT embeddings <cit.>. In the Classifier menu, the user can choose which Supervised Machine Learning algorithm to use as the backbone for the system from three options: Logistic Regression, Random Forest, and Support Vector Machine.
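To make these options concrete, the following sketch shows how the three representations and three classifiers could be instantiated with scikit-learn and sentence-transformers. The option names and the Sentence-BERT model identifier are our own assumptions and are not necessarily the ones used inside QuOTeS.

# Illustrative configuration of the Settings options; not the QuOTeS source code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sentence_transformers import SentenceTransformer

EMBEDDINGS = {
    "tfidf-word-unigrams": TfidfVectorizer(analyzer="word", ngram_range=(1, 1)),
    "tfidf-char-trigrams": TfidfVectorizer(analyzer="char", ngram_range=(3, 3)),
    "sentence-bert": SentenceTransformer("all-MiniLM-L6-v2"),  # assumed model name; uses .encode() rather than .fit_transform()
}

CLASSIFIERS = {
    "logistic-regression": LogisticRegression(),
    "random-forest": RandomForestClassifier(),
    "svm": SVC(probability=True),  # probability=True so sentences can be ranked by score
}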
§.§ Documents
In this step, the user can browse the text extracted from the documents. The sentences from the papers are shown in the order they were found so that the user can verify that the text was extracted correctly. The user can select which documents to browse from the drop-down menu at the top, which displays all the documents that have been uploaded to the system. Later on, when the user starts labeling the sentences with respect to the query, they are colored accordingly: green (for relevant) or pink (for irrelevant).
§.§ Search
This is the first main step of the system. In the text box, users can write their query. After clicking on Search, the system retrieves the most relevant sentences using the classical Vector Space Model from Information Retrieval.
The sentences below are the best matches according to the query and the representation the user picked in the Upload step. The user can label them by clicking on them, after which they are colored accordingly: green (for relevant) or pink (for irrelevant). Once the users label the sentences, they can click on Submit Labels, after which the system records the labels and shows a new batch of recommendations.
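A minimal sketch of this retrieval step, assuming a TF-IDF representation and cosine similarity as in the classical Vector Space Model; function and variable names are illustrative and not taken from the QuOTeS code base.

# Rank the sentences of the collection by their cosine similarity to the query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def search(query, sentences, top_k=10):
    vectorizer = TfidfVectorizer()
    sentence_matrix = vectorizer.fit_transform(sentences)      # one row per sentence
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, sentence_matrix)[0]
    ranking = scores.argsort()[::-1][:top_k]                    # best matches first
    return [(sentences[i], scores[i]) for i in ranking]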
§.§ Explore
This is the second main step of the system. Here, the system trains its classifier using the labels the user submits to improve its understanding of the query. Two plots at the top show the distribution of the recommendation score and how it breaks down by document to help the user better understand the collection. The sentences below work exactly like in Search, allowing the user to label them by clicking on them and submitting them into the system by clicking on Submit Labels. Users can label the collection as much as they want, but the recommended criterion is to stop when the system has not recommended anything relevant in three consecutive turns, shown in the colored box at the top right.
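The sketch below illustrates one Explore round under the Continuous Active Learning protocol: the chosen classifier is refit on all labels collected so far and the highest-scoring unlabeled sentences are surfaced next. The interface is our own simplification; a full implementation would also track the stopping criterion of three consecutive rounds without a relevant recommendation.

# One round of Continuous Active Learning (assumes both classes have been labeled).
import numpy as np

def explore_round(embeddings, labels, classifier, batch_size=10):
    # labels: {sentence_index: 1 for relevant, 0 for irrelevant}
    idx = list(labels)
    classifier.fit(embeddings[idx], np.array([labels[i] for i in idx]))
    scores = classifier.predict_proba(embeddings)[:, 1]        # P(relevant) per sentence
    unlabeled = [i for i in range(embeddings.shape[0]) if i not in labels]
    unlabeled.sort(key=lambda i: scores[i], reverse=True)      # most promising first
    return unlabeled[:batch_size]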
§.§ History
In this step, users can review what they have labeled and where to find it in the papers. The sentences are shown in the order they were presented to the user, along with the document they came from and their sentence number to make it easier to find them. Like before, the user can click on a sentence to relabel it if necessary, which makes it change color accordingly. There are two buttons at the top: Clear allows the user to restart the labeling process, and Download .csv downloads the labeling history as a CSV file for further analysis.
§.§ Results
In the last step of QuOTeS, the user can assess the results. There are two plots at the top that show the label counts and how they break down by document, while the bottom part displays the query and the sentences labeled as relevant. The query, along with these sentences, makes up the final output of the system, which is the Query-Focused Summary of the collection. The user can download this summary as a .txt file or the whole state of the system as a JSON file for further analysis.
§ EVALUATION
To evaluate the effectiveness of QuOTeS, we performed a user study where each participant uploaded up to five documents into the system and labeled the sentences in them for a maximum of one hour. The user study was implemented as a website written using the Flask package <cit.>, where the participants went through eight screens to obtain their consent, explain the task to them and fill out a questionnaire about their perception of the difficulty of the task and the performance of QuOTeS. An overview of the user study is shown in Figure <ref>.
§.§ Methodology
In the Welcome Screen, the participants were shown a quick overview of the whole user study and its duration. In the Screening Questionnaire, they filled out a short questionnaire indicating their education level and the frequency they read academic papers. In the Consent Form screen, they read a copy of the consent form and agreed to participate by clicking on a checkbox at the end. In the Video Tutorial screen, they watched a five-minute video about the task and how to use QuOTeS. In the Results Upload screen, they were redirected to the website of QuOTeS and after using the system for a maximum of one hour, they uploaded the JSON file containing the state of the system at the end of their interaction. In the Questionnaire screen, they filled in a three-part questionnaire to evaluate the usability of QuOTeS, its features and the quality of the summaries. In the Compensation Form, they provided their name and email to be able to receive the compensation for their participation. Finally, the End Screen indicated that the study was over and they could close their browser.
§.§ Participants
To recruit participants, we sent a general email call to our faculty, explaining the recruiting process and the compensation. To verify that participants were fit for our study, they filled out a screening questionnaire with only two questions, with the purpose of knowing their research experience and the frequency they normally read academic papers. The requirements to participate were to have completed at least an undergraduate degree in a university and to read academic papers at least once a month. The results of the screening questionnaire for the participants who completed the full study are shown in Table <ref>, while the full results of the screening questionnaire can be found in the code repository.
§.§ Research Instrument
During the user study, the participants filled out a questionnaire composed of thirty questions divided into three parts: Usability, Features, and Summary Quality. In the Usability part, they filled out the questionnaire from the standard System Usability Scale <cit.>, which is a quick and simple way to obtain a rough measure of the perceived usability of the system in the context of the task it is being used for. In the Features part, they answered sixteen questions about how difficult the task was and the usefulness of the different components of the system. In the Summary Quality part, they answered four questions about the relevance of the sentences in the system and the conciseness, redundancy, and completeness of the summaries produced. Finally, the participants submitted their opinions about the system and the user study in a free-text field. The full questionnaire presented to the participants can be found in the code repository.
§.§ Experimental Results
The frequency tables of the responses for the System Usability Scale questionnaire, the Features questionnaire, and the Summary Quality questionnaire can be found in the code repository. To make it easier to understand the responses from the questionnaires, we computed a score for the Features and Summary Quality parts in the same fashion as for the System Usability Scale: the questions with positive wording have a value from 0 to 4, depending on their position on the scale. In contrast, the questions with negative wording have a value from 4 to 0, again depending on their position on the scale. The distribution of the scores obtained during the user study is shown in Fig. <ref>.
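A minimal sketch of this scoring scheme, assuming five-point items encoded as 0 to 4; which items count as negatively worded is passed in explicitly, since that depends on the questionnaire.

def questionnaire_score(responses, negative_items):
    # responses: {item_id: answer encoded 0..4}; negative_items: ids of reversed items
    return sum((4 - value) if item in negative_items else value
               for item, value in responses.items())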
§ DISCUSSION
§.§ Questionnaire Responses
Overall, QuOTeS received a positive response across users, as the questionnaires show that the system seems to fulfill its purpose. Most of the time, the participants reported that the sentences recommended by the system seemed relevant and that the summaries appeared succinct, concise, and complete. Participants felt they understood the system's task and how it works. Furthermore, they felt that the components of the system were useful. Nonetheless, the system can be improved in the following ways:
* As shown by the last question of the System Usability Scale questionnaire, participants felt that they needed to learn many things before using the system. This is understandable, as QuOTeS is based on several concepts which are very specific to Natural Language Processing and Information Retrieval: the task of Query-Focused Summarization itself, the concept of embedding documents as points in space, and the concept of training a Machine Learning classifier on the fly to adapt it to the needs of the user. Nonetheless, knowledge of these concepts is not strictly required to obtain useful insights from the system.
* As shown by the Features questionnaire, the system can still be improved in terms of speed. Also, the users felt it was unclear what the different settings do and how to interpret the information in the plots. This may be improved with a better deployment and a better introductory tutorial that provides use cases for each one of the options in the settings: giving the user some guidance about when it is best to use word unigrams, character trigrams, and Sentence-BERT embeddings would facilitate picking the correct options.
The relationship between the different scores computed from the responses of the user study is shown in Fig. <ref>. All the scores show a clear, positive relationship with each other, with some outliers. The relationships found here are expected because all these scores are subjective and measure similar aspects of the system. Of all of them, the relationship between the System Usability Scale and the Summary Quality is the most interesting: it shows two subgroups, one in which the usability remains constant and the summary quality varies wildly, and another in which they both grow together. This may suggest that for some users, the query is so different from the collection that, although the system feels useful, they are dissatisfied with the results.
§.§ Analysis of the Labels Collected During the User Study
To further evaluate the performance of QuOTeS, we estimated the Precision and Topic Concentration using the data labeled by the users. To compute the Precision, we divided the number of sentences labeled as relevant over the total number of sentences shown to the user. To compute the Topic Concentration, we followed the approach from <cit.>, using the Kullback-Leibler Divergence <cit.> between the unigram-based vocabulary of the document collection and the unigram-based vocabulary of the query-focused summary produced.
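Both metrics can be sketched as follows. The add-one smoothing is our own assumption to keep the divergence finite when a word occurs in only one of the two token lists, and the exact tokenization used in the study may differ.

from collections import Counter
import math

def precision(labels):
    # labels: {sentence_index: 1 if labeled relevant, 0 otherwise} for the sentences shown
    return sum(labels.values()) / len(labels) if labels else 0.0

def kl_divergence(p_tokens, q_tokens):
    # KL(P || Q) between the unigram distributions of two token lists, with add-one smoothing
    vocabulary = set(p_tokens) | set(q_tokens)
    p, q = Counter(p_tokens), Counter(q_tokens)
    def prob(counts, word):
        return (counts[word] + 1) / (sum(counts.values()) + len(vocabulary))
    return sum(prob(p, w) * math.log(prob(p, w) / prob(q, w)) for w in vocabulary)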
The distributions of the Precision and KL-Divergence, along with their relationship, are shown in Fig. <ref>. The relationship between the two metrics is noisy, but it is somewhat negative, suggesting that as the KL-Divergence decreases, the Precision increases. This result makes sense because the KL-Divergence measures how much the query deviates from the contents of the document collection.
On the other hand, Precision is displayed as a function of the Labeling Effort for each one of the participants in the user study in Fig. <ref>. We computed the Labeling Effort as the fraction of sentences reviewed by the user. The system displays a stable average Precision of 0.39, which means that, on average, two out of five recommendations from the system are relevant. There appear to be two classes of users: in the first class, the system starts displaying a lot of relevant sentences, and the Precision drops as the system retrieves them; in the second class, the story is entirely the opposite: the system starts with very few correct recommendations, but it improves quickly as the user explores the collection.
The relationships between the Precision and the scores obtained from the questionnaires in the user study are shown in Fig. <ref>. Precision is well correlated with all the other scores, which is expected since it is the first metric perceived by the user, even before answering the questionnaires. One outlier is particularly interesting: one of the users gave the system low scores in terms of the questionnaires, despite having the highest Precision of the dataset. The labels produced by this user display a lower Divergence than usual, which means that their query was much closer to the collection than those of most users, as shown in Fig. <ref>. This could mean that they already had excellent prior knowledge about the document collection. Therefore, although the system was retrieving relevant sentences, it was not giving the user any new knowledge.
The relationship between the Divergence and the scores is shown in Fig. <ref>. The relationship shown is noisier than the ones involving Precision. Although the System Usability Scale and Features scores show a positive relationship with the Divergence, this is not the case with the Summary Quality. This suggests that to have a high-quality summary, it is necessary to start with a collection close to the query. Another interesting point is that these relationships suggest that the system is perceived as more useful and better designed as the query deviates from the document collection.
To finalize our evaluation of QuOTeS, we measured its performance using the (Query, Document Collection) pairs collected during the user study. As a baseline, we used the traditional Vector Space Model, which is equivalent to disabling the Machine Learning Classifier component of QuOTeS (as shown in Fig. <ref>). We evaluated the three variations of the baseline system as they appear inside QuOTeS. The performance obtained by this baseline is shown in Fig. <ref>.
Even when using Sentence-BERT embeddings, the performance of the baseline system is markedly inferior compared to that of QuOTeS, as shown in Fig. <ref>. Although the Sentence-BERT embeddings start with a much higher Precision than the traditional embeddings, they quickly deteriorate as the score threshold increases, while the traditional embeddings catch up in terms of Precision with the same level of Recall. However, since none of these models obtained a satisfactory performance, it is clear that using QuOTeS enabled the users to find much more relevant sentences than they could have found otherwise. This highlights the importance of the Continuous Active Learning protocol in QuOTeS, as it enables the system to leverage the feedback from the user, so the results do not depend entirely on the embeddings produced by the language model.
§.§ Limitations
Although our experimental results are promising, the system we propose has two main limitations, given the complexity of the task and the amount of resources needed to produce benchmarks for this topic:
* First, the purpose of QuOTeS is not to provide fully automatic summaries since it is hard to guarantee that all the relevant sentences were retrieved in the process. Instead, its purpose is to point users in the right direction so that they can find the relevant information in the original documents.
* And second, the summaries produced by the system can still be improved using traditional techniques from Automatic Summarization. For example, the sentences in the summary could be reordered or removed to improve fluency and conciseness. These aspects would be beneficial if the goal is to produce a fully automatic summary of the collection of articles.
§ CONCLUSIONS AND FUTURE WORK
In this paper, we introduce QuOTeS, a system for Query-Focused Summarization of Scientific Documents designed to retrieve sentences relevant to a short paragraph, which takes the role of the query. QuOTeS is an interactive system based on the Continuous Active Learning protocol that incorporates the user's feedback in the retrieval process to adapt itself to the user's query.
After a comprehensive analysis of the questionnaires and labeled data obtained through a user study, we found that QuOTeS provides a positive user experience and fulfills its purpose. Also, the experimental results show that including both the user's information need and feedback in the retrieval process leads to better results that cannot be obtained with the current non-interactive methods.
For future work, we would like to conduct a more comprehensive user study where users read the whole papers and label the sentences manually, after which they could use QuOTeS and compare the summaries produced. Another interesting future direction would be to compare the system head-to-head with the main non-interactive methods from the literature on a large, standardized dataset.
§ ACKNOWLEDGEMENTS
We thank the Digital Research Alliance of Canada (<https://alliancecan.ca/en>), CIUNSa (Project B 2825), CONICET (PUE 22920160100056CO, PIBAA 2872021010 1236CO), MinCyT (PICT PRH-2017-0007), UNS (PGI 24/N051) and the Natural Sciences and Engineering Research Council of Canada (NSERC) for the resources provided to enable this research.
|
http://arxiv.org/abs/2306.05057v1
|
20230608092225
|
SmartBugs 2.0: An Execution Framework for Weakness Detection in Ethereum Smart Contracts
|
[
"Monika di Angelo",
"Thomas Durieux",
"João F. Ferreira",
"Gernot Salzer"
] |
cs.CR
|
[
"cs.CR",
"cs.SE"
] |
SmartBugs 2.0: An Execution Framework for Weakness Detection in Ethereum Smart Contracts
Monika di Angelo, Thomas Durieux, João F. Ferreira, Gernot Salzer
=========================================================================================
Smart contracts are blockchain programs that often handle valuable assets.
Writing secure smart contracts is far from trivial, and any vulnerability may lead to significant financial losses.
To support developers in identifying and eliminating vulnerabilities, methods and tools for the automated analysis have been proposed.
However, the lack of commonly accepted benchmark suites and performance metrics makes it difficult to compare and evaluate such tools.
Moreover, the tools are heterogeneous in their interfaces and reports as well as their runtime requirements, and installing several tools is time-consuming.
In this paper, we present SmartBugs 2.0, a modular execution framework.
It provides a uniform interface to 19 tools aimed at smart contract analysis and accepts both Solidity source code and EVM bytecode as input.
After describing its architecture, we highlight the features of the framework.
We evaluate the framework via its reception by the community and illustrate its scalability by describing its role in a study involving 3.25 million analyses.
Bytecode, EVM, Solidity, Security, Vulnerability
§ INTRODUCTION
Smart contracts are a fundamental part of blockchain technology, particularly on platforms like Ethereum, where they enable the development of decentralized applications.
Benefits like transparency, trust, and security are paired with potential risks, as malicious actors can exploit vulnerable smart contracts and cause substantial financial losses.
Therefore, there is a pressing need for automated tools that help identify such vulnerabilities.
The goal of this paper is to present SmartBugs 2.0, a modular execution framework that simplifies the execution of analysis tools for smart contracts, facilitates reproducibility, and supports large-scale experimental setups.
It is open-source and publicly available at <https://github.com/smartbugs/smartbugs>.
Methodology.
SmartBugs supports three modes for analyzing smart contracts: Solidity source code, creation bytecode, and runtime code.
It currently includes 19 tools encapsulated in Docker images.
With its standardized output format (via scripts that parse and normalize the output of the tools), it facilitates an automated comparison of the findings across tools.
In the context of a bulk analysis, it allows for the parallel, randomized execution of tasks for the optimal use of resources.
Envisioned users.
SmartBugs is intended for
* developers auditing smart contracts before deployment,
* analysts evaluating already deployed smart contracts,
* tool developers comparing selected tools,
* researchers performing large-scale analyses,
and thereby advances the state-of-the-art in the automated analysis of smart contracts.
Engineering challenges and new features.
Compared to the original version, SmartBugs 2.0 offers the following improvements that overcome several engineering challenges:
* support for bytecode as input
* additional tools
* modular integration of new tools
* support for multiple versions of the same tool
* generic architecture
* increased robustness and reliability
* detection and reporting of tool errors and failures
* SARIF as output format
* mapping of tool findings to the SWC taxonomy[https://swcregistry.io/]
By adding bytecode as an accepted input format, the range of smart contracts that can be analyzed by SmartBugs has been extended to programs without source code, including all smart contracts already deployed.
Due to its modular structure, SmartBugs 2.0 can easily be extended with further tools.
The standardized output format and the mapping to a tool-independent taxonomy both facilitate the integration of a comprehensive vulnerability analysis into the development cycle.
Validation studies.
To showcase the capabilities of SmartBugs 2.0, we present a typical use case that demonstrates how SmartBugs 2.0 has supported the largest experimental setup to date, both in terms of the number of tools and the number of analyzed smart contracts.
§ ARCHITECTURE
Figure <ref> depicts the architecture of SmartBugs.
It can be started from the command line or called from Python programs.
The main arguments to provide are a specification of the smart contracts to process and a list of tools to execute.
For a mass analysis, it is also important to specify the number of parallel processes as well as resource bounds per process.
Task builder. For each smart contract matching the specification, the task builder selects those tools that fit the format of the smart contract (source code, creation bytecode, or runtime code) and pulls their Docker images.
Moreover, it determines a unique folder for the output of each run.
Sometimes the naming scheme specified by the user leads to collisions, meaning that the output of different smart contracts or tools would end up in the same folder.
The task builder resolves conflicts in a deterministic way such that any restart of SmartBugs with the same arguments after an interrupt leads to the same output folders.
Most tools analyzing Solidity source code either contain a compiler for a fixed Solidity version or download an appropriate compiler on the fly.
Both approaches are problematic in the context of a bulk analysis.
In the first case, the integrated compiler is not able to handle smart contracts written for a different version, whereas in the second case an adequate compiler will be downloaded, but used only once and then discarded together with the container of the tool, which leads to redundant downloads during the analysis.
Therefore, the task builder inspects the smart contracts and downloads the corresponding compilers beforehand.
Later on, during analysis, a compiler matching the smart contract is injected into the container such that the tool is able to compile the contract without attempting to download the compiler itself.
Overall, the task builder downloads all resources and detects problems before actually starting the analysis.
This prevents racing conditions, errors popping up only during the analysis phase, and minimizes network traffic.
Runner. The runner receives a list of tasks, where each task contains the information for applying a single tool to a single smart contract.
The length of the list is roughly the product of the number of smart contracts and the number of tools.
To improve the utilization of server resources, the runner randomly permutes the task list.
Then it starts the requested number of parallel analyzers, which process the tasks from the list one after the other.
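The runner's strategy can be illustrated with the generic sketch below; it is not the actual SmartBugs code, but it captures the idea of shuffling the task list so that expensive tools and contracts spread evenly over time, and of letting a pool of analyzers consume the tasks in parallel.

import random
from concurrent.futures import ThreadPoolExecutor

def run_tasks(tasks, analyze, n_workers=4):
    # tasks: list of (tool, smart_contract) pairs; analyze: runs one tool on one contract
    random.shuffle(tasks)                        # randomized execution order
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(analyze, tasks))    # parallel analyzers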
Analyzers.
Each analyzer picks a task from the queue of the runner, copies the smart contract, the Solidity compiler (if necessary) and auxiliary scripts to a temporary volume and runs the Docker image of the tool with this volume mounted.
Once the Docker container has terminated, the analyzer extracts the result files and writes them to the designated output folder.
It adds a file with meta information like the execution time, the arguments of the Docker run, and the version of the tool.
Parsing.
The output of the tools is heterogeneous: some provide their results in structured form, others produce textual output.
Parsers are small scripts accompanying each tool. They scan the results for the weaknesses detected, but also watch out for errors (irregular conditions reported by the tool) and failures (exceptions not caught by the tool).
The information is written to JSON files and, to facilitate the integration of SmartBugs into CI workflows, to SARIF files.
§ FEATURES
Output format SARIF.
SmartBugs 2.0 can provide the results in SARIF (Static Analysis Results Interchange Format), an OASIS standard that defines a common reporting format for static analysis tools <cit.>.
SARIF is JSON-based and allows IDEs to access the analysis reports in a uniform way.
By adopting a common format that can be parsed by readily available tools, the cost and complexity of aggregating the results of analysis tools into common workflows diminishes.
For example, it becomes trivial to integrate SmartBugs into GitHub workflows, since GitHub automatically creates code scanning alerts in a repository using information from SARIF files.
[<https://docs.github.com/en/code-security/code-scanning/integrating-with-code-scanning/uploading-a-sarif-file-to-github>]
For an example of the integration of the SARIF output produced by SmartBugs with GitHub, we refer the reader to the repository sarif-tests.[<https://github.com/smartbugs/sarif-tests/security/code-scanning>]
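For illustration, the fragment below shows the general shape of such a report as a Python dictionary; the tool name, rule identifier, message, and location are invented and only meant to convey the SARIF 2.1.0 structure.

# Hand-written, minimal example of a SARIF-style report (illustrative values only).
sarif_report = {
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {"name": "ExampleAnalyzer"}},
        "results": [{
            "ruleId": "SWC-107",                  # reentrancy, per the SWC registry
            "level": "warning",
            "message": {"text": "Possible reentrancy in withdraw()."},
            "locations": [{
                "physicalLocation": {
                    "artifactLocation": {"uri": "contracts/Wallet.sol"},
                    "region": {"startLine": 42},
                },
            }],
        }],
    }],
}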
Bytecode input.
On Ethereum, smart contracts are deployed by sending a transaction containing the creation bytecode.
When executed by the Ethereum Virtual Machine, this code initializes the environment of the new contract and returns the runtime code that is actually stored on the chain.
In most cases, the creation bytecode is the result of compiling Solidity source code.
A significant enhancement of SmartBugs 2.0 is its ability to integrate tools that analyze the creation bytecode and runtime code directly, obviating the need to procure Solidity sources first.
In fact, for many smart contracts deployed on the chain, their source code is not available.
Of the tools currently included in SmartBugs, 13 are able to process creation bytecode and/or runtime code.
Provision of proper compiler versions.
Another important addition to SmartBugs 2.0 is its ability to select an appropriate compiler for each smart contract.
Solidity has seen a rapid development over the past years, with numerous breaking changes.
Therefore, programmers are strongly advised to include a pragma that specifies the language version that a smart contract was developed for.
Analysis tools have three strategies to cope with this situation.
Experimental tools (proofs-of-concept) may come with just a specific compiler version, restricting its applicability.
Other tools implicitly assume that the compiler on the command search path matches the smart contract to be analyzed.
The most versatile tools inspect the smart contract and download an appropriate compiler before starting analysis.
As none of these approaches fits the needs of an unsupervised bulk analysis, the task builder (see its description above) inspects the smart contracts, downloads each required compiler version once before the actual analysis, and then injects the correct one into every container.
This allows the tool to run the correct compiler version without the need for on-the-fly downloads, which would cost time and increase the network traffic.
As another benefit, this improvement enhances the reproducibility and uniformity of the analyses, as the same compiler version is used consistently across all runs.
Tool integration.
With the new version of SmartBugs, it is now possible to incorporate new tools without touching the code of SmartBugs itself.
The details of adding a new tool are described in the wiki of the SmartBugs repository[<https://github.com/smartbugs/smartbugs/wiki/Adding-new-analysis-tools>].
In essence, a few lines in a configuration file are needed to specify the docker image of the tool and its interface.
Moreover, for extracting the findings and errors from the result files, a Python script has to be added.
This new flexibility in adding tools also allows researchers to compare the behavior of different versions of the same tool, which is particularly useful for evaluating performance over time, or for ensuring that performance does not degrade with an update.
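To give an idea of what such a parser script involves, the generic sketch below scans a tool's raw output for findings, errors, and failures and returns them in a normalized form; the output pattern and the function interface are assumptions on our part and do not reflect the actual SmartBugs parser API.

import re

def parse_output(raw_output):
    findings, errors, failures = [], [], []
    for line in raw_output.splitlines():
        match = re.search(r"Vulnerability:\s*(\S+)", line)  # assumed output pattern
        if match:
            findings.append(match.group(1))
        elif "ERROR" in line:
            errors.append(line)        # irregular condition reported by the tool
        elif "Traceback" in line:
            failures.append(line)      # exception not caught by the tool
    return {"findings": findings, "errors": errors, "failures": failures}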
Mapping to a weakness taxonomy.
To compare and unify findings across tools, the idiosyncratic labels assigned by each tool need to be mapped to a common frame of reference.
SmartBugs 1.0 maps the findings to the vulnerability taxonomy DASP TOP 10[<https://dasp.co/>].
The new version adds a mapping of all findings (including those of the new tools) to the weakness taxonomy of the SWC registry.[<https://swcregistry.io/>]
The SWC registry is a community-driven catalog of software weaknesses in smart contracts, whose granularity is finer than the one of DASP TOP 10, which allows us to provide more detailed information about the weaknesses found by the tools.
This classification is added to the output, in order to be displayed in the context of the source or bytecode.
Supported tools.
The tools currently in SmartBugs 2.0 are listed in Table <ref>. Check marks in black indicate new additions, while the gray check marks in column `Solidity' identify the capabilities of the old version.
We added new tools as well as bytecode support for seven of the old tools.
In most cases, bytecode support refers to runtime code.
Only two tools are able to handle the creation bytecode as well.
§ EVALUATION
Reception.
The appreciation of SmartBugs by the community on GitHub is reflected in the following metrics.
With 13 contributors, it received over 400 stars, 81 issues were filed, and 110 users/organizations have forked the repository, with 50 unique cloners in the weeks from May 09 to 22, 2023.
SmartBugs is not only used by developers and security companies, but also in academic studies <cit.> or master theses <cit.>. Moreover, components of it have been used to build an ML-based tool <cit.>.
Use case.
In the largest experimental study to date <cit.>, we used SmartBugs 2.0 to execute 13 tools on almost 250,000 runtime bytecodes.
The tools reported over 1.3 million weaknesses in total.
With a resource limit of 30 minutes and 32 GB, the execution took a total of 31 years of compute time.
More than half of the tools could run on just 4 GB for the vast majority of the bytecodes and with less than 3 minutes on average per bytecode, while three tools ran into the limits for more than 1000 bytecodes.
The new feature in SmartBugs 2.0 of reporting errors and failures gives the user an indication of the bytecodes for which a tool may be operating outside of its specification.
This way, potential findings or non-findings are put in relation to the tool's ability to properly analyze the bytecode.
Figure <ref> depicts the error rate of each tool on a timeline of blocks on the Ethereum main chain, where each data point represents the percentage of reported errors in bins of 100,000 bytecodes.
As Mythril, Oyente and Vandal report no errors, they are not depicted.
Apparently, HoneyBadger, Maian, and Osiris experience an increasing error rate after 7.5 million blocks.
This information can be used to enhance the tools or make informed decisions about whether to use them for more recent smart contracts.
Moreover, tool failures may serve as a measure of robustness.
For eight tools, the failure rate was below 1% of the bytecodes, whereas for one tool, the failure rate reached 25%, meaning that the tool ran into an exception for one out of four bytecodes.
§ RELATED WORK
As documented in the previous sections, SmartBugs 2.0 is a major improvement over the original version of SmartBugs <cit.>, which was released in 2019.
To the best of our knowledge, the only other execution framework that implements similar ideas is USCV <cit.>.
It comprises eight tools for the analysis of Solidity source code, with seven of them also covered by SmartBugs.
USCV seems to be neither widely used nor maintained, as the latest of its 10 commits is from mid-2021 and no issues have been filed so far.
§ CONCLUSION
SmartBugs 2.0 has proven to be a useful tool for our own work as well as for fellow researchers and developers.
Its extensive use has shown some limitations, partly resulting in enhancement requests by users.
In future work, we will consider the following extensions.
Support for historic compiler versions.
SmartBugs supports Solidity 0.4.11 and above.
By accessing another repository, we can include versions down to 0.4.0.
Compiler versions older than that may be harder to come by.
Support for more complex formats of source code.
At the moment, each smart contract has to be contained in a single file.
However, complex projects are split into several files, with additional includes of system-wide libraries.
SmartBugs could try to determine the dependencies and transfer them into the container as well.
Use of source code mappings.
Tools for bytecode analysis can be made to analyze source code by compiling the source code before feeding the result to the tool. The difficult part is to map the bytecode addresses of weaknesses back to source code lines.
Addition of new tools.
The automated analysis of smart contracts is an active area, with new tools emerging every year.
We hope that we will be able to keep up, not least with the help of the community contributing further tool configurations.
|
http://arxiv.org/abs/2306.01457v1
|
20230602113306
|
Driving Context into Text-to-Text Privatization
|
[
"Stefan Arnold",
"Dilara Yesilbas",
"Sven Weinzierl"
] |
cs.CL
|
[
"cs.CL",
"cs.LG"
] |
Driving Context into Text-to-Text Privatization
Stefan Arnold, Dilara Yesilbas, Sven Weinzierl
===============================================
Metric Differential Privacy enables text-to-text privatization by adding calibrated noise to the vector of a word derived from an embedding space and projecting this noisy vector back to a discrete vocabulary using a nearest neighbor search. Since words are substituted without context, this mechanism is expected to fall short at finding substitutes for words with ambiguous meanings, such as 'bank'. To account for these ambiguous words, we leverage a sense embedding and incorporate a sense disambiguation step prior to noise injection. We encompass our modification to the privatization mechanism with an estimation of privacy and utility. For word sense disambiguation on the Words in Context dataset, we demonstrate a substantial increase in classification accuracy by 6.05%.
§ INTRODUCTION
A tension exists between the need to leverage textual data to develop language models and privacy concerns regarding the information conveyed by that data. This is of particular importance because personal information can be recovered from language models <cit.>.
Metric Differential Privacy provides a protection against the disclosure of private information. It has recently been tailored to textual analysis in the form of a text-to-text privatization mechanism <cit.>. Building on continuous-valued word embeddings, it relies on the assumption that words close in embedding space serve similar semantic and syntactic roles. This property of embeddings is exploited to replace all words in a text with substitute words given a probability that can be controlled by a noise parameter. A nearest neighbor search is employed to return a substitute word from all words in the embedding space.
A notable deficiency of word embeddings is that they assign a single representation to each word. Depending on its context, an ambiguous word can refer to multiple, potentially unrelated, meanings. Word embeddings are unable to reflect this dynamic nature of words, leading to potentially inappropriate substitutions when used for text-to-text privatization. Clues signaled by inappropriate substitute words may direct a classifier into the opposite direction during downstream tasks. Contextualised word embeddings are an attempt at addressing this limitation by computing dynamic representations for words which can adapt based on context. However, this dynamic behavior makes it virtually impossible to return a substitute word as the nearest neighbor search requires all vectors to be pre-computed and located in the same embedding space.
Sense embeddings represent a middle course between lexical embeddings and contextualized embeddings. By decoupling the static representations of words into multiple representations that capture the meaning of words (covering one representation for each meaning of a word), sense representations enable context-aware text-to-text privatization.
We make the following contributions:
∙ We replace the word embedding in <cit.> with a sense embedding constructed according to <cit.>. To utilize the decoupled senses of words, we further incorporate a word-sense disambiguation prior to the privatization step that discriminates a sense given a sense inventory and a context window.
∙ We investigate the privacy and utility of substitutions compared to the baseline privatization mechanism without context awareness. Although the embedding space is congested by additional representations for each sense of a word, we find that the plausible deniability (acting as our proxy for privacy) is shaped almost identically but allows for smaller noise injection. To demonstrate the utility, we obtain substitutions of identical words paired in either the same or different contexts. At equivalent levels of privacy, substitutions whose original words belong to the same context show a significantly higher similarity than substitutions whose original words belong to different contexts. Using a set of benchmark tasks from <cit.>, we demonstrate that this difference is an important signal for downstream classification.
§ PRELIMINARIES
§.§ Differential Privacy
Metric Differential Privacy <cit.> is a generalization of differential privacy that originated in the context of location-based privacy, where locations close to a user are assigned with a high probability, while distant locations are given negligible probability. Using word embeddings as a corollary to geo-location coordinates, metric differential privacy has been adopted from location analysis to textual analysis by <cit.>. This avoids the curse of dimensionality arising from randomized response <cit.>.
We follow the formulation of <cit.> for metric differential privacy in the context of textual analysis. Equipped with a discrete vocabulary set 𝒲, an embedding function ϕ : 𝒲→ℝ^n, where ℝ^n represents a high-dimensional embedding space, and a distance function d: ℝ^n×ℝ^n→ [0,∞) satisfying the axioms of a metric (i.e., identity of indiscernibles, symmetry, and triangle inequality), metric differential privacy is defined in terms of the distinguishability level between pairs of words. Formally, a randomized mechanism ℳ:𝒲→𝒲 satisfies metric differential privacy with respect to the distance metric d(·) if for any w,w^',ŵ∈𝒲 the distributions of ℳ(w) and ℳ(w^') are bounded by Equation <ref> for any privacy budget ε > 0:
ℙ [ℳ(w) = ŵ]/ℙ [ℳ(w^') = ŵ]≤ e^ε d{ϕ(w),ϕ(w^')}.
This probabilistic guarantee ensures that the log-likelihood ratio of observing any word ŵ given two words w and w’ is bounded by ε d{ϕ(w),ϕ(w’)}, providing plausible deniability <cit.> with respect to all w ∈𝒲. We refer to <cit.> for a complete proof of privacy. For the mechanism ℳ to provide plausible deniability, additive noise is in practice sampled from a multivariate distribution such as the multivariate Laplace distribution <cit.> or truncated Gumbel distribution <cit.>.
We recall that differential privacy requires adjacent datasets that differ in at most one record. Since the distance d(·) captures the notion of closeness between datasets, metric differential privacy instantiates differential privacy when the Hamming distance is used, i.e., if ∀ w,w^': d{ϕ(w),ϕ(w^')} = 1. Depending on the distance function d(·), metric differential privacy is therefore generally less restrictive than differential privacy. Intuitively, words that are distant in metric space are easier to distinguish compared to words that are in close proximity. Scaling the indistinguishability by a distance d(·) avoids the curse of dimensionality that arises from a large vocabulary 𝒲 and allows the mechanism ℳ to produce similar substitutions ŵ for similar w and w^'. However, this scaling complicates the interpretation of the privacy budget ε, as it changes depending on the metric employed.
Related Work. The multivariate mechanism for text-to-text privatization by <cit.> has been extended in orthogonal directions to further improve the utility <cit.> and privacy <cit.>.
Drawing inspiration from <cit.>, we complement on the line of inquiry dedicated to the enhancement of the utility. By leveraging the curvature of the space at different locations in the Hyperbolic space of Poincaré embeddings <cit.>, their mechanism preserves the hierarchical structure of words during substitution. We persist in the Euclidean space and instead replace the word embedding with a sense embedding to account for the ambiguity of words during substitution. Our results demonstrate that this modification leads to improved performance on downstream tasks while being compatible with prevalent embedding mechanisms.
§.§ Word Embeddings
Since metric differential privacy for text-to-text privatization operates on word embeddings, the merits of privatization are limited by the capabilities of these word embeddings. Starting from sparse vectors suffering from the curse of dimensionality, which makes computation and storage infeasible, most research on word embeddings is dedicated to learning dense vectors from corpus-level co-occurrence statistics <cit.>. To learn these dense vectors, two mirrored approaches have been proposed: continuous bag-of-words and skip-gram. Continuous bag-of-words is trained to predict a word from a fixed window size of context words, whereas skip-gram specifies the probability of observing the context words conditioned on a word within a window. This results in real-valued vector representations of words that capture interpretable analogical relations between words.
A limitation of these embedding mechanisms is that they conflate all meanings of a word into a single representation, and the most frequent meaning of a word dominates this representation. By conflating all meanings, word embeddings are unable to discriminate ambiguous words. This inability to distinct between ambiguous words is inherited to word substitutions obtained from privatization.
§.§ Sense Embeddings
To address the meaning conflation deficiency of word embeddings, one can represent meanings of words in the form of sense embeddings. Learning sense embeddings has been an active area of research until the emergence of contextual embeddings. We briefly recall some methods to sense representation. Exploiting an unlabeled corpus of text, methods to resolve the meaning conflation deficiency can be divided into three main branches: (1) a staged induction of word senses followed by learning of sense representations, (2) a joint induction of word senses together with learning of sense representations, and (3) retrofitting an existing word embedding by de-conflating word representations into sense representations.
The sense distinctions required to discriminate the meaning of a word are extracted from text corpora by clustering words according to their contexts given a window size. This paradigm is related to word-sense induction. It comes with algorithmic complexity and interpretability problems. Instead of a word-sense induction by clustering, an alternative approach is to derive word senses from pre-defined sense inventories. This paradigm is related to word-sense disambiguation in which ambiguous words must be assigned a sense from the sense inventory. Exploiting knowledge from pre-defined sense inventories for the initialization of senses allows learning representations that are linked to interpretable sense definitions. Two shortcomings are apparent to learning sense representations using word-sense disambiguation. It is assumed that the sense distinctions intended by the text matches those defined in the sense inventory. Unable to handle words that are not defined in the sense inventory, relying on pre-defined senses hinges on the coverage of the sense inventory.
Staged training of sense embeddings. The training of sense embeddings initially employed a staged approach <cit.>. <cit.> constructed sense vectors by clustering sparse vectors corresponding to occurrences of words into a predetermined number of clusters. Clustering is performed by a parametric method that permits controlling the semantic breadth using a per-cluster concentration. Assuming a fixed number of senses for all words, the centroids of the clusters are used as sense vectors and word occurrences are relabeled according to the cluster they belong to. This idea has been extended to dense vectors <cit.>.
Instead of inducing senses by clusters, a straightforward method is to disambiguate text corpora as defined by a sense inventory and apply an embedding method on the resulting sense-annotated text <cit.>. <cit.>, for instance, use an off-the-shelf disambiguation process to obtain a sense-annotated corpus and directly learn sense representations.
Joint training of sense embeddings. A staged approach to learning sense representations suffers from the limitation that clustering and learning does not take advantage from their inherent similarities. To avoid the issues brought by a two-step clustering, the idea of clustering context vectors has been adapted into the training of word embeddings <cit.>. Performing clustering and embedding learning jointly, the intended sense for each word is dynamically selected as the closest sense to the context and weights are updated only for that sense. Assuming a fixed number of senses per word, <cit.> introduced an expectation maximization integrated with skip-gram that learns multiple senses weighted by their prior probability. Since words can have a highly dynamic number of senses that range from monosemous words to polysemous words with dozens of associated meanings, this assumption presents a severe limitation. <cit.> address the varying polysemy problem of sense representation by setting the number of senses of a word as defined by a sense inventory. Deriving the number of senses for each word from a sense inventory, it does not need to create or maintain clusters to discriminate between senses. A better solution would involve dynamic induction of senses from the text corpus. <cit.> applies a non-parametric clustering procedure for estimating the granularity of senses for each word. Similar to <cit.>, it represents the context of a word as the centroid of the vectors of its words but allocates a new sense vector each time the similarity of a context to existing senses is below a certain threshold. By using latent topic modeling to assign topics to each word in a corpus <cit.> and a mixture of weights that reflect different association degrees of each word to multiple senses in the context <cit.>, words can be discriminated into more general topics.
Retrofitting of word embeddings.
Instead of training a word and sense embedding jointly, research exists on refining a word embedding to match semantic constraints <cit.>. Given a word embedding, <cit.> propose retrofitting as a post-processing step in which words that are connected by a relationship derived from a semantic network are moved closer together in the embedding space. <cit.> tailored retrofitting towards learning representations for the senses listed in a sense inventory. Using a random walk, <cit.> extracted a set of sense biasing words from an external sense inventory. To de-conflate a word, they add a set of sense embeddings to the same space and push words in the space to the region occupied by its corresponding sense biasing words.
Most retrofitting approaches rely on signals from sense inventories. To transform word embeddings to sense embeddings without external resources, <cit.> construct a graph by connecting each word to a set of related words. Using ego-network clustering of words, senses are induced as a weighted average of words in each cluster.
§.§ Contextual Embeddings
Although much research has been directed to sense embeddings, the field shifted towards learning contextual embeddings <cit.>. Rather than pre-computing a static representation for each word, contextualized embeddings dynamically change the representation of a word depending on the context. Harnessing sense signals during the training objective of contextual embeddings has been shown to promote the disambiguation of word meanings <cit.>. However, the dynamic representations produced by contextual embeddings disqualify them for privatization, as the nearest neighbor search requires that the representations are aligned in a shared embedding space.
§ METHODOLOGY
Aiming at context-aware privatization of ambiguous words in texts, we adopt the privatization mechanism of <cit.> and replace the word embedding with a sense embedding. The sense embedding is constructed by building and clustering a graph of nearest neighbors based on vector similarities <cit.>.
Using a context window of size 3 and a minimum word frequency of 5, we construct a 300-dimensional word embedding from a large text dump. We align our vocabulary with the words contained in an external vocabulary resource. Our word embedding contains word vectors for 95,670 words. For each word in the word embedding, we retrieve its 200 nearest neighbors according to the cosine similarity of their word vectors. Once the similarities are calculated, we build a graph of word similarities. Assuming that words referring to the same sense tend to be tightly connected, while having fewer connections to words referring to different senses, word senses can be represented by a cluster of words.
A sense inventory is induced from ego-network clustering. The clustering yielded 248,218 word senses. Each word sense is indexed by a sense identifier. Performing graph clustering of ego-networks is non-parametric. It makes no assumptions about the number of word senses. However, the number and definition of the resulting word senses are not linked to a lexical inventory. Since a word sense is assumed as a composition of words in a cluster, sense vectors are calculated as a weighted pooling of word vectors representing cluster items.
In Figure <ref>, we depict the averaged pairwise distances of words as a function of the number of senses. On average, the distance within word senses is considerably lower than the average distance between words in the embedding space (depicted by a dotted line at 1.0550). Since the privatization step is applied directly to the structure of the embedding space, the distance between senses originating from the same word must be taken into account when assessing utility and privacy.
To utilize the sense representations, we incorporate a disambiguation step prior to the privatization. Given a word and its context words, we map the word to a set of its sense vectors according to the sense inventory. The disambiguation strategy is based on the cosine similarity between sense and context words: 𝐜·𝐬_i/(‖𝐜‖‖𝐬_i‖), where 𝐜 is the mean of the word vectors from the context words. In line with the context size during sense induction, context words for the sense disambiguation are selected within a window of 5. This step is repeated for each word prior to the privatization step.
The privatization step follows a multi-step protocol: We retrieve the sense vector for each disambiguated word. This sense vector is perturbed with noise sampled from a multivariate distribution and its noisy representation is then projected back to the discrete vocabulary space of the sense embedding. As noisy representations are unlikely to exactly represent words in the embedding space, a nearest neighbor approximation is returned. To obtain a private text of word forms, we truncate the sense identifier from the word senses. The result is a privatized text that can be post-processed by word embeddings agnostic to the sense embedding.
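The two steps can be sketched as follows, under our assumptions about the interfaces: the intended sense is the one whose vector is closest to the mean context vector, the selected sense vector is perturbed with multivariate Laplace noise (a direction drawn uniformly from the unit sphere and a magnitude drawn from a Gamma distribution with scale 1/ε), and the nearest sense in the embedding is returned as the substitute.

import numpy as np

def disambiguate(sense_vectors, context_vectors):
    # pick the sense closest (by cosine similarity) to the mean of the context vectors
    c = np.mean(context_vectors, axis=0)
    sims = [c @ s / (np.linalg.norm(c) * np.linalg.norm(s)) for s in sense_vectors]
    return int(np.argmax(sims))

def privatize(sense_vector, sense_matrix, epsilon):
    d = sense_vector.shape[0]
    direction = np.random.normal(size=d)
    direction /= np.linalg.norm(direction)                 # uniform direction on the sphere
    magnitude = np.random.gamma(shape=d, scale=1.0 / epsilon)
    noisy = sense_vector + magnitude * direction           # multivariate Laplace noise
    distances = np.linalg.norm(sense_matrix - noisy, axis=1)
    return int(np.argmin(distances))                       # index of the nearest sense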
To demonstrate the effectiveness of leveraging a sense embedding in combination with a disambiguation step prior to the privatization, we privatized the ambiguous word 'bank' for a total of 500 queries and recorded its substitutions. In half of the queries, the ambiguous word is contained in a text belonging to a geographical context, and in the other half, the ambiguous word is contained in a text belonging to a financial context. The texts are 'to walk by a river bank at sunset' and 'to deposit money at a bank to earn interest'. We reduced the dimensionality of the substitute vectors into a two-dimensional space for visualization in Figure <ref>. We highlight words of the obtained substitutions. We observe that the substitution words returned by lexical privatization stem from both geographical and financial contexts. While substitutions blend between senses during lexical privatization, we discover distinct boundaries between substitute words belonging to contrasting contexts if the words are disambiguated before privatization.
§ EXPERIMENTS
§.§ Privacy Analysis
The privacy guarantees in metric differential privacy depend on the deployed metric and the geometric properties of the embedding space. Since retrofitting changes the geometric properties by populating the geometric space of the embedding with word senses that refer to the same word form, we need to recalibrate the plausible deniability <cit.>. We record the following statistics as proxies for the plausible deniability. We note that these proxy statistics have been used in previous studies to characterize the plausible deniability of multivariate mechanisms <cit.>.
∙ N_w = ℙ{ M(w) = w } measures the probability that a word is not substituted by the mechanism. This is approximated by counting the number of occurrences in which a word w is substituted by the same word after running the mechanism for 100 times.
∙ S_w = |ℙ{ M(w) = w^‘}| measures the effective support in terms of the number of distinct substitutions produced for a word from the mechanism. This is approximated by the cardinality of the set of words w^‘ after running the mechanism for 100 times.
Since the noise in the multivariate Laplace mechanism is scaled by 1/ϵ, we can make a connection between the proxy statistics and the privacy budget ϵ. A smaller ϵ corresponds to more stringent privacy guarantees by adding more noise to the word embedding. More noise leads to fewer unperturbed words (lower N_w) and more diverse outputs for each word (higher S_w). By contrast, a higher ϵ leads to less substitutions (higher N_w) and a narrow set of distinct words (lower S_w). From a distributional perspective, it follows that N_w (S_w) should be positively (negatively) skewed to afford reasonable privacy guarantees.
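A minimal sketch of how these proxies can be estimated, assuming the mechanism is exposed as a function that maps a word to a substitute word:

def plausible_deniability(word, mechanism, runs=100):
    outputs = [mechanism(word) for _ in range(runs)]
    n_w = outputs.count(word) / runs        # probability that the word is not substituted
    s_w = len(set(outputs))                 # effective support: number of distinct substitutes
    return n_w, s_w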
In Figures <ref> and <ref>, we present the averaged values of N_w and S_w over 100 independent queries from the corpus of <cit.> for a discrete set of privacy budgets ε = {1,5,10,15,25,50,100,250,500,∞}. While lower values of ε are desirable in terms of privacy, plausible deniability is assured unless N_w (S_w) exceeds (falls below) 0.5. The plots thus serve as visual guidance for comparing (and selecting) the privacy budget ε. The curve of the privacy proxies as a function of the privacy budget is shaped identically for word and sense embeddings, except that using a sense embedding stretches the allocatable privacy budget by an order of magnitude. We attribute this shape to the congestion of the embedding space with substitution candidates, even at low levels of noise.
For our utility experiments, we set the privacy budget for each mechanism so that the .90 quantile of words is plausibly deniable. To calculate the .90 quantile, we interpolated the scores for N_w (S_w) and selected the privacy budget ε so that N_w (S_w) does not exceed (fall below) 0.5. Plausible deniability for only a quantile of words was also assumed in a prior study by <cit.>.
§.§ Utility Analysis
To analyze the utility of privatization with context awareness, we use standard datasets for evaluating word similarity, namely <cit.>, <cit.>, and <cit.>. Common to all these datasets is that similarity ratings are given to pairs of words. While the first two provide pairs of words in isolation, the third provides a context for each word that triggers a specific meaning, making it particularly suitable for the evaluation of context-aware privatization. All experiments are conducted while ensuring plausible deniability for the 0.90 quantile of words.
We query each pair of words (w_i, w_j) 25 times with each privacy mechanism and record their similarity after privatization. We use the cosine similarity ŵ_i·ŵ_j/(‖ŵ_i‖‖ŵ_j‖) as our similarity measure. Once queried, we correlate the measured similarity against the similarity annotations. We present the results in Table <ref>. Without a context provided to disambiguate a word, privatization using sense embeddings reduces to privatization using word embeddings. This can be seen from the almost identical correlation coefficients on the two context-free datasets. The correlation of the sense embedding surpassing that of the word embedding on the context-dependent dataset indicates that the information provided by the disambiguation step helps in finding more appropriate substitutions.
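A minimal sketch of this evaluation loop is given below; `embed` and `mechanism` stand in for the embedding lookup and the privatization mechanism, both of which are assumptions rather than the paper's released code, and Spearman correlation is used here for concreteness.

import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def privatized_correlation(pairs, human_ratings, embed, mechanism, queries=25):
    # Average the post-privatization similarity of each pair over repeated
    # queries, then correlate against the human similarity annotations.
    sims = []
    for w_i, w_j in pairs:
        scores = [cosine(embed(mechanism(w_i)), embed(mechanism(w_j)))
                  for _ in range(queries)]
        sims.append(np.mean(scores))
    return spearmanr(sims, human_ratings).correlation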
We further benchmark our mechanism in combination with a model for downstream classification. We employ the words in context <cit.> dataset, which is composed of 5,428 text pairs for training and 638 text pairs for validation. Framed as a binary classification task, the goal of words in context is to identify whether two occurrences of a word, each provided with its own context, correspond to the same intended meaning. Each context is designed to trigger a specific meaning. Note that the dataset is balanced; hence, a context-insensitive embedding would perform similarly to a random baseline.
Without privacy guarantees, the model peaks at an accuracy score of 0.6887. The training on privatized data mimics the training without privatization. After privatizing the training data using word embeddings, the model scores 0.6006. Leveraging sense embeddings, we boost the accuracy to 0.6423, which narrows the gap in accuracy by 6.05%. All scores are averaged over three independent trials for each privatization mechanism.
To provide an explanation for this substantial improvement, we queried each record in the words in context dataset 25 times and recorded the cosine similarity between the word pairs after substitution. Since we are only interested in instances where a substitution occurs, we removed cases in which the similarity between substitutions equals one. We expect the similarity between ŵ_i and ŵ_j obtained from the privatization step to be higher when w_i and w_j belong to the same context and lower when different contexts are intended. Whether the words come from an identical context or from different contexts is derived directly from the annotations. For a transparent comparison, we measure the similarity using the representations of the corresponding substitutions. We present the results in Figure <ref>, separated by word and sense embedding.
The representations of substitutions obtained from a word embedding convey no clue about the intended context of a word: the average similarity is nearly identical in both cases (0.1860 vs. 0.2035). For substitutions obtained from the sense embedding, the average similarity is 0.3118 for words within the same context and 0.2272 for words that originate from different contexts. This distinguishability signals whether words are paired in identical or different contexts, which indicates an awareness of context during privatization.
We expect the awareness of word meaning to carry over to downstream tasks. To thoroughly evaluate whether context awareness during privatization translates into better performance on downstream tasks, we conduct experiments on a set of classification tasks in the text domain. We use the General Language Understanding Evaluation (GLUE) benchmark <cit.>. GLUE is a collection of diverse language understanding tasks. The benchmark involves classification of ordinary text and of text pairs for similarity and entailment. Apart from <cit.>, which requires a high level of syntactic reasoning, all other tasks are based on semantic reasoning.
We summarize the results on a subset of GLUE, obtained by fine-tuning a pre-trained model <cit.>, in Table <ref>. We report the scores once for word embeddings and once for sense embeddings. Using sense embeddings as opposed to word embeddings, the average performance increases from 0.5367 to 0.6065. This result confirms our expectation that context awareness during privatization translates into better performance on downstream tasks.
§ CONCLUSION
We redesigned the multivariate mechanism of metric differential privacy in the text domain to account for word meaning during privatization. We accomplished this by replacing the word embedding with a sense embedding and incorporating a sense disambiguation step prior to the noise injection.
Despite the congestion of the embedding space with senses that stem from the same word form, we experimentally demonstrated that our modification follows the privacy formalization of <cit.>. Once we recalibrated the privacy budget to ensure plausible deniability, we measured the capability of our mechanism to capture the word meaning. By calculating the similarity of pairs of words in a context that triggers the meaning of each word, we observe that the similarity score for substitutions is consistently higher when both words appear in the same context, and lower when both words appear in different contexts.
With the confirmation that our mechanism captures word meaning, we were interested in whether the benefit of contextual substitutions translates into superior performance in downstream classification tasks. The results on a set of benchmark datasets demonstrated a substantial boost in generalization performance for tasks that rely on semantic reasoning rather than syntactic reasoning.
Limitations. Our modification utilizes sense embeddings. Since the senses were not mapped to an external inventory, the senses cannot be interpreted. Apart from the lack of interpretability, sense embeddings are superseded by contextual embeddings derived from transformer models with sense awareness <cit.>. While sense embeddings and contextual embeddings are not mutually exclusive, it is necessary to alternate between them for the purpose of privatization and optimization.
§ ACKNOWLEDGMENT
We gratefully acknowledge that this research was supported in part by the German Federal Ministry of Education and Research through the Software Campus (ref. 01IS17045).
|
http://arxiv.org/abs/2306.06363v1
|
20230610065039
|
Probabilistic Visibility-Aware Trajectory Planning for Target Tracking in Cluttered Environments
|
[
"Han Gao",
"Pengying Wu",
"Yao Su",
"Kangjie Zhou",
"Ji Ma",
"Hangxin Liu",
"Chang Liu"
] |
cs.RO
|
[
"cs.RO"
] |
Probabilistic Visibility-Aware Trajectory Planning for Target Tracking in Cluttered Environments
Han Gao, Pengying Wu, Yao Su, Kangjie Zhou, Ji Ma, Hangxin Liu, Chang Liu
===========================================================================================
Target tracking with a mobile robot has numerous significant applications in both civilian and military domains. Practical challenges such as limited field-of-view, obstacle occlusion, and system uncertainty may all adversely affect tracking performance, yet few existing works can simultaneously tackle these limitations. To bridge the gap, we introduce the concept of belief-space probability of detection (BPOD) to measure the predictive visibility of the target under stochastic robot and target states. An Extended Kalman Filter variant incorporating BPOD is developed to predict target belief state under uncertain visibility within the planning horizon. Furthermore, we propose a computationally efficient algorithm to uniformly calculate both BPOD and the chance-constrained collision risk by utilizing a linearized signed distance function (SDF), and then design a two-stage strategy for lightweight calculation of SDF in sequential convex programming. Building upon these treatments, we develop a real-time, non-myopic trajectory planner for visibility-aware and safe target tracking in the presence of system uncertainty. The effectiveness of the proposed approach is verified by both simulations and real-world experiments.
Visibility-aware target tracking, active sensing, planning under uncertainty, trajectory planning.
§ INTRODUCTION
Target tracking using an autonomous vehicle has garnered widespread utilization in various important applications, such as vehicle tracking <cit.>, cinematography <cit.>, and underwater monitoring <cit.>. In recent years, visibility-aware motion planning has emerged as a key focus of target-tracking research, where the robot is tasked with generating collision-free trajectories to track a mobile target while ensuring continuous visibility, as illustrated in <ref>.
Trajectory optimization remains a mainstream methodology for visibility-aware planning. This approach formulates trajectory planning as a nonlinear optimization problem <cit.>, seeking to maintain the target's visibility by adjusting robot control inputs <cit.> or predictive trajectory parameters <cit.> using off-the-shelf optimization algorithms, such as Sequential Quadratic Programming <cit.>. Despite notable progress in this area, applying visibility-aware tracking techniques in cluttered environments continues to pose significant challenges due to adverse conditions such as limited field-of-view (FOV), obstacle occlusion, and system uncertainty.
A limited FOV restricts the range of perception and can result in target loss when the target moves outside the FOV. A mainstream approach addressing this issue involves employing geometric shapes, such as a conical shape <cit.> or the union of multiple spherical shapes <cit.> to model the FOV and using geometric costs, such as distance <cit.> and bearing angle <cit.> costs between the target and the sensor, to maintain the target within the FOV.
Although computationally efficient, geometric cost calculation relies on the particular FOV shape, hindering generalization to different sensor types.
Obstacle occlusion presents another challenge in target tracking. Such occlusion can be geometrically characterized, , when a target is occluded by an obstacle, the line segment connecting the target and the robot, namely the line of sight (LOS), intersects with the obstacle. Following this line of thought, previous works have modeled obstacles as spheres and applied geometric distance costs in either the world frame <cit.> or the camera frame <cit.> to avoid occlusion. However, using spheres to envelop obstacles with varying shapes may lead to a conservative occlusion avoidance strategy. To overcome this limitation, researchers have proposed constructing a Euclidean Signed Distance Field (ESDF) to compute the occlusion cost for arbitrary obstacle shapes <cit.>. Nevertheless, this method requires the offline computation of the ESDF to reduce the online computational burden hence rendering it ineffective in dynamic environments.
System uncertainty arising from imperfect system models and process noise can also degrade state estimation and trajectory planning in visibility-aware tracking tasks. Belief space planning <cit.> provides a systematic approach for handling such uncertainty by formulating a stochastic optimization problem to generate visibility maintenance trajectories, and the key step lies in the evaluation of predictive target visibility.
A commonly employed method for predicting visibility is to evaluate the probability of detection (POD) by integrating predictive target probability density function (PDF) over the anticipated unoccluded FOV area <cit.>.
However, calculation of this integration can be either computationally burdensome when conducted over continuous state spaces <cit.> or inaccurate due to discretization error when transformed into the sum of probability mass over a grid detectable region <cit.>.
Another approach is to directly determine target visibility based on the mean positions of the predicted target and the planned FOV <cit.>. Despite the computational efficiency of this approach, such maximum-a-posteriori (MAP) estimation only considers mean positions, neglecting predictive target uncertainty during robot trajectory planning.
To overcome those limitations, our work proposes a model predictive control (MPC)-based non-myopic trajectory planner for mobile target tracking in cluttered environments. The proposed framework systematically considers limited FOV, obstacle occlusion, and state uncertainty during tracking, which is less investigated in current literature. The main contributions can be summarized as follows:
* We propose the concept of belief-space probability of detection (BPOD) that depends on both robot and target belief states to function as a measure of predicted visibility. We then develop an Extended Kalman Filter (EKF) variant that incorporates BPOD into target state prediction to take into account stochastic visibility in the MPC predictive horizon, which overcomes the deficiency of MAP estimation.
* We present a unified representation for both the BPOD and the collision risk as the probabilities of stochastic SDF satisfaction (PoSSDF) via the use of signed distance functions (SDFs), and develop a paradigm to efficiently calculate the PoSSDFs by linearizing the SDF around the mean values of robot and target distributions.
* We propose a real-time trajectory planner for visibility-aware target tracking in cluttered environments based on sequential convex programming (SCP). In particular, we propose a two-stage strategy for calculating SDF in SCP to accelerate the planner, reaching a computational speed of 10Hz.
Simulations and real-world experiments demonstrate that our method enables the robot to track a moving target in cluttered environments at high success rates and visible rates.
The remainder of the article is organized as follows. <ref> formulates the target tracking problem. <ref> defines BPOD and proposes a variant of EKF covariance update formulation to tackle uncertain visibility in the predictive horizon. <ref> approximates BPOD and collision risk by SDF linearization and proposes an SCP framework with the two-stage strategy for online trajectory planning. Simulations and real-world experiments are presented in <ref>, respectively. <ref> presents conclusions and future outlooks.
§ FORMULATION OF TARGET TRACKING
§.§ Motion Models and Obstacle Modeling
This work considers a discrete-time unicycle model as the robot dynamics:
𝐳^r_k+1=𝐟^r(𝐳^r_k,𝐮^r_k)+𝐰^r, 𝐰^r∼𝒩(0,𝐑^r),
where
𝐟^r(𝐳^r_k,𝐮^r_k)=𝐳^r_k+[v_k^rcosθ_k^r, v_k^rsinθ_k^r, ω_k^r, a_k^r]^T ·Δ t,
with Δ t representing the sampling interval.
The robot state 𝐳^r_k=[𝐱^r_k^T, θ_k^r, v_k^r]^T∈ℝ^4
contains position 𝐱^r_k∈ℝ^2, orientation θ_k^r∈ (-π,π], and velocity v_k^r∈ℝ_+, where the subscript k means time step. The control input 𝐮^r_k=[ω_k^r,a_k^r]^T∈ℝ^2 is composed of
angular velocity ω_k^r∈ℝ and acceleration a_k^r∈ℝ.
The process noise 𝐰^r follows a Gaussian distribution with a zero mean and covariance matrix 𝐑^r.
We assume the robot's current state is known.
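For concreteness, the discrete-time unicycle update above can be written as a short routine (a sketch, not the authors' code); the time step and noise covariance passed in stand in for the paper's Δt and 𝐑^r.

import numpy as np

def f_r(z, u, dt=0.5):
    # z = [x, y, theta, v], u = [omega, a]; deterministic part of the dynamics.
    x, y, theta, v = z
    omega, a = u
    return z + np.array([v * np.cos(theta), v * np.sin(theta), omega, a]) * dt

def step_robot(z, u, R_r, rng, dt=0.5):
    # One noisy step: z_{k+1} = f^r(z_k, u_k) + w^r with w^r ~ N(0, R_r).
    return f_r(np.asarray(z, dtype=float), u, dt) + rng.multivariate_normal(np.zeros(4), R_r)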
The target motion model is defined as
𝐳^t_k+1=𝐟^t(𝐳^t_k,𝐮^t_k)+𝐰^t, 𝐰^t ∼𝒩(0,𝐑^t),
where 𝐳^t_k∈ℝ^d_t denotes the target state with dimension d_t, 𝐮^t_k represents the target control input. We define the elements in 𝐳^t_k that correspond to the target position as 𝐱^t_k∈ℝ^2. The target motion function 𝐟^t will be specified in <ref>, and the motion noise 𝐰^t follows a Gaussian distribution with a zero mean and covariance matrix 𝐑^t. Target state 𝐳^t_k remains unknown and needs to be estimated.
This work considers a known continuous map ℳ∈ℝ^2 with convex obstacles represented as point sets 𝒪_i⊂ℳ, i=1,2,⋯, N_o, where N_o denotes the number of obstacles. Note that the proposed approach can also address nonconvex obstacles by dividing them into multiple convex ones.
§.§ Sensor Model
The robot's sensor FOV is modeled as an annular sector:
𝒱_k={𝐱 | 𝐚^i_k·𝐱≤ b^i_k, r_1≤‖𝐱-𝐱^r_k‖≤ r_2, i=1,2},
where ‖·‖ denotes the Euclidean norm, 𝐚^1_k,𝐚^2_k∈ℝ^2, b^1_k,b^2_k∈ℝ form the two linear boundaries that limit the maximum detection angle ψ^s, and r_1,r_2∈ℝ_+ constrain the detection range. The FOV is illustrated in <ref>.
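As a sketch, membership in this annular-sector FOV can be checked with the range limits and a bearing test about the robot heading, which is equivalent to the two linear boundaries above for a symmetric sector; the default values below are the ones used later in the simulations.

import numpy as np

def in_fov(x, x_r, theta_r, r1=2.0, r2=10.0, psi_s=2 * np.pi / 3):
    d = np.asarray(x, dtype=float) - np.asarray(x_r, dtype=float)
    rho = np.linalg.norm(d)
    bearing = np.arctan2(d[1], d[0]) - theta_r
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return (r1 <= rho <= r2) and (abs(bearing) <= psi_s / 2)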
The sensor model described in this work considers intermittent measurements due to potential target loss caused by the limited FOV and occlusions, which is defined as
𝐲_k =
𝐟^s(𝐳_k^t,𝐳_k^r)+𝐰^s ,  if 𝐱^t_k∈𝒱_k^v ,
∅ ,  if 𝐱^t_k∉𝒱_k^v ,
where 𝐲_k∈ℝ^d_m is the measurement with dimension d_m and the measurement function
𝐟^s will be specified in <ref>.
The sensor noise 𝐰^s follows a Gaussian distribution with a zero mean and covariance matrix 𝐑^s. Here 𝒱^v_k is a subset of 𝒱_k that is not occluded by any obstacles[This work adopts a perfect sensor model for the purpose of simplicity, which assumes that a non-empty measurement is returned if and only if the target is inside the FOV and not occluded. However, it is worth noting that the proposed approach can be easily extended to imperfect sensor models that give false positive or false negative measurements.], as illustrated in <ref>.
§.§ Extended Kalman Filter with Intermittent Measurements
The robot needs to accurately estimate the target state from sensor measurements in order to make informed tracking decisions. To account for the system nonlinearity and possible target loss, we develop a variant of the EKF for target state estimation that takes intermittent measurements into account, inspired by <cit.>.
The filtering procedure can be summarized as follows.
Prediction. Predict the prior PDF using target kinematics,
𝐳̂^t_k|k-1 =𝐟^t(𝐳̂^t_k-1|k-1,𝐮^t_k-1),
𝐏_k|k-1 =𝐀^t_k-1𝐏_k-1|k-1(𝐀^t_k-1)^T+𝐑^t,
where 𝐳̂^t_k-1|k-1 and 𝐏_k-1|k-1 represent the mean and covariance of the estimate of 𝐳^t_k-1, and
𝐀^t_k-1 = ∇_𝐳^t𝐟^t(𝐳^t,𝐮^t_k-1)|_𝐳^t=𝐳̂^t_k-1|k-1 is the Jacobi matrix of the target kinematic model.
Update. Use current measurements to update target PDF,
𝐊_k =𝐏_k|k-1𝐂_k^T(𝐂_k𝐏_k|k-1𝐂^T_k+𝐑^s)^-1,
𝐳̂^t_k|k =𝐳̂^t_k|k-1+μ_k𝐊_k(𝐲_k-𝐟^s(𝐳̂_k|k-1^t,𝐳_k^r)),
𝐏_k|k =𝐏_k|k-1-μ _k𝐊_k𝐂_k𝐏_k|k-1,
where 𝐂_k = ∇_𝐳^t𝐟^s(𝐳^t,𝐳^r_k)|_𝐳^t=𝐳̂^t_k|k-1 is the Jacobi matrix of the measurement model, and μ_k is the detection variable (DV) that determines whether the target is detected, calculated as
μ_k =
1 ,  if 𝐲_k ≠ ∅ ,
0 ,  if 𝐲_k = ∅ .
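A compact sketch of one filtering step is shown below; f_t, A_t, f_s, C are assumed callables for the target motion model, its Jacobian, the measurement model, and its Jacobian, and a missing measurement is encoded as None (i.e. mu_k = 0).

import numpy as np

def ekf_step(z_hat, P, u_t, y, z_r, f_t, A_t, f_s, C, R_t, R_s):
    # Prediction with the target kinematics.
    z_pred = f_t(z_hat, u_t)
    A = A_t(z_hat, u_t)
    P_pred = A @ P @ A.T + R_t
    if y is None:                      # target not detected: mu_k = 0
        return z_pred, P_pred
    # Update with the current measurement: mu_k = 1.
    Cm = C(z_pred, z_r)
    S = Cm @ P_pred @ Cm.T + R_s
    K = P_pred @ Cm.T @ np.linalg.inv(S)
    z_new = z_pred + K @ (np.asarray(y) - f_s(z_pred, z_r))
    P_new = P_pred - K @ Cm @ P_pred
    return z_new, P_new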
§.§ MPC Formulation for Trajectory Planning
The trajectory planner is formulated as an MPC problem:
min_𝐮^r_k:k+N-1 J(𝐛^r_k+1:k+N, 𝐛^t_k+1:k+N)
s.t. 𝐛^t_k+i=𝐠^t(𝐛^t_k+i-1,𝐛^r_k+i-1),
𝐛^r_k+i=𝐠^r(𝐛^r_k+i-1,𝐮^r_k+i-1),
𝐛^r_k+i∈ℬ^r,𝐛_k+i^t ∈ℬ^t,𝐮^r_k+i-1∈𝒰,
𝐟^o(𝐛^r_k+i,𝒪_j)<0 ,
j=1,2,⋯,N_o,
i=1,⋯,N,
where N stands for MPC planning horizon. The target belief state 𝐛^t_k=[𝐳̂_k|k^t, 𝐏_k|k] encodes the probability distribution of the estimated target state.
Due to the existence of process noise, the robot state is stochastic in the predictive horizon. Therefore, we define the robot belief state 𝐛^r_k=[𝐳̂^r_k, 𝐐_k] to encode mean value 𝐳̂^r_k and covariance matrix 𝐐_k of robot state. The sets ℬ^r, ℬ^t and 𝒰 are the feasible sets of robot belief state, target belief state and robot control input, respectively. The functions 𝐠^r(·) and 𝐠^t(·) denote belief prediction procedures, which will be described in detail in the next section. The objective function <ref> and the collision avoidance constraint <ref> will be further elaborated in <ref>.
§ BELIEF-SPACE PROBABILITY OF DETECTION-BASED STATE PREDICTION
§.§ Probabilistic State Prediction in Predictive Horizon
We employ two different EKF-based approaches for the state prediction of the robot and the target. Note that the absence of measurements in the predictive horizon needs to be properly addressed, which differs from the estimation process in <ref>.
Robot state prediction. Like <ref>, we use the prediction step of EKF to predict the robot state in the predictive horizon. Specifically, <ref> is specified as
𝐳̂_k+i^r =𝐟^r(𝐳̂^r_k+i-1,𝐮_k+i-1^r),
𝐐_k+i =𝐀^r_k+i-1𝐐_k+i-1(𝐀^r_k+i-1)^T+𝐑^r,
where 𝐀^r_k+i-1=∇ _𝐳^r𝐟^r(𝐳^r,𝐮^r_k+i-1)|_𝐳^r=𝐳̂^r_k+i-1 is the Jacobi matrix of the robot kinematic model.
Target state prediction.
The target mean 𝐳̂_k|k^t is simply propagated using its kinematics due to unknown predictive measurements, and the covariance 𝐏_k|k-1 is predicted by using <ref>. However, the target visibility is uncertain in the predictive horizon, making it difficult to predict DV and update the covariance following <ref>.
To deal with this difficulty, we propose the concept of BPOD to denote the probability that the target is detected, defined as
γ_k =Pr(𝐲_k ≠ ∅ | 𝐛^r_k,𝐛^t_k|k-1),
where 𝐛^t_k|k-1=[𝐳̂_k|k-1^t, 𝐏_k|k-1]. Compared to traditional PODs that are either determined by the ground truth of robot and target positions <cit.> or only depend on the predictive target distribution under deterministic robot states <cit.>, BPOD is conditioned on the belief states of both the robot and the target, which provides a more precise measurement of visibility under stochastic robot and target states. Next,
we will incorporate the BPOD into EKF and constitute the stochastic counterpart of <ref> to tackle uncertain predictive visibility.
Recall that the DV is determined by the relative pose of the robot and the target, and thus can be reformulated as
μ_k =Pr(𝐲_k ≠ ∅ | 𝐳^r_k,𝐳^t_k).
Denote 𝔼_z|b(·) as the expectation operator with respect to 𝐳^t_k and 𝐳^r_k conditioned on 𝐛^t_k|k-1 and 𝐛^r_k, we can adopt the total probability rule (TPR) and derive γ_k=𝔼_z|b(μ_k). Combining <ref> and the EKF procedures, we can find that 𝐏_k|k in <ref> is conditioned on 𝐳^r_k and 𝐳^t_k.
Note that accurate estimates of the robot and target states are not available in the predictive horizon.
Therefore, the posterior covariance 𝐏̃_k|k in predictive horizon only depends on the predictive robot and target beliefs, and thus can be formulated as the conditional expectation of 𝐏_k|k, , 𝐏̃_k|k=𝔼_z|b(𝐏_k|k) according to TPR.
Following this idea, we propose a visibility-aware covariance update scheme, which is formulated as follows:
𝐏̃_k|k =𝔼_z|b(𝐏_k|k)
≈𝔼_z|b(𝐏_k|k-1-μ _k𝐊̃_k𝐂̃_k𝐏_k|k-1)
=𝐏_k|k-1-𝔼_z|b(μ_k)𝐊̃_k𝐂̃_k𝐏_k|k-1
=𝐏_k|k-1-γ_k𝐊̃_k𝐂̃_k𝐏_k|k-1,
where covariance 𝐏_k|k-1 and 𝐏_k|k are defined in <ref>, respectively, and 𝐂̃_k, 𝐊̃_k denote the approximate measurement Jacobi matrix and Kalman gain, respectively.
To simplify the computation, we approximate the measurement Jacobi matrix as being equal to the one determined by the mean value of robot belief 𝐳̂_k^r, , 𝐂̃_k=∇_𝐳^t𝐟^s(𝐳^t,𝐳̂^r_k)|_𝐳^t=𝐳̂^t_k|k-1, and calculate the Kalman gain 𝐊̃_k from <ref> by replacing 𝐂_k with 𝐂̃_k, which yields <ref>. <ref> is derived by the independency of 𝐊̃_k and 𝐂̃_k from 𝐳^t_k and 𝐳^r_k.
<ref> is a probabilistic extension of <ref> that allows us to update the target covariance in <ref> using only the predicted beliefs of the robot and target. Intuitively speaking, smaller BPOD means a lower probability of the target to be observed, which in turn corresponds to higher predictive uncertainty of the target (<ref>).
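The resulting update is a one-line modification of the standard EKF covariance update, sketched below; gamma is the BPOD computed from the predicted robot and target beliefs.

import numpy as np

def bpod_covariance_update(P_pred, C_tilde, R_s, gamma):
    # P_tilde = P_pred - gamma * K_tilde * C_tilde * P_pred, with gamma in [0, 1].
    S = C_tilde @ P_pred @ C_tilde.T + R_s
    K_tilde = P_pred @ C_tilde.T @ np.linalg.inv(S)
    return P_pred - gamma * K_tilde @ C_tilde @ P_pred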
To further justify our proposed new formulation for covariance update, we can prove that a larger BPOD indeed leads to reduced uncertainty.
Define the function
f(γ)= |𝐏_k|k-1-γ𝐊̃_k𝐂̃_k𝐏_k|k-1|,γ∈ [0, 1],
where |·| is the determinant. Then following inequality holds,
f(γ_1)≥ f(γ_2), ∀γ_1≤γ_2.
For convenience, we define
Ψ = 𝐊̃_k𝐂̃_k𝐏_k|k-1, Φ = 𝐏_k|k-1-γ_2 Ψ.
The following inequalities hold for the EKF given the definitions of Ψ and Φ in the Kalman filter <cit.>,
Ψ≽0, 𝐏_k|k-1-Ψ≽ 0,
where “≽" denotes semi-definiteness.
Using <ref> and the constraint that γ_2≤ 1, it is obvious that Φ is also semi-definite.
Then we can derive that
|𝐏_k|k-1-γ_1 Ψ| = |Φ+(γ_2-γ_1)Ψ|
≥|Φ|+|(γ_2-γ_1)Ψ|
≥|Φ| ,
where the first inequality is derived by the superadditivity property of determinant for semi-definite matrices <cit.>.
In information theory, entropy is widely used as a measure of uncertainty. Due to the Gaussian assumption of EKF, the entropy of target belief state can be formulated as follows <cit.>,
ℋ(𝐛^t_k)= d_t/2(ln (2π)+1)+1/2ln|𝐏_k|k|.
Combining <ref>, we can conclude that higher BPOD results in smaller target uncertainty in the sense of entropy, which motivates the use of the following objective functions.
§.§ Objective Function
There are two mainstream choices of the objective function for target tracking, , to minimize target uncertainty<cit.>, or to maximize the predictive visibility of the target<cit.>. We correspondingly formulate two objectives as
J_1=∑_i=1^Nℋ(𝐛^t_k+i),
J_2=-∑_i=1^Nγ_k+i,
where J_1 is the cumulative entropy and J_2 is the negation of cumulative BPOD.
With the help of <ref> and <ref>, we show that these two objectives are compatible, such that minimizing J_2 will simultaneously decrease J_1.
Both two objective functions are tested in our simulations, as will be presented in <ref>, and are shown to achieve similar performance in effectively keeping the target visible and decreasing estimation error.
§ SIGNED DISTANCE FUNCTION-BASED ONLINE TRAJECTORY PLANNING
§.§ Unified Expression of Visibility and Collision Risk
Limited FOV and occlusion are two major factors that influence visibility. Following this idea,
we define two corresponding probabilities to compute <ref>.
Target and FOV. We define the probability of the target being within the FOV area as:
γ^tf_k=Pr(𝐱^t_k∈𝒱_k|𝐛_k^r,𝐛_k|k-1^t).
LOS and Obstacle. Likewise, we express the probability that the target is not occluded by obstacle i as:
γ^lo_k,i=Pr(ℒ_k⋂𝒪_i=∅|𝐛_k^r,𝐛_k|k-1^t),
where ℒ_k is the line segment between 𝐱^t_k and 𝐱^r_k. Reasonably, we can assume that all variables in {γ^tf_k, γ^lo_k,1,⋯,γ^lo_k,N_o} are mutually independent. Then the BPOD can be factorized as
γ_k =γ^tf_k∏ _i=1^N_oγ^lo_k,i.
To avoid overly conservative trajectory while ensuring safety, we define chance-constrained<cit.> collision avoidance conditions that bound the collision risk below a user-defined threshold δ^s∈ℝ. The probability of collision between the robot and obstacle i at time k is defined as
γ^ro_k,i=Pr(𝐱^r_k∈𝒪_i|𝐛_k^r,𝐛_k|k-1^t).
<ref> is then specified as the chance constraints, ,
γ^ro_k+i,j<δ^s_k+i,j.
In order to solve the MPC problem <ref>, it is crucial to efficiently compute γ^tf, γ^lo, and γ^ro as specified in <ref>. A main contribution of this work is to represent and compute the BPOD and chance-constrained collision risk in a unified manner that significantly reduces the computational burden of solving the MPC problem, which will be described in the next subsection. For clarity, we define the set Γ_k={γ^tf_k, γ^lo_k,i, γ^ro_k,i, i=1,⋯,N_o}.
§.§ Approximating Γ_k with Linearized SDF
The SDF quantifies the distance between two shapes. Specifically, the SDF of two sets 𝒜,ℬ⊂ℝ^2 is formulated as the minimum translation distance required to separate or intersect each other, ,
sd(𝒜, ℬ) =
inf{‖𝐯‖ | (𝐯+𝒜) ⋂ ℬ ≠ ∅} ,  if 𝒜⋂ℬ = ∅ ,
-inf{‖𝐯‖ | (𝐯+𝒜) ⋂ ℬ = ∅} ,  if 𝒜⋂ℬ ≠ ∅ ,
where 𝐯∈ℝ^2 is the translation vector. Using SDF, we can equivalently reformulate <ref> as follows,
γ^tf_k = Pr(sd(𝐱^t_k,𝒱_k)≤0|𝐛_k^r,𝐛_k|k-1^t),
γ^lo_k,i = Pr(sd(ℒ_k,𝒪_i)≥ 0|𝐛_k^r,𝐛_k|k-1^t),
γ^ro_k,i = Pr(sd( 𝐱^r_k,𝒪_i)≤ 0|𝐛_k^r,𝐛_k|k-1^t).
Note that <ref> are all PoSSDFs because 𝐱^t_k, 𝐱^r_k, ℒ_k and 𝒱_k are all random variables given robot and target beliefs. These probabilities can be evaluated using Monte-Carlo simulation, but at the cost of heavy computational burden. Inspired by <cit.>, we propose a computationally efficient paradigm to evaluate PoSSDF using a linearized SDF expression, as described in <ref>.
The inputs of <ref> include two rigid bodies 𝒜_1,𝒜_2⊂ℝ^2, whose poses are determined by an arbitrary random Gaussian variable 𝐱 with dimension d_x. This algorithm also needs to precalculate the signed distance d̂∈ℝ between 𝒜_1(𝐱̂) and 𝒜_2(𝐱̂), along with the closest points from each set, 𝐩̂_1∈𝒜_1(𝐱̂),𝐩̂_2∈𝒜_2(𝐱̂), as illustrated in <ref>(a). This operation can be efficiently carried out by using one of the two popular algorithms: the Gilbert–Johnson–Keerthi (GJK) algorithm <cit.> for minimum distance, and the Expanding Polytope Algorithm (EPA) <cit.> for penetration depth. Then we obtain the SDF parameters including the contact normal 𝐧̂=sgn(d̂)·(𝐩̂_1-𝐩̂_2)/‖𝐩̂_1-𝐩̂_2‖, and the local coordinates of 𝐩̂_1 and 𝐩̂_2 relative to 𝒜_1(𝐱̂) and 𝒜_2(𝐱̂) respectively, denoted as p̂^L_1 and p̂^L_2.
<ref> provides an analytical approximation of the closest points 𝐩_1(𝐱) and 𝐩_2(𝐱) in the world frame.
Here we assume that the local coordinates of 𝐩_1(𝐱) and 𝐩_2(𝐱), relative to 𝒜_1(𝐱) and 𝒜_2(𝐱) respectively, are fixed and equal to p̂^L_1 and p̂^L_2.
The approximate closest point 𝐩_i (𝐱) in the world frame can then be obtained by using 𝐑_i(𝐱), the rotation matrix of 𝒜_i(𝐱), and 𝐩_i^c(𝐱), the origin of the local coordinate system attached to 𝒜_i(𝐱).
In <ref>, the SDF between 𝒜_1(𝐱) and 𝒜_2(𝐱)
is approximated by projecting the distance between 𝐩_1(𝐱) and 𝐩_2(𝐱) onto the contact normal 𝐧̂. <ref>(a) illustrates the SDF approximation in <ref>. In <ref>, we linearize the SDF around the Gaussian mean and analytically calculate the PoSSDF using the fact that the probability of a linear inequality for a Gaussian variable can be explicitly expressed as follows<cit.>,
Pr(𝐚^T𝐱≤ b)=1/2(1-erf((𝐚^T𝐱̂-b)/√(2𝐚^TΣ𝐚))),
where erf(·) represents the Gauss error function, and 𝐚∈ℝ^d_x,b∈ℝ form the linear constraint of 𝐱.
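In code, this amounts to a single evaluation of the error function; the sketch below assumes x ~ N(x_mean, Sigma) and a linear constraint (a, b) obtained from the linearized SDF.

import numpy as np
from scipy.special import erf

def prob_linear_leq(a, b, x_mean, Sigma):
    # Pr(a^T x <= b) for Gaussian x, cf. the closed form above.
    a = np.asarray(a, dtype=float)
    num = a @ np.asarray(x_mean, dtype=float) - b
    den = np.sqrt(2.0 * a @ np.asarray(Sigma, dtype=float) @ a)
    return 0.5 * (1.0 - erf(num / den))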
<ref> provides a general procedure to calculate PoSSDF. Denote 𝐱=[ 𝐳_k^r^T, 𝐳_k^t^T ]^T, we can make the following adjustments and apply <ref> to efficiently calculate Γ_k. First, feeding 𝒱_k into <ref> is error-prone for γ^tf calculation because 𝒱_k is nonconvex. Therefore, we convexify the FOV by excluding the area in Δ M_1M_2O[This SDF-based method enables our planner to adapt to any sensors whose FOV shape is convex or can be well approximated by a convex shape.] (see <ref> for detail). Second, to take the variable length of the LOS ℒ_k(𝐱) into account when calculating γ^lo, we precalculate the separative ratio λ̂ of ℒ_k(𝐱̂) before carrying out <ref>, which is formulated as
λ̂=‖𝐩̂_1-𝐱̂^t_k‖/‖𝐱̂^r_k-𝐱̂^t_k‖,
where 𝐩̂_1 is the nearest point on ℒ_k(𝐱̂). We then replace the SDF parameter p̂_1^L with λ̂ and reformulate the approximated nearest point (<ref>) on ℒ_k(𝐱) in the world frame as
𝐩_1(𝐱)=λ̂𝐱^r_k(𝐱)+(1-λ̂)𝐱^t_k(𝐱).
This adjustment for calculating γ^lo is illustrated in <ref>(b).
Note that <ref> is not violated even though we alternatively bound the collision probability approximated by <ref> below threshold δ^s. This is because in <ref>, the obstacle is expanded to a half space with the closest point on its boundary and 𝐧̂ being its normal vector. This process overestimates γ^ro thus ensuring <ref> to be strictly enforced.
§.§ Sequential Convex Optimization
After being specified in <ref>, problem <ref> can be solved using the SCP algorithm, where the nonlinear constraints are converted into l_1 penalty functions,
J_m=J+η(∑_i|g^n_i|^++∑_i|h^n_i|).
Here J denotes the objective function <ref>, and g_i^n, h_i^n represent the nonlinear inequality and equality constraints, respectively. Here η is the penalty coefficient and |x|^+=max(x,0).
The SCP consists of two loops:
The outer loop progressively increases η to drive the nonlinear constraint violation to zero, while in the inner loop, the trust region method is implemented to minimize the new objective function J_m. Interested readers can refer to <cit.> for details.
Directly adopting this framework turns out to be slow due to the frequent calls of GJK algorithms and EPA when calculating the gradient of <ref>. To overcome this limitation, we use a fixed SDF parameter set that encodes all the SDF parameters in the MPC horizon to calculate Γ_k+1:k+N in the gradient, and update the SDF parameter set after obtaining a new solution. This two-stage strategy significantly speeds up the planner with minor accuracy loss, and is further described in the SCP-based trajectory planning framework presented in <ref>.
The variable x_0 is initialized with zero control inputs at the first step, and is extrapolated from its value at the previous step for all subsequent steps. This is followed by precalculating an SDF parameter set 𝒞_0 at the mean value of the initial robot and target beliefs, which is propagated by x_0 (<ref>). Using a fixed 𝒞_0, we can fastly obtain the gradient (<ref>) and linearize the objective function (<ref>). After solving the linearized problem, the SDF parameter set is updated (<ref>) and utilized to update the trust region size (<ref>). The inner loop ends when the improvement is small (<ref>). The outer loop checks the terminal condition for the solution x^∗ (<ref>) and increases the penalty coefficient η (<ref>).
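The overall loop structure (a structural sketch, not the authors' implementation, with trust-region bookkeeping omitted) can be outlined as follows; solve_convexified, precompute_sdf_params, and constraint_violation are placeholder callables for the convex subproblem, the GJK/EPA precomputation, and the nonlinear constraint check.

import numpy as np

def scp_plan(x0, solve_convexified, precompute_sdf_params, constraint_violation,
             eta=10.0, max_outer=5, max_inner=20, tol=1e-3):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):                       # outer loop: raise the penalty
        for _ in range(max_inner):                   # inner loop: convex subproblems
            sdf_params = precompute_sdf_params(x)    # stage one: freeze SDF parameters
            x_new = solve_convexified(x, sdf_params, eta)
            improved = np.linalg.norm(x_new - x)
            x = x_new                                # stage two: parameters refreshed next pass
            if improved < tol:                       # small improvement: stop inner loop
                break
        if constraint_violation(x) < tol:            # nonlinear constraints satisfied
            return x
        eta *= 10.0                                  # tighten the l1 penalty
    return x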
§ SIMULATIONS
The proposed method is validated through multiple simulations in MATLAB using a desktop (12th Intel(R) i7 [email protected]), and the MOSEK solver is adopted to optimize the trajectory according to the SCP routine. We set the range of robot acceleration (m/s^2) as -4≤ a_k^r≤ 2, the angular velocity (rad/s) as -π/3 ≤ω_k^r≤π/3, and the speed limit as 4m/s. The robot dynamics follow <ref>, and motion noise is set as 𝐑^r=10^-3· diag(4,4,0.4,0.4). The robot's FOV has a minimal detection distance r_1=2m and maximal distance r_2=10m, while the sensing angle ψ^s is 2π/3. The predictive horizon is set as N=4 and our step interval is Δ t=0.5s. A 60m× 50m map with cluttered polygon obstacles is designed in our simulation tests (<ref>(a)). The initial state of the robot is designated as
[32,7,3/4π,0]^T.
To evaluate the performance of our method, several metrics are obtained from a tracking simulation with a total step T and target trajectory {𝐱̃^t_k,k=1,⋯,T}, including computing time t_cal, visible rate r_vis that denotes the percentage of time seeing the target, and the estimation error e_est that is defined as the mean absolute error (MAE) of the target's estimated position, ,
e_est=1/T∑_k=1^T‖𝐱̂^t_k-𝐱̃^t_k‖ .
Besides, we claim a tracking failure if the robot collides with an obstacle or loses sight of the target in continuous 15 steps.
Temporary target loss may occur due to the zero thresholds on the right-hand side of the SDF constraints in <ref>. We address this issue by relaxing those zero values to small numbers. To speed up the computation, the robot only considers obstacles near the LOS, noted as valid obstacles.
§.§ Case 1: Target with Linear Motion Model
In this part, we adopt a single integrator to describe the target motion model, which is formulated as
𝐳^t_k+1=𝐳^t_k+𝐮^t_k·Δ t,
where the target state consists of its position, , 𝐳^t_k=𝐱^t_k, and target control 𝐮^t_k∈ℝ^2 is known to the robot.
Covariance 𝐑^t is set as 0.01𝐈_2, where 𝐈_2 denotes the 2× 2 identity matrix. A range-bearing sensor model is adopted to acquire the distance and bearing angle of the target relative to the robot, which is formulated as:
𝐟^s_rb(𝐳_k^t,𝐳_k^r)=[‖𝐱_k^t-𝐱_k^r‖, ∠(𝐱_k^t-𝐱_k^r)-θ^r_k]^T.
The covariance 𝐑^s is set as diag(0.3,0.05). The target position is initialized at [28,9]^T. To make the task more challenging, we design a target trajectory with several sharp turns near obstacles (<ref>(b)). We use cumulative entropy J_1 in <ref> as our objective function.
<ref> shows the tracking process. The target uncertainty is initialized to be very large
(<ref>(a)).
As the tracking progresses, the robot plans a visibility-aware trajectory to track the target (<ref>(b)). Specifically, when the target takes a sharp turn near an obstacle, the robot moves away from the obstacle in advance to reduce the likelihood of target loss (<ref>(c) and (d)). Note that this roundabout path frequently appears in the tracking process (highlighted by yellow ellipses in <ref>(b)) and is a characteristic of visibility-aware trajectories.
<ref> displays the performance data.
The robot can maintain the visibility of the target throughout the simulation without any target loss. To see this, <ref>(a) shows the purple curve that indicates the target is inside our convexified FOV most of the time, and the blue curve that indicates LOS and obstacles never intersect. Due to uninterrupted measurements, target entropy rapidly converges and the estimation error is kept small (<ref>(b)).
We also record the BPOD value and evaluate the accuracy of the BPOD approximation against the Monte-Carlo simulation as the ground truth, as shown in <ref>(c). The subfigure shows that the BPOD is kept near 1 most of the time although we don't use the BPOD as the objective function directly, which verifies the compatibility of the two objectives <ref>. Results also demonstrate that we can calculate one BPOD within 1ms with an MAE of 2.5×10^-3.
This experiment is conducted 50 times to verify the robustness of our method under measurement noise and imperfect control. The results show that the robot can track the target with low mean estimation error e̅_est=0.336m, high mean visible rate r̅_vis=96.5% and high success rate r_suc=96% with the mean computing time t̅_cal= 0.23s/step.
§.§ Case 2: Target with Nonlinear Motion Model
The target takes a unicycle model<cit.> with the target state represented as 𝐳^t_k=[𝐱^t_k^T,θ^t_k]^T, where θ^t_k stands for target orientation. In addition, the target controls 𝐮^t_k=[v^t_k, ω^t_k]^T that encode the speed v^t_k∈ℝ and angular velocity ω^t_k∈ℝ remain unknown to the robot. Therefore, the robot estimates 𝐮^t_k by differentiating target displacement and rotation.
We should note that this coarse control estimation poses a great challenge for the tracking task. We choose the cumulated BPOD J_2 in <ref> as the objective function.
We adopt a camera sensor model to detect target distance, bearing angle and orientation:
𝐟^s_cam(𝐳_k^t,𝐳_k^r)=[‖𝐱_k^t-𝐱_k^r‖, ∠(𝐱_k^t-𝐱_k^r)-θ^r_k, θ^t_k-θ^r_k]^T,
and the measurement noise is set as 10^-2· diag(1, 0.5, 1).
To verify the generality of our algorithm, we set different target speed limits v_max^t and implement the MATLAB Navigation Toolbox to randomly generate 50 trajectories with 400 steps for each trajectory under each v_max^t. Target motion noise is set as 0.5𝐈_3 to simulate stochastic target movements.
Examples of tracking experiments are presented in <ref> and simulation results are shown in <ref>. Results demonstrate that the proposed planner consistently maintains high visibility and success rates in tracking a target with highly-stochastic trajectories, achieving a computing speed of around 10Hz. The increased computational time and decreased visible rate in the linear case result from the fact that the robot has to track a target with a highly challenging target trajectory, as illustrated in <ref>(b).
§ REAL-WORLD EXPERIMENTS
The proposed approach has been tested by using a Wheeltec ground robot equipped with an ORBBEC Femto W camera to keep track of a moving Turtlebot3, on which three interlinked Apriltags <cit.> are affixed.
The speed limit of the tracker and the target are 0.4m/s and 0.31m/s, respectively. The onboard processor (NVIDIA Jetson TX1) on the Wheeltec robot calculates the distance, bearing angle, and orientation of the detected Apriltag from the camera images and transmits the messages through ROS to a laptop (11th Intel(R) i7 [email protected]) that performs the planning algorithm. The parameters of the sensor model are calibrated as r_1=0.3m, r_2=1.5m, θ=100^∘, and 𝐑^s = diag(0.069,8.3× 10^-4,0.0055).
The covariance of robot motion noise is set as 𝐑^r=diag(0.08𝐈_2,0.055𝐈_2) to handle model discretization error and mechanical error. Other parameters are kept the same as <ref>. The ground truth of the robot and target poses are obtained by a Vicon motion-capture system.
We conduct three tracking experiments by remotely controlling the target to randomly traverse an indoor map with cluttered obstacles (<ref>(a)), and the duration of all three experiments exceed 2 minutes. For each experiment, we record the total planning steps T, estimation error e_est, calculation time t_cal and minimum distance to obstacles d_min, as shown in <ref>. Both <ref>(b) and <ref> show that our planner can generate safe trajectories for the robot to track a moving target while reducing target uncertainty in real-world scenarios.
§ CONCLUSION
We propose a target-tracking approach that systematically accounts for the limited FOV, obstacle occlusion, and state uncertainty. In particular, the concept of BPOD is proposed and incorporated into the EKF framework to predict target uncertainty in systems subject to measurement noise and imperfect motion models. We subsequently develop an SDF-based method to efficiently calculate the BPOD and collision risk to solve the trajectory optimization problem in real time. Both simulations and real-world experiments validate the effectiveness and efficiency of our approach.
Future work includes mainly two aspects. First, we will investigate the properties and applications of BPOD in non-Gaussian belief space planning. Second, we will extend the proposed method to unknown and dynamic environments.
*Acknowledgement
We thank Dr. Meng Wang at BIGAI for his help with photography.
|
http://arxiv.org/abs/2306.04146v1
|
20230607044558
|
Blowup analysis for a quasi-exact 1D model of 3D Euler and Navier-Stokes
|
[
"Thomas Y. Hou",
"Yixuan Wang"
] |
math.AP
|
[
"math.AP"
] |
Blowup analysis for a quasi-exact 1D model of 3D Euler and Navier-Stokes
Thomas Y. Hou, Yixuan Wang
===========================================================================================
We study the singularity formation of a quasi-exact 1D model proposed by Hou-Li in <cit.>. This model is based on an approximation of the axisymmetric Navier-Stokes equations in the r direction. The solution of the 1D model can be used to construct an exact solution of the original 3D Euler and Navier-Stokes equations if the initial angular velocity, angular vorticity, and angular stream function are linear in r. This model shares many intrinsic properties similar to those of the 3D Euler and Navier-Stokes equations. It captures the competition between advection and vortex stretching as in the 1D De Gregorio <cit.> model. We show that the inviscid model with weakened advection and smooth initial data or the original 1D model with Hölder continuous data develops a self-similar blowup. We also show that the viscous model with weakened advection and smooth initial data develops a finite time blowup.
To obtain sharp estimates for the nonlocal terms, we perform an exact computation for the low-frequency Fourier modes and extract damping in leading order estimates for the high-frequency modes using singularly weighted norms in the energy estimates. The analysis for the viscous case is more subtle since the viscous terms produce some instability if we just use singular weights. We establish the blowup analysis for the viscous model by carefully designing an energy norm that combines a singularly weighted energy norm and a sum of high-order Sobolev norms.
§ INTRODUCTION
Whether the 3D incompressible Euler and Navier-Stokes equations can develop a finite time singularity from smooth initial data is one of the most outstanding open questions in nonlinear partial differential equations. An essential difficulty is that the vortex stretching term has a quadratic nonlinearity in terms of vorticity. A simplified 1D model, the Constantin-Lax-Majda model (CLM model for short) <cit.>, was proposed to capture the effect of nonlocal vortex stretching. The CLM model can be solved explicitly and can develop a finite time singularity from smooth initial data. Later on, De Gregorio incorporated the advection term into the CLM model to study the competition between advection and vortex stretching <cit.>; see also <cit.> for a related study on the stabilizing effect of advection for the 3D Euler and Navier-Stokes equations. In <cit.>,
Okamoto, Sakajo, and Wunsch further introduced a parameter for the advection term to measure the relative strength of the advection in the De Gregorio model. These simplified 1D models have inspired many subsequent studies. Interested readers may consult the excellent surveys <cit.> and the references therein. Very recently, the authors in <cit.> established self-similar blowup for the whole family of gCLM models with a≤1 using a fixed-point argument.
On the other hand, these 1D scalar models are phenomenological in nature and cannot be used to recover the solution of the original 3D Euler equations.
In 2014, Luo-Hou <cit.> presented convincing numerical evidence that the 3D axisymmetric Euler equations with smooth initial data and boundary develop a potential finite time singularity. In the same papers <cit.>, the authors proposed the Hou-Luo model along the boundary r=1. This model captures many essential features observed in the Hou-Luo blowup scenario for the axisymmetric Euler equations. In <cit.>, the authors proved the blowup of the Hou-Luo model. In <cit.>, Chen-Hou-Huang proved the asymptotically self-similar blowup of the Hou-Luo model by extending the method of analysis for the finite time blowup of the De Gregorio model by the same authors in <cit.>. Inspired by Elgindi's recent breakthrough for finite time singularity of the axisymmetric Euler with no swirl and C^1,α velocity <cit.>, Chen and Hou proved the finite time blowup of the 2D Boussinesq and 3D Euler equations with C^1,α initial velocity and boundary <cit.>. Very recently, Chen and Hou proved stable and nearly self-similar blowup of the 2D Boussinesq and 3D Euler with smooth initial data and boundary using computer assistance <cit.>.
In 2008, Hou and Li <cit.> proposed a new 1D model for the 3D axisymmetric Euler and Navier-Stokes equations. This model approximates the 3D axisymmetric Euler and Navier-Stokes equations along the symmetry axis based on an approximation in the r direction. The solution of the 1D model can be used to construct an exact solution of the original 3D Navier-Stokes equations if the initial angular velocity, angular vorticity, and angular stream function are linear in r. This model shares many intrinsic properties similar to those of the 3D Navier-Stokes equations. Thus, it captures some essential nonlinear features of the 3D Euler and Navier-Stokes equations and is very different from the Hou-Luo model, which is defined along the boundary r=1, instead of the symmetry axis r=0. In the same paper <cit.>, the authors proved the global regularity of the Hou-Li model by deriving a new Lyapunov functional, which captures the exact cancellation between advection and vortex stretching.
The purpose of this paper is to study the singularity formation of a weak advection version of the Hou-Li model for smooth data. We introduce a parameter a to characterize the relative strength between advection and vortex stretching, just like the gCLM model. Both inviscid and viscous cases are considered. We also prove the finite time singularity formation of the original inviscid Hou-Li model (a=1 and ν=0) with C^α initial data. Inspired by the recent work of Chen <cit.> for the De Gregorio model, we consider the case of a < 1 and treat 1-a as a small parameter. For the C^α initial data, we consider the original Hou-Li model with a=1 and 1-α small.
By using the dynamic rescaling formulation and analyzing the stability of the linearized operator around an approximate steady state of the original Hou-Li model (a=1), we prove finite time self-similar blowup.
We follow a general strategy that we have established in our previous works <cit.>.
Establishing linear stability of the approximate steady state is the most crucial step in our blowup analysis. To obtain sharp estimates for the nonlocal terms, we carry out an exact computation for the low-frequency Fourier modes and extract damping in leading order estimates for the high-frequency modes using singularly weighted norms in the energy estimates. The blowup analysis for the viscous model is more subtle since the viscous terms do not provide damping and produce some bad terms if we use a singularly weighted norm. We establish the blowup analysis for the viscous model by carefully designing an energy norm that combines a singularly weighted energy norm and a sum of high-order Sobolev norms.
§.§ Problem setting
In <cit.>,
Hou-Li introduced the following reformulation of the axisymmetric Navier-Stokes equation:
u_1, t+u^r u_1, r+u^z u_1, z =2 u_1ψ_1, z+νΔ u_1 ,
ω_1, t+u^rω_1, r+u^zω_1, z =(u_1^2)_z+νΔω_1 ,
-[∂_r^2+(3 / r) ∂_r+∂_z^2] ψ_1 =ω_1 ,
where u_1 = u^θ/r, ω_1 = ω^θ, ψ_1 = ψ^θ/r, and u^θ, ω^θ, and ψ^θ are the angular velocity, angular vorticity, and angular stream function, respectively.
By the well-known Caffarelli-Kohn-Nirenberg partial regularity result <cit.>, the axisymmetric Navier-Stokes equations can develop a finite time singularity only along the symmetry axis r=0. To study the potential singularity or global regularity of the axisymmetric Navier-Stokes equations,
Hou-Li <cit.> proposed the following 1D model along the symmetry axis r=0:
u_1, t+2 ψ_1 u_1, z =2 ψ_1, z u_1 +ν u_1,zz ,
ω_1, t+2 ψ_1 ω_1, z =(u_1^2)_z+νω_1,zz ,
- ψ_1,zz =ω_1 .
Such a reduction is exact in the sense that if (ω_1, u_1, ψ_1) is an exact solution of the 1D model, we can obtain an exact solution of the 3D Navier-Stokes equations by using a constant extension in r. This corresponds to the case when the physical quantities u^θ=ru_1, ω^θ=rω_1 are linear in r.
We assume that the solutions are periodic in z on [0,2π].
We already know from the original Hou-Li paper that this system is well-posed for C^m initial data with m ≥ 1. In <cit.>, the authors also used the well-posedness of the Hou-Li model to construct globally smooth solutions to the 3D equations with large dynamic growth.
In two recent papers by the first author <cit.>, the author presented new numerical evidence that the 3D axisymmetric Euler and Navier-Stokes equations develop potential singular solutions at the origin. This new blowup scenario is very different from the Hou-Luo blowup scenario, which occurs on the boundary. In this computation, the author observed that the axial velocity u^z=2 ψ_1+r ψ_1, r near the maximal point of u_1 is significantly weaker than 2ψ_1. This is due to the fact that ψ_1 reaches the maximum at a position r=r_ψ that is smaller than the position r=r_u in which u_1 achieves its maximum, i.e. r_ψ < r_u. Therefore ψ_1, r is negative near the maximal position of u_1. Thus the axial velocity u^z is actually weaker than 2 ψ_1, which corresponds to u^z |_r=0. Thus, the original Hou-Li model along r=0 does not capture this subtle phenomenon, which is three-dimensional in nature. To gain some understanding of this potentially singular behavior, we introduce the following 1D weak advection model.
u_ t+2 aψ u_ z =2 u ψ_ z+ν u_zz ,
ω_ t+2 aψω_ z =(u^2)_z+νω_zz ,
- ψ_zz =ω ,
where a is a parameter that measures the relative strength of advection in the Hou-Li model.
For simplicity, we drop the subscript 1 in the above weak advection model.
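For readers who want to experiment with the model numerically, a minimal pseudo-spectral sketch on the 2π-periodic circle is given below (not the authors' code); the resolution, time step, parameter values, and the forward Euler stepping are illustrative choices only, and for ν>0 the explicit stepping requires a correspondingly small dt.

import numpy as np

N, dt, a, nu = 256, 1e-3, 0.9, 0.0
z = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers on [0, 2*pi)

def deriv(f):
    # spectral derivative of a 2*pi-periodic field
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

def solve_psi(omega):
    # invert -psi_zz = omega (zero-mean convention for the k = 0 mode)
    w_hat = np.fft.fft(omega)
    psi_hat = np.zeros_like(w_hat)
    psi_hat[k != 0] = w_hat[k != 0] / (k[k != 0] ** 2)
    return np.real(np.fft.ifft(psi_hat))

u, omega = np.sin(z), np.sin(z)           # start near the a = 1 steady state
for _ in range(1000):
    psi = solve_psi(omega)
    u = u + dt * (-2 * a * psi * deriv(u) + 2 * u * deriv(psi)
                  + nu * deriv(deriv(u)))
    omega = omega + dt * (-2 * a * psi * deriv(omega) + deriv(u ** 2)
                          + nu * deriv(deriv(omega)))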
The proposed model (<ref>) in the inviscid case ν=0 resembles the generalized Constantin-Lax-Majda model (gCLM) <cit.>
ω_t+auω_x=u_xω , u_x=Hω ,
where
H ω(x)=1/π p.v. ∫_ℝω(y)/x-y d y
is the Hilbert transform. They share similar structures of competition between advection and vortex stretching. The case when a=1 corresponds to the De Gregorio (DG) model.
We obtain an explicit steady state of the inviscid Hou-Li model (<ref>), (ω,u,ψ)=(sin x,sin x,sin x), similar to the steady state (ω,u)=(-sin x,sin x) of the DG model on S^1. Many of the results we present in this paper have analogues for the gCLM model; see in particular <cit.>.
§.§ Main results
We summarize the main results of the paper below and devote the subsequent sections to proving these results.
Our first result is on the finite-time blowup of the weak inviscid advection model; for its proof see Section <ref> and <ref>.
For the weak advection model (<ref>) in the inviscid case ν=0, there exists a constant δ>0 such that for a∈(1-δ,1), the weak advection model (<ref>) develops a finite time singularity for some C^∞ initial data. Moreover, there exists a self-similar profile (ω_∞,ω_∞,ω_∞) corresponding to a blowup that is neither expanding nor focusing. More precisely, the blowup solution to (<ref>) has the form
ω(x,t)=1/1+c_u,∞tω_∞ ,u(x,t)=1/1+c_u,∞tu_∞ ,ψ(x,t)=1/1+c_u,∞tψ_∞ ,
for some negative constant c_u,∞ with a blowup time given by T= -1/c_u,∞.
Such self-similar blowup that is neither expanding nor focusing is observed numerically for a∈[0.6,0.9]. See also a similar phenomenon observed for the gCLM model in <cit.> for a∈[0.68,0.95]. The blowup result for the gCLM model has been proved in <cit.> for a sufficiently close to 1. We remark that for a very close to 1, since c_u,∞=O(|a-1|), the blowup time becomes very large due to the very small coefficient 1-a in the vortex stretching term, which slightly dominates the advection term. It would be extremely difficult to compute such a singularity numerically since it takes an extremely long time for the singularity to develop. For a below a critical value a_0, i.e. a < a_0, we observe that the weak advection Hou-Li model develops a focusing singularity.
The second result is on the blowup of the original Hou-Li model with C^α initial data; for its proof see Section <ref>. In <cit.>, the authors made an important observation that advection can be weakened by C^α data. Intuitively, if u = O(x^α) near the origin, then since ψ is C^2 we have ψ u_x ≈ αψ_x u near the origin, so the vortex stretching term is stronger than the advection term if α < 1. See <cit.> for results on blowup of the DG model with Hölder continuous data.
Consider the Hou-Li model (<ref>) in the inviscid case ν=0. For any α<1, (<ref>) develops a finite time singularity for some C^α initial data. Moreover, there exists a C^α self-similar profile corresponding to a blowup that is neither expanding nor focusing.
The third result is on the finite-time blowup of the weak advection model with viscosity. The dynamic rescaling formulation implies that the viscous terms are asymptotically small. Thus, we can build on Theorem <ref> to establish Theorem <ref>. See also the ideas in <cit.>.
We will provide more details of the blowup analysis for the viscous case in Section <ref>.
Consider the weak advection model (<ref>) with viscosity. There exists a constant δ_1>0 such that for a∈(1-δ_1,1), the weak advection model (<ref>) develops a finite time singularity for some C^∞ initial data.
We use the framework of the dynamic rescaling formulation to establish the blowups. This formulation was first introduced by McLaughlin, Papanicolaou, and co-workers in their study of self-similar blowup of the nonlinear Schrödinger equation <cit.>. This formulation was later developed into an effective modulation technique, which has been applied to analyze the singularity formation for the nonlinear Schrödinger equation <cit.>, compressible Euler equations <cit.>, the nonlinear wave equation <cit.>, the nonlinear heat equation <cit.>, the generalized KdV equation <cit.>, and other dispersive problems. Recently this approach has been applied to prove singularity in various gCLM models <cit.>, the 1D Hou-Luo model <cit.>, and in Euler equations <cit.>. Our blowup analysis consists of several steps. First, we use the dynamic rescaling formulation to link a self-similar singularity to the (stable) steady state of the dynamic rescaling formulation. Secondly, we identify, either analytically or numerically, an approximate steady state to the dynamic rescaling formulation. Thirdly, we perform energy estimates using a singularly weighted norm to establish linear and nonlinear stability of the approximate steady state. Finally, we establish exponential convergence to the steady state in the rescaled time.
The crucial ingredient of the framework is the linear stability of the approximate steady state, and we usually adopt a singularly weighted L^2-based estimate. To avoid an overestimate in the linear stability analysis, we expand the perturbation in terms of the orthonormal basis with respect to the weight L^2 norm and reduce the linear stability estimate into an estimate of a quadratic form for the Fourier coefficients. We further extract the damping effect of the linearized operator by establishing a lower bound on the eigenvalues of an infinite-dimensional symmetric matrix. We prove the positive-definiteness of this quadratic form by performing an exact computation of the eigenvalues of a small number of Fourier modes with rigorous computer-assisted bounds, and treat the high-frequency Fourier modes as a small perturbation by using the asymptotic decay of the quadratic form in the high-frequency Fourier coefficients.
§.§ Organization of the paper and notations
In Section <ref>, we introduce our dynamic rescaling formulation and link the blowup of the physical equation to the steady state of the dynamic rescaling formulation. The linear stability of the approximate steady state is established.
In Section <ref>, we establish the nonlinear stability of the approximate steady state and the exponential convergence to the steady state, which proves Theorem <ref> and the blowup for the weak advection model. In Section <ref>, we prove Theorem <ref> and establish blowup for the original model with Hölder continuous data. In Section <ref>, we prove Theorem <ref> by designing a special energy norm to estimate the viscous terms. We provide the crucial linear damping estimates in the Appendix using computer assistance.
Throughout the article, we use (·,·) to denote the inner product on S^1: (f,g)=∫_-π^πfg. We use C to denote absolute constants, which may vary from line to line, and we use C(k) to denote some constant that may depend on specific parameters k we choose. We use A≲ B for positive B to denote that there exists an absolute constant C>0 such that A≤ CB.
§ DYNAMIC RESCALING FORMULATION AND LINEAR ESTIMATES
§.§ Dynamic rescaling formulation
We will establish the singularity formation of the weak advection model by using the dynamic rescaling formulation. We first consider the inviscid case with ν=0. For solutions to the system (<ref>), we introduce
ũ(x, τ)=C_u(τ) u( x, t(τ)) , ω̃(x, τ)=C_u(τ) ω( x, t(τ)) , ψ̃(x, τ)=C_u(τ) ψ(x, t(τ)) ,
where
C_u(τ)=exp(∫_0^τ c_u(s) d s) , t(τ)=∫_0^τ C_u(s) d s .
We can show that the rescaled variables solve the following dynamic rescaling equation
ũ_τ+2a ψ̃ũ_ x =2 ũψ̃_ x+c_u ũ ,
ω̃_τ+2a ψ̃ω̃_ x =(ũ^2)_x+c_uω̃ ,
-ψ̃_xx =ω̃ .
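To illustrate, the first equation follows from the chain rule and the definitions of C_u and t(τ): since x is not rescaled, ũ_x=C_u u_x and ψ̃_x=C_u ψ_x, so
ũ_τ=c_u C_u u+C_u u_t t_τ=c_uũ+C_u^2 u_t=c_uũ+C_u^2(-2aψ u_x+2uψ_x)=c_uũ-2aψ̃ũ_x+2ũψ̃_x ,
where each quadratic term picks up exactly two factors of C_u. The other two equations are derived in the same way.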
We do not rescale the spatial variable x, since we are interested in a blowup solution that is neither focusing nor expanding within a fixed period. The scaling factors for u, ω, ψ are thus the same.
When we establish a self-similar blowup, it suffices to show the dynamic stability of equation (<ref>) close to an approximate steady state with scaling parameter c_u<-ϵ<0 uniformly in time for a small constant ϵ; see also <cit.>. In fact, it's easy to see that if (ũ,ω̃,ψ̃,c_u) converges to a steady-state (u_∞,ω_∞,ψ_∞,c_u,∞) of (<ref>), then
ω(x,t)=1/(1+c_u,∞t) ω_∞ ,u(x,t)=1/(1+c_u,∞t) u_∞ ,ψ(x,t)=1/(1+c_u,∞t) ψ_∞ ,
is a self-similar solution of (<ref>).
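Indeed, if the rescaled solution equals the steady state exactly, then c_u≡ c_u,∞, so C_u(τ)=e^{c_u,∞τ} and t(τ)=(e^{c_u,∞τ}-1)/c_u,∞, hence C_u(τ)=1+c_u,∞t(τ); undoing the rescaling ω=ω̃/C_u (and similarly for u and ψ) gives exactly the formulas above. Since c_u,∞<0, the physical solution loses regularity at time T=-1/c_u,∞.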
From now on, we will primarily work in the dynamic rescaling formulation and use the notation ũ =u̅+û, where u̅ is the approximate steady state that we perturb around and û is the perturbation. The notations for the variables ω̃ and ψ̃ are similar.
§.§ Equations governing the perturbation
We use the steady state corresponding to the case of a=1 to construct an approximate steady state for (<ref>).
ω̅=sin x , u̅=sin x , ψ̅=sin x , c̅_u=2(a-1)ψ̅_x(0)=2(a-1) .
We consider odd perturbations û, ω̂, ψ̂. The parities are preserved in time by equation (<ref>). We impose the normalization condition c_u=2(a-1)ψ̂_x(0). This normalization ensures that u̅_x(0)+û_x(0) is conserved in time.
To simplify our presentation, we will drop the in the perturbation û and use u for û, ω for ω̂, ψ for ψ̂. We also use t for the rescaled time τ variable. Now the perturbations satisfy the following system
u_ t =-2a sin x u_x-2a cos x ψ+ 2 u cos x+2sin x ψ_x+c̅_u u+ c_uu̅+N_1+F_1 ,
ω_ t =-2a sin x ω_x-2a cos xψ +2 u cos x+2sin x u_x+c̅_u ω+ c_uω̅+N_2+F_2 ,
-ψ_xx =ω ,
where N_1, N_2 and F_1, F_2 are the nonlinear terms and error terms defined below:
N_1=(c_u+2ψ_x)u-2aψ u_x , N_2=c_uω+2uu_x-2aψω_x ,
F_1=(c̅_u+2ψ̅_x)u̅-2aψ̅u̅_x=2(a-1)sin x(1-cos x) , F_2=c̅_uω̅+2u̅u̅_x-2aψ̅ω̅_x=F_1 .
We further organize the system (<ref>) into the main linearized term and a smaller term containing a factor of a-1:
u_ t =L_1+(a-1)L'_1+N_1+F_1 ,
ω_ t =L_2+(a-1)L'_2+N_2+F_2 ,
-ψ_xx =ω .
where
L_1=-2 sin x u_x-2 cos x ψ+ 2 u cos x+2sin x ψ_x ,
L'_1=-2sin x u_x-2 cos x ψ+2u+2ψ_x(0)sin x ,
L_2=-2 sin x ω_x-2 cos xψ +2 u cosx+2sin x u_x ,
L'_2=-2 sin x ω_x-2 cos xψ+2ω+2ψ_x(0)sin x .
To show that the dynamic rescaling equation is stable and converges to a steady state, we will perform a weighted-L^2 estimate with a singular weight ρ and a weighted L^2 norm
ρ=1/(2π(1-cos x)) , f_ρ=(f^2,ρ)^1/2 .
For initial perturbation with u_x(0,0)=0, we have u_x(0,t)=0 for all time and
E^2(t)=1/2((u_x^2,ρ)+(ω^2,ρ)) ,
is well-defined. We will first show that the dominant parts L_1 and L_2 provide damping. The following lemma is crucial and motivates the choice of ρ.
We have the following identity
(sin x f_x,fρ)=1/2(f^2,ρ) ,
which can be verified directly by using integration by parts.
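Indeed, a direct computation gives (sin xρ)_x=(cos x(1-cos x)-sin^2x)/(2π(1-cos x)^2)=-ρ. Hence, for smooth periodic f with f(0)=0 (as holds for every function to which the lemma is applied, so that f^2ρ is integrable), integration by parts gives
(sin x f_x,fρ)=1/2∫_-π^π(f^2)_x sin xρ dx=-1/2(f^2,(sin xρ)_x)=1/2(f^2,ρ) .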
§.§ Stability of the main parts in the linearized equation
In order to extract the maximal amount of damping, we will expand the perturbed solution in the Fourier series and perform exact calculations. We first explore the orthonormal basis in L^2(ρ).
For the space of odd periodic functions on [0,2π], we describe a complete orthonormal basis {o^k} in L^2(ρ)
o^k=sin(kx)-sin((k-1)x) , k=1,2,⋯ .
Similarly, for the space of even periodic functions that lie in L^2(ρ), we describe a complete orthonormal basis {e^k}
e^k=cos(kx)-cos((k+1)x) , k=0,1,⋯ .
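The orthonormality claims can be checked with the half-angle identities o^k=2cos((2k-1)x/2)sin(x/2), e^k=2sin((2k+1)x/2)sin(x/2) and 1-cos x=2sin^2(x/2), which give
(o^k,o^jρ)=1/π∫_-π^πcos((2k-1)x/2)cos((2j-1)x/2) dx=δ_kj ,
and the analogous identity for {e^k} with cosines replaced by sines.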
We are now ready to establish linear stability.
The following energy estimate holds for the leading linearized operators
dE_1 ≔ ((L_1)_x,u_xρ)+(L_2,ωρ)≤ -0.16[(u_x,u_xρ)+(ω,ωρ)] .
Consider the expansion of ω, u_x, and u in the orthonormal basis
ω=∑_k≥ 1 a_k o^k , u=∑_k≥ 1 b_k o^k , u_x=∑_k≥1 c_k e^k .
Note the summation index for u_x satisfies k ≥ 1 since we can easily see that
(u_x,e^0ρ)=1/2π∫_0^2π u_x=0 .
We first express b_k in terms of c_k. If we insert the expression of the basis into u, take derivative and compare the coefficients with the expansion of u_x, we get
c_i=∑_k=1^i b_k-ib_i+1 .
Therefore we can solve
b_i+1=b_1-∑_k=1^i-1c_k/k(k+1)-c_i/i .
Moreover, we have the compatibility condition u_x(0)=∑_k ≥ 1 b_k=0. Therefore we can solve for b_1 and obtain
b_i=∑_k≥ ic_k/k(k+1)-c_i-1/i ,
where we define c_0=0.
Now we write out the terms explicitly using the expansions
dE_1 =2(-usin x-u_xxsin x,u_xρ)+2(sin xψ+sin xψ_xx,u_xρ)
+2(- sin x ω_x- cos xψ,ωρ) +2 (u cosx+sin x u_x,ωρ)
=-[(u_x,u_xρ)+(ω,ωρ)+(u,uρ)]+2[-( cos xψ,ωρ)+(sin xψ,u_xρ)+ (u cosx,ωρ)] .
Here we use the crucial Lemma <ref> to extract damping on the local terms and the Biot-Savart law -ψ_xx=ω to cancel the effect of the nonlocal terms sin xψ_xx in (L_1)_x and sin xu_x in L_2. Next, we calculate the remaining nonlocal terms explicitly.
-2( cos xψ,ωρ)=(2(1-cos x)ψ,ωρ)-2(ψ,ωρ)=1/π(ψ,ω)-2(ψ,ωρ) .
We express ω and ψ both in terms of orthonormal basis o^k corresponding to the weighted norm and the canonical basis sin(kx) corresponding to the (normalized by 1/π) L^2 norm.
ω=∑_k ≥ 1 (a_k-a_k+1) sin(kx) ,
where we denote a_0=0. Therefore
ψ=∑_k ≥ 1a_k-a_k+1/k^2sin(kx) .
Furthermore, we collect
ψ=∑_k≥ 1∑_j≥ ka_j-a_j+1/j^2 o^k .
Therefore we can compute explicitly that
-2( cos xψ,ωρ)=∑_k ≥ 1(a_k-a_k+1)^2/k^2-2∑_k≥ 1 a_k∑_j≥ ka_j-a_j+1/j^2 .
We use integration by parts similar to Lemma <ref> to obtain
2[(sin xψ,u_xρ)+ (u cosx,ωρ)]=2(cosxω+ψ-sin xψ_x,uρ) ≕ 2(T_u,uρ) .
We further have
T_u =∑_k ≥ 1 (a_k-a_k+1)[k+1/2ksin((k-1)x)+k-1/2ksin((k+1)x)+1/k^2sin(kx)]
=∑_k ≥ 1sin(kx) [k+2/2(k+1)(a_k+1-a_k+2)+a_k-a_k+1/k^2+k-2/2(k-1)(a_k-1-a_k)]
=∑_k≥ 1[k-2/2(k-1)a_k-1+(1/2k(k-1)+1/k^2)a_k+(k+1/2k+1/(k+1)^2-1/k^2)a_k+1
+∑_j>k+1(1/j^2-1/(j-1)^2)a_j]o^k ,
where the terms involving 1/k-1 in the summand is regarded as 0 for k=1. Therefore we collect explicitly that
dE_1 =∑_k≥ 1{ -(a_k^2+b_k^2+c_k^2)+ (a_k-a_k+1)^2/k^2-2 a_k∑_j≥ ka_j-a_j+1/j^2+2b_k[k-2/2(k-1)a_k-1
+(1/2k(k-1)+1/k^2)a_k+(k+1/2k+1/(k+1)^2-1/k^2)a_k+1+∑_j>k+1(1/j^2-1/(j-1)^2)a_j]} .
Substituting (<ref>) into the above and we can simplify
dE_1 =-∑_k ≥ 1{ a_k^2(1+1/k^2-1/(k-1)^2)+c_k^2(1+1/k(k+1))+ 2a_k a_k+11/(k+1)^2
+2 a_k ∑_j>k+1a_j(1/j^2-1/(j-1)^2)+2a_kc_k1+2k-k^2/2k^2(k+1)+2a_k+1c_kk^2-k-1/2k^2(k+1)^2
-2a_k+2c_kk+2/2(k+1)^2+∑_j>k2a_k c_j1/j(j+1)} .
After this explicit computation, we notice that the damping estimate in Proposition <ref> can be cast into an estimate of a quadratic form; see (<ref>), which is equivalent to a lower bound on the eigenvalues of an infinite-dimensional symmetric matrix.
F(a,c) ≔ ∑_k≥1{ a_k^2(0.84+1/k^2-1/(k-1)^2)+c_k^2(0.84+1/k(k+1))+ 2a_k a_k+11/(k+1)^2
+2 a_k ∑_j>k+1a_j(1/j^2-1/(j-1)^2)+2a_kc_k1+2k-k^2/2k^2(k+1)+2a_k+1c_kk^2-k-1/2k^2(k+1)^2
-2a_k+2c_kk+2/2(k+1)^2+∑_j>k2a_k c_j1/j(j+1)}≥0 .
We notice that the entries decay fast. Therefore the strategy to prove (<ref>) is to combine a computer-assisted estimate of the eigenvalues of its finite truncation with a decay estimate of the remaining part. We will defer the proof of (<ref>) to the Appendix, see Lemma <ref> and the proof. Thereby we conclude the linear estimate.
§ NONLINEAR ESTIMATES AND CONVERGENCE TO SELF-SIMILAR PROFILE
§.§ Nonlinear stability
By Proposition <ref> and equation (<ref>), we have
1/2d/dtE^2(t) ≤ -0.16E^2(t)+(a-1)[((L'_1)_x,u_xρ)+(L'_2,ωρ)]
+((N_1)_x,u_xρ)+(N_2,ωρ)+((F_1)_x,u_xρ)+(F_2,ωρ) .
We first provide some estimates about the weighted L^2 norm and L^∞ norm of some lower-order terms.
The following estimates hold
* Weighted L^2 norm:
ψ_ρ,ψ_x-ψ_x(0)_ρ,u_ρ≲ E .
* L^∞ norm:
ψ_x_∞,ψ/sin x_∞,u_∞≲ E .
For (1), we use the setting of the Fourier series approach as in the proof of Proposition <ref> and pick up the notation there.
ψ_x-ψ_x(0)_ρ^2 =∑_k ≥ 1 (∑_j≥ ka_j-a_j+1/j)^2≤∑_k ≥ 1∑_j≥ ka^2_j(1/k^2+∑_j>k(1/j-1/j+1)^2)
≲∑_j ≥ 1 a_j^2∑_k ≥ 11/k^2≲ω_ρ^2 ,
where we have used the Cauchy-Schwarz inequality. We can similarly estimate ψ_ρ. Then we get
u_ρ^2=∑_j ≥ 1 b_j^2=∑_j ≥ 11/j(j+1)c_j^2≤∑_j ≥ 1 c_j^2=u_x_ρ^2 .
For (2), we first compute using Fourier series similar to (1)
ψ_x(0)=∑_j ≥ 1a_j-a_j+1/j≲ω_ρ .
Next, we estimate
ψ_x-ψ_x(0)_∞≲ψ_xx_1≲ω_ρ .
Similarly, we obtain the estimate for u_∞.
For ψ/sin x_∞, since ψ is odd and periodic, we have ψ(π)=ψ(0)=0 and only need to estimate this norm in [0,π]. Since sin x≥2/πmin{x,π-x} in [0,π],
we have ψ/sin x_∞≲ψ_x_∞ by Lagrange's mean value theorem.
Combined with the damping in Lemma <ref>, we further obtain
((L'_1)_x,u_xρ) =2(-cos xu_x-sin xu_xx+sin xψ-cos xψ_x+u_x+ψ_x(0)cos x,u_xρ)
≲ E^2+(ψ_ρ+ψ_x-ψ_x(0)_ρ)E≲ E^2 ,
(L'_2,ωρ)=2(-sin xω_x-cos xψ+ω+ψ_x(0)sin x,ωρ)≲ E^2+(ψ_ρ+|ψ_x(0)|)E≲ E^2 .
((F_1)_x,u_xρ)≲ |a-1|E , (F_2,ωρ)≲ |a-1|E ,
((N_1)_x,u_xρ) =2(-ω u+(1-a)(ψ_x-ψ_x(0))u_x-aψ u_xx,u_xρ)
≲ E^2(ψ_x_∞+u_∞)+|(u_x^2,(ψρ)_x)|
≲ E^2(ψ_x_∞+u_∞+ψ/sin x_∞)≲ E^3 ,
(N_2,ωρ)=2((a-1)ψ_x(0)ω+uu_x-aψω_x,ωρ)≲ E^2(ψ_x_∞+u_∞+ψ/sin x_∞)≲ E^3 .
Therefore we have
d/dtE(t)≤ -(0.16-C|a-1|)E+C|a-1|+CE^2 .
We can perform the standard bootstrap argument to show that there exist absolute constants δ,C>0 such that if |a-1|<δ and E(0)<C|a-1|, then we have E(t)<C|a-1| for all time. In particular c_u=O(|a-1|^2) and c_u+c̅_u<0. Therefore we prove that the solution blows up in finite time.
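For the reader's convenience, one concrete instantiation of the bootstrap is as follows. Let C_0 denote the constant C in the differential inequality above and set C_1=20C_0. Choose δ so small that C_0δ(1+2C_1)≤ 0.06 and suppose E(0)<C_1|a-1|. On any time interval where E<2C_1|a-1|, the inequality becomes d/dtE≤ -0.1E+C_0|a-1|, so E(t)≤max(E(0),10C_0|a-1|)<C_1|a-1|, which strictly improves the bootstrap assumption; a continuity argument then gives E(t)<C_1|a-1| for all t≥ 0. In particular |ψ_x(0)|≲ E≲ |a-1|, so c_u=2(a-1)ψ_x(0)=O(|a-1|^2) and c_u+c̅_u=2(a-1)(1+ψ_x(0))<a-1<0 once |a-1| is small.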
§.§ Estimates using a higher-order Sobolev norm
In order to establish convergence of the solution to a steady state, we need to estimate weighted norms of u_t and ω_t. As was pointed out in <cit.>, we need to provide stability estimates of the equation in higher-order Sobolev norms to close the estimate. In particular, we choose
K^2(t)=D_xu_x_ρ^2+D_xω_ρ^2 ,
where we denote D_x to be the operator sin x ∂_x.
This choice of weighted norms is again motivated by the local linear damping estimates. We recall that the leading order terms of the local terms in the linearized operators (L_1)_x, L_2 are -2D_xu_x and -2D_xw, and we have 2(D_x f,fρ)=(f,fρ). Therefore in this new weighted norm, the combined terms would again give damping
(-2D_x D_x u_x,D_x u_xρ)+(-2D_xD_x w,D_x wρ)=-K^2 .
We now obtain
1/2d/dtK^2(t) ≤ (D_x(L_1)_x,D_xu_xρ)+(D_xL_2,D_xωρ)+(D_x(N_1)_x,D_xu_xρ)
+(D_xN_2,D_xωρ)+(a-1)[(D_x(L'_1)_x,D_xu_xρ)+(D_xL'_2,D_xωρ)]
+(D_x(F_1)_x,D_xu_xρ)+(D_xF_2,D_xωρ) .
We will denote the terms that have ·_ρ norm bounded by E as l.o.t.. The bound
D_x[fg]_ρ≲ (f_x_2+f_2) for g=1,cos x,sin x
combined with the oddness of ψ and u would imply that D_x[fg] is l.o.t. for f=ψ,ψ_x,u and g=sin x,cos x,1.
Therefore combined with (<ref>), we have the following estimate for the main term
dK_1 ≔ (D_x(L_1)_x,D_xu_xρ)+(D_xL_2,D_xωρ) ,
dK_1 ≤ -K^2-2(D_x[sin x ω],D_xu_xρ)+2(D_xD_xu,D_xωρ)+CEK
=-K^2-(sin 2xω,D_xu_xρ)+(sin 2x u_x,D_xωρ)+CEK
≤-K^2+CEK ,
where we have again used a crucial cancellation in the equality, similar to that of dE_1 in Subsection <ref>. We estimate the rest of the terms similar to the nonlinear stability estimates in (<ref>).
(D_x(L'_1)_x,D_xu_xρ)+(D_xL'_2,D_xωρ)≲ K^2+EK ,
(D_x(F_1)_x,D_xu_xρ)+(D_xF_2,D_xωρ)≲|a-1|K ,
(D_x(N_1)_x,D_xu_xρ) ≲ EK^2+|(-2wD_xu+2(1-a)D_xψ_x u_x-2aψ D_xu_xx,D_xu_xρ)|
≲ EK^2+sin xu_x_∞EK+|(u^2_xx,(ψsin^2xρ)_x)|
≲ EK(K+E)+K^2ψ/sin x_∞≲ EK(K+E) ,
(D_xN_2,D_xωρ)≲ EK^2+|(2D_xu u_x-2aψ D_xω_x,D_xωρ)|≲ EK(K+E) ,
where we have used integration by parts and the estimate
sin xu_x_∞≲sin xu_xx_1+cos xu_x_1≲D_xu_x_ρ+u_x_ρ≲ E+K .
We can finally prove that
d/dtK(t)≤ -(1-C|a-1|)K+CE+C|a-1|+CE(E+K) .
Therefore combined with (<ref>), we can find an absolute constant μ>1 such that
d/dt(K+μ E)≤-(0.1-C|a-1|)(K+μ E)+C|a-1|+C(K+μ E)^2 .
By using a standard bootstrap argument, there exist absolute constants δ_0<δ,C>0, if |a-1|<δ_0 and K(0)+μ E(0)<C|a-1|, then K(t)+μ E(t)<C|a-1| for all time.
§.§ Convergence to the steady state
We estimate the weighted norm of ω_t and u_t,x and then use the standard convergence in time argument as in <cit.>.
J^2(t)=1/2((u_t,x^2,ρ)+(ω_t^2,ρ)) .
Applying the estimates of d/dtE to d/dtJ, we can get damping for the linear parts, and the small error terms corresponding to ω̅ and u̅ vanishes. Therefore we yield
1/2d/dtJ^2≤-(0.16-C|a-1|)J^2+((N_1)_t,x,u_t,xρ)+((N_2)_t,ω_tρ) .
Using estimates similar to Lemma <ref> and nonlinear estimates in (<ref>), we get
((N_1)_t,x,u_t,xρ)≲ EJ^2+(ψ_t u_xx,u_t,xρ)≲ EJ^2+J^2u_xxsin x_ρ≤(E+K)J^2 .
((N_2)_t,ω_tρ)≲ EJ^2+(ψ_t ω_x,ω_tρ)≲ EJ^2+J^2ω_xsin x_ρ≤(E+K)J^2 .
Combined with the a priori estimates on E+K, we can establish exponential convergence of J to zero. Then we can use the same argument as in <cit.> to establish exponential convergence to the steady state and conclude the proof of Theorem <ref>.
§ BLOWUP OF THE ORIGINAL MODEL WITH HÖLDER CONTINUOUS DATA
In this section, we follow the strategy of the linear and nonlinear estimates of the weak advection model in Sections <ref> and <ref>, and establish blowup of C^α data for the original model (<ref>) with a=1. Here α<1 is close to 1. Many of the ideas are drawn from the paper <cit.> and we only outline the most important steps. Intuitively, C^α regularity of the profile weakens the advection and therefore contributes to a blowup in finite time.
§.§ Dynamic rescaling formulation around the approximate steady state
Before we start, we will solve the Biot-Savart law of recovering ψ from ω with odd symmetry.
Suppose that ω, ψ are odd and periodic on [-π,π], with -ψ_xx=ω. Then ψ_x(0)=-1/(2π)∫_0^2πyω(y)dy, and we obtain
ψ=∫_0^x(y-x)ω(y)dy+xψ_x(0) .
The proof of this lemma is straightforward by integration in x.
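Indeed, integrating -ψ_xx=ω once gives ψ_x(x)=ψ_x(0)-∫_0^xω(y)dy, and integrating again and exchanging the order of integration gives the formula for ψ. The value of ψ_x(0) is then forced by periodicity: since ω is odd we have ∫_0^2πω=0, and ψ(2π)=ψ(0)=0 yields ∫_0^2π yω(y)dy+2πψ_x(0)=0.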
We construct the following approximate steady state with C^α regularity for (<ref>).
ω̅_α=sgn(x)|sin x|^α , u̅_α=sgn(x)|sin x|^{(1+α)/2} , c̅_u, α=(α-1)ψ̅_α,x(0) ,
where ψ̅_α is related to ω̅_α via (<ref>).
We consider odd perturbations u, ω, ψ. The odd symmetry of the solution is preserved in time by equation (<ref>). We will use the normalization condition as c_u=(α-1)ψ_x(0), which ensures that u vanishes to a higher order at all times so that we can use the same singular weight ρ. In fact we compute using (<ref>) and the normalization conditions that
lim_x→ 0(u̅_α(x)+u(x,t))/x^{(1+α)/2}=lim_x→ 0(u̅_α(x)+u(x,0))/x^{(1+α)/2} .
Therefore if we make the initial perturbation u(x,0) vanish to order 1+α around the origin, u(x,t) will also vanish to order 1+α for all time.
Now similar to what we obtain in (<ref>), the perturbations satisfy the following system
u_ t =L_1+R_1,α+N_1,α+F_1,α ,
ω_ t =L_2+R_2,α+N_2,α+F_2,α ,
-ψ_xx =ω ,
where we extract the same leading order linear parts as in (<ref>), while the nonlinear and error terms change and the residual error terms R_1,α, R_2,α model the discrepancy between our approximate profile with C^α regularity and the steady state profile ω̅=u̅=ψ̅=sin x. Define ψ_res=ψ̅_α-ψ̅,ω_res=ω̅_α-ω̅,u_res=u̅_α-u̅. We can express R_i,α and F_i,α as follows
R_1,α=-2 ψ_res u_x-2 u_res,xψ+ 2 u ψ_res,x+2u_resψ_x+c̅_u,α u+ c_uu̅_α .
R_2,α=-2ψ_resω_x-2ω_res,xψ +2 u u_res,x+2u_res u_x+c̅_u,αω+c_uω̅_α ,
N_1,α=(c_u+2ψ_x)u-2ψ u_x , N_2,α=c_uω+2uu_x-2ψω_x ,
F_1,α=(c̅_u,α+2ψ̅_α,x)u̅_α-2ψ̅_αu̅_α,x , F_2,α=c̅_u,αω̅_α+2u̅_αu̅_α,x-2ψ̅_αω̅_α,x .
Before we perform our energy estimates, we will obtain some basic estimates of the residues.
The following estimates hold for κ=7/8<9/10<α<1.
* Pointwise estimates of the residues:
|∂_x^iω_res|+|∂_x^i u_res|≲ |α-1||sin x|^κ-i , i=0,1,2,3,
ψ_res_∞+ψ_res,x_∞≲ |α-1| .
* Refined estimates using cancellations:
|α-1/2u̅_α,x-sin xu_res,xx|+|sin x∂ x[α-1/2u̅_α,x-sin xu_res,xx]|≲ |α-1|^1/2|x||sin x|^α-1 .
The first part of (1) and (2) can be proved by using direct calculations, and we refer to Lemma 6.1 in <cit.> for details. There seems to be a typo in (6.11) in <cit.> where ω̅_α should have been ω̅_α,x.
Furthermore, by the expression in Lemma <ref> we get the second part of (1).
Similar to the weak advection case, we define the energy E^2(t)=1/2((u_x^2,ρ)+(ω^2,ρ)) . We will estimate the growth of E(t).
The leading order linear estimates L_1, L_2 can be obtained in Proposition <ref>.
The estimates for the nonlinear terms N_1,α, N_2,α follow almost exactly the same as the weak advection case by using Lemma <ref>.
§.§ Nonlinear stability
By the computation in the previous subsection, we get
1/2d/dtE^2(t)≤ -0.16E^2+CE^3+((R_1,α)_x,u_xρ)+(R_2,α,ωρ)+((F_1,α)_x,u_xρ)+(F_2,α,ωρ) .
Further, we get
((R_1,α)_x,u_xρ) ≤ (-2ψ_resu_xx,u_xρ)+(c_uu̅_α,x-2u_res,xxψ,u_xρ)
+(ω_res_∞+u_res_∞+C|α-1|)E^2 .
For the first term, we can use integration by parts and Lemma <ref> to obtain
CE^2(ψ_res,x_∞+ψ_res/sin x_∞)≲ψ_res,x_∞E^2 .
For the second term, we compute
c_uu̅_α,x-2u_res,xxψ=ψ_x(0)[(α-1)u̅_α,x-2sin x u_res,xx]+2u_res,xx(sin xψ_x(0)-ψ) .
Thus by Lemmas <ref> and <ref>, we have
c_uu̅_α,x-2u_res,xxψ_ρ≲ E|α-1|^1/2 +|α-1|sin xψ_x(0)-ψ/|sin x|x_∞ .
Finally, for |x|≥π/2, we have
|sin xψ_x(0)-ψ/|sin x|x|≲ |ψ_x(0)|+ψ/sin x_∞≲ E .
For |x|< π/2, we use Lemma <ref> and |sin x|≥ 2/π |x| to obtain
|sin xψ_x(0)-ψ/|sin x|x|≲ |sin xψ_x(0)-ψ/x^2|≤ |(sin x-x)ψ_x(0)/x^2|+|∫_0^x(y-x)ω (y)/x^2|≲ E ,
where we have used the bound ω/x_1≲ω/x_2≲ω_ρ in the last inequality.
Thus we yield
((R_1,α)_x,u_xρ)≲ |α-1|^1/2E^2 .
Similarly we get
(R_2,α,ωρ)≤ (-2ψ/sin xω_res,xsin x,ωρ)+(c_uω̅_α,ωρ)+(2uu_res,x,ωρ)+C|α-1|E^2 .
For the first two terms, we can estimate them using Lemmas <ref> and <ref>. For the third term, we use Hardy's inequality to derive
uu_res,x_ρ/|α-1|≲u/x/sin x_2≲u/x^2_2+u/(π-x)_2≲u_x/x_2+u_x_2≲ E .
Therefore we have
(R_2,α,ωρ)≲|α-1|E^2 .
For the error terms, we can just perform standard norm estimates. We focus on the pointwise estimates for x≥0 and the case x<0 follows by using the odd symmetry of the solution.
F_2,α =(α-1)ψ̅_α,x(0)sin^α x+(α+1)cos xsin^α x-2α(ψ_res+sin x)cos xsin^α-1 x
=(α-1)(ψ̅_α,x(0)-cos x)sin^α x-2αψ_res/sin xcos xsin^α x .
Therefore, we obtain
F_2,α_ρ≲ |α-1|+ψ_res/sin x_∞≲ |α-1| .
Similarly, we have
(F_1,α)_x =α^2-1/2sin^α-1/2 xcos x[ψ̅_α,x(0)-ψ̅_αcos x/sin x]+sin^α+1/2x[(α+1)ψ̅_α-2sin^αx] .
Further, we obtain the following estimate:
(α+1)ψ̅_α-2sin^αx_∞≤ |α-1|ψ̅_α_∞+2ψ_res_∞+2sin x-sin^αx_∞≲|α-1| ,
ψ̅_α,x(0)-ψ̅_αcos x/sin x =(1-cos x)+ψ_res,x(0)-ψ_rescos x/sin x
=(1-cos x)(1+ψ_res/sin x)+ψ_res,x(0)-ψ_res/sin x .
Combined with the estimate similar to that in (<ref>), we get
[ψ̅_α,x(0)-ψ̅_αcos x/sin x]ρ^1/2_∞≲1-cos x/x(1+ψ_res/sin x)_∞+[ψ_res,x(0)-ψ_res/sin x]/x_∞≲ 1 .
Therefore we yield
(F_1,α)_x_ρ≲ |α-1| .
Collecting all the estimates of the residues and the error terms, we arrive at
d/dtE(t)≤ -(0.16-C|α-1|^1/2|)E+CE^2+C|α-1| .
Similar to Subsection <ref>, we can perform the bootstrap argument to conclude finite-time blowup.
§.§ Estimates in higher-order Sobolev norms and convergence to steady state
Following the ideas in Subsection <ref>, we can perform estimates in higher-order Sobolev norms and then close the estimates to establish convergence to a steady state. We use the same energy K and only sketch the main steps here. We first have from Subsection <ref> the estimates of L_i and N_i and obtain
1/2d/dtK^2(t) ≤ -K^2+CEK+CEK^2+((R_1,α)_xx,sin^2x u_xxρ)
+((R_2,α)_x,sin^2xω_x ρ)+((F_1,α)_xx,sin^2xu_xxρ)+((F_2,α)_x,sin^2xω_xρ) .
By Lemma <ref>, sin x∂ x u_res, sin x∂ x ω_res shares the same pointwise estimates as u_res, ω_res. We can obtain, similar to the estimates in E, the estimates
((R_1,α)_xx,sin^2x u_xxρ)≲|α-1|(E+K)K+K[u_res,xx(ψ-sin xψ_x(0))]_xsin x_ρ .
We further obtain the following estimate
[u_res,xx(ψ-sin xψ_x(0))]_xsin x_ρ ≲ |α-1|E+u_res,xx(ψ_x-cos xψ_x(0))sin x_ρ
≲ |α-1|E+|α-1|(ψ_x-ψ_x(0))/x_∞ .
Finally by the Cauchy-Schwarz inequality,
we have
|(ψ_x-ψ_x(0))/x|≤∫_0^x|ω(y)/y|dy≲ω/x_2≲ω_ρ≲ E ,
and therefore we conclude
((R_1,α)_xx,sin^2x u_xxρ)≲|α-1|(E+K)K .
By similar computations, we have
((R_2,α)_x,sin^2x ω_xρ)≲|α-1|(E+K)K .
And for the residue terms, we use similar estimates to obtain
(F_2,α)_xsin x_ρ≲|α-1| ,
(F_1,α)_xxsin x_ρ≲|α-1|+|α-1|(ψ_res/sin x)_xρ^1/2sin x_∞ .
We can use the triangular inequality to estimate
|(ψ_res/sin x)_xρ^1/2sin x|≤ |[ψ_res,x(0)-ψ_res/sin x]/x|+|(ψ_res,x-ψ_res,x(0))/x|+|1-cos x/xψ_res/sin x| .
Combined with the estimate similar to that in (<ref>) and (<ref>), we conclude
(F_1,α)_xxsin x_ρ≲|α-1| .
We can finally obtain the same estimate as in Subsection <ref>
d/dtK(t)≤ -(1-C|a-1|)K+CE+C|a-1|+CEK .
We can again find an absolute constant μ_α>1 such that
d/dt(K+μ_α E)≤-(0.1-C|a-1|)(K+μ_α E)+C|a-1|+C(K+μ_α E)^2 .
And we can use the bootstrap argument on this higher-order energy K+μ_α E to conclude a priori estimate in this norm. Now we can perform weighted estimates in time using the energy J as in Subsection <ref>, where the linear estimates follow from the linear estimates of E, the nonlinear estimates follow from Subsection <ref>, and the error term vanishes under time-differentiation. We can obtain the same estimates of J and establish exponential convergence of J to zero. Then we use the argument in <cit.> to establish exponential convergence to the steady state and conclude the proof of Theorem <ref>.
§ BLOWUP OF THE VISCOUS MODEL WITH WEAK ADVECTION
In this section, we follow the strategy of the linear and nonlinear estimates of the weak advection model and establish blowup for the weak advection viscous model with a<1. Intuitively, the viscosity term is small in the dynamic rescaling formulation and nonlinear stability can be closed using a higher-order norm, where the viscosity term has a stability effect. Therefore we expect that the viscous weak advection model develops a finite time singularity as well.
§.§ Dynamic rescaling formulation
We recall the weak advection model with viscosity.
u_ t+2 aψ u_ z =2 u ψ_ z+ν u_zz ,
ω_ t+2 aψω_ z =(u^2)_z+νω_zz ,
- ψ_zz =ω .
We use the rescaled variables
ũ(x, τ)=C_u(τ) u( x, t(τ)) , ω̃(x, τ)=C_u(τ) ω( x, t(τ)) , ψ̃(x, τ)=C_u(τ) ψ(x, t(τ)) ,
where
C_u(τ)=C_u(0)exp(∫_0^τ c_u(s) d s) , t(τ)=∫_0^τ C_u(s) d s .
For solutions to (<ref>), the rescaled variables satisfy the dynamic rescaling equation
ũ_τ+2a ψ̃ũ_ x =2 ũψ̃_ x+c_u ũ+ν C_u(τ)u_xx ,
ω̃_τ+2a ψ̃ω̃_ x =(ũ^2)_x+c_uω̃+ν C_u(τ) ω_xx ,
-ψ̃_xx =ω̃ .
Different from the rescaling in the inviscid case, we introduce an extra degree of freedom: the constant C_u(0). We will choose it later to ensure that the viscous term has a relatively small scaling compared to the main terms.
In order to establish a finite time blowup, it suffices to prove the dynamic stability of (<ref>) with scaling parameter c_u<-ϵ<0 for all time; see also <cit.>.
As before, we will primarily work in the dynamic rescaling formulation and use the notations ũ=u̅ +u, where u̅ is an approximation steady state and u is the perturbation that we will control in time. We still use t to represent the rescaled time τ.
We consider the following approximate steady state.
ω̅=u̅=ψ̅=sin x , c̅_u(t)=2(a-1)ψ̅_x(0)-ν C_u(t)u̅_xxx(0)/u̅_x(0)=2(a-1)+ν C_u(t) .
We consider odd perturbations u, ω, ψ. The odd symmetry of the solution is preserved in time by equation (<ref>). We use the following normalization condition: c_u=2(a-1)ψ_x(0)-ν C_u(t)u_xxx(0). This normalization ensures that u_x(0) remains 0 if the initial perturbation satisfies u_x(0,0)=0. In fact, if u_x(t,0)=0, then we obtain
d/dtu_x(t,0) =d/dt(u_x(t,0)+u̅_x(t,0))=(2-2a)(u_x(t,0)+u̅_x(t,0))(ψ_x(t,0)+ψ̅_x(t,0))
+(c_u+c̅_u)(u_x(t,0)+u̅_x(t,0))+ν C_u(t)(u̅_xxx(0)+u_xxx(0))=0 .
This particular choice of the approximate steady state and the normalization conditions ensure that u_x(0,t)=0 for all time provided that the initial perturbation satisfies u_x(0,0)=0. We will perform the same weighted norm estimate in the singular weight ρ and the weighted norm E as in the inviscid case.
§.§ Estimates of the viscous terms
Now the perturbation satisfies
u_ t =L_1+(a-1)L'_1+N_1+F_1+ν C_u(t)V^u ,
ω_ t =L_2+(a-1)L'_2+N_2+F_2+ν C_u(t)V^ω ,
-ψ_xx =ω .
Here the terms V^u and V^ω correspond to all of the terms containing the effect of the viscosity and we factor out explicitly the small factor ν C_u(t) for a fixed ν.
V^u=u_xx+u̅_xx+(1-u_xxx(0))(u+u̅)=u_xx-u_xxx(0)sin x +(1-u_xxx(0))u ,
V^ω=ω_xx-ω_xxx(0)sin x +(1-ω_xxx(0))ω .
We invoke the nonlinear estimates in the inviscid case and obtain for the viscous model:
1/2d/dtE^2(t)≤ -(0.16-C|a-1|)E^2+C|a-1|E+CE^3+ν C_u(t)[((V^u)_x,u_xρ)+(V^ω,ωρ)] .
We estimate the viscous terms carefully since they involve singular weights.
((V^u)_x,u_xρ)=(u_xxx-u_xxx(0),u_xρ)+u_xxx(0)(1-cos x,u_xρ)+(1-u_xxx(0))u_x^2_ρ .
Notice that
|ρ_x|=ρ |-sin x/1-cos x | ≲ρ |1/x| , |ρ_xx|=ρ |-cos x/1-cos x+2sin^2 x/(1-cos x)^2 |≲ρ |x^-2|
are singular near the origin and are smooth elsewhere. We can
use integration by parts twice to compute
(u_xxx-u_xxx(0),u_xρ )=-(u_xx-xu_xxx(0),u_xxρ)-(u_xx-xu_xxx(0),x^2/2u_xxx(0)ρ_x)
-(u_xx-xu_xxx(0),(u_x-x^2/2u_xxx(0))ρ_x)
≤ -u_xx^2_ρ+C[|u_xxx(0)|u_xx_ρ+|u_xxx(0)|^2+u_x/x-x/2u_xxx(0)_ρ^2]
≤ -1/2u_xx^2_ρ+C|u_xxx(0)|^2+Cu_x/x_ρ^2 ,
where for the last inequality we use the weighted AM-GM inequality ab≤ϵ a^2+b^2/(4ϵ) for a very small constant ϵ.
Therefore we get
((V^u)_x,u_xρ)≤ -1/2u_xx^2_ρ+C[|u_xxx(0)|^2+u_x/x_ρ^2 +(1+|u_xxx(0)|)E^2] .
Similarly, we estimate via integration by parts
(ω_xx,ωρ) =-(ω_x-ω_x(0),(ω_x-ω_x(0))ρ)-(ω_x-ω_x(0),ω_x(0)ρ)
-(ω_x-ω_x(0),xω_x(0)ρ_x)-(ω_x-ω_x(0),(ω-xω_x(0))ρ_x)
≤-ω_x-ω_x(0)^2_ρ+C[|ω_x(0)|ω_x-ω_x(0)/x_ρ+ω/x-ω_x(0)_ρ^2] .
And we obtain
(V^ω,ωρ)
≤ -ω_x-ω_x(0)^2_ρ+C[|ω_x(0)|ω_x-ω_x(0)/x_ρ
+ω/x-ω_x(0)_ρ^2+|ω_xxx(0)|^2 +(1+|ω_xxx(0)|)E^2] .
The essential difficulty for the viscous terms is that after integration by parts, the singular weight produces various positive terms, on top of the damping terms -u_xx_ρ; see (<ref>), (<ref>). Fortunately, the positive terms contribute only to higher-order terms near the origin.
Consider the interval I=[-π/2,π/2]. ρ and |1/x| are upper bounded by a positive constant outside of the interval and we have
u_x/x_ρ^2≲u_x_ρ^2+u_x/x^2_L^2(I)^2≲ E^2+u_xxx_L^∞(I)^2 ,
ω/x-ω_x(0)_ρ^2≲ω-xω_x(0)_ρ^2+ω/x^2-ω_x(0)/x_L^2(I)^2≲ E^2+|ω_x(0)|^2+ω_xx_L^∞(I)^2 ,
ω_x-ω_x(0)/x_ρ≲ω_x-ω_x(0)_ρ+ω_x-ω_x(0)/x^2_L^2(I)≲ω_x-ω_x(0)_ρ+ω_xxx_L^∞(I) .
Plugging these estimates into the (<ref>), (<ref>) and using again the weighted AM-GM inequality, we can yield
(V^ω,ωρ)+((V^u)_x,uρ)
≤ -1/2(ω_x-ω_x(0)^2_ρ+u_xx^2_ρ)+C[E_V^2 +(1+E_V)E^2] ,
where
E_V=ω_xxx_L^∞(I)+u_xxx_L^∞(I)+|ω_x(0)| .
§.§ Estimates in a higher-order norm
We will use a weighted higher-order norm to close the estimates. To have a good estimate in this higher-order norm, it needs to satisfy three criteria. First, we need to extract damping in the leading order linear term. Secondly, we need to bound the terms like ω_xxx(0) using interpolation between the lower and the higher-order norms via the Gagliardo-Nirenberg inequality; therefore it needs to be at least as strong as a regular higher-order norm near the origin. Thirdly, we need damping for the diffusion terms to close the estimates. This motivates us to choose a combination of the k-th order weighted norms for k≥1:
E_k^2(t)=(u^(k+1),u^(k+1)ρ_k)+(ω^(k),ω^(k)ρ_k) , ρ_k=(1+cos x)^k ,
where we use the notation that f^(k)=∂_x^k f. We denote E_0=E and ρ_0=ρ.
This weighted norm immediately satisfies criterion 2 and we will verify in the linear estimates that it satisfies criterion 1. Finally, a clever combination of the weighted norms can produce damping for the viscous terms and we make the damping terms in the estimates of the (k-1)-th order norms greater than the positive terms in the estimates of the k-th order norms. We will elaborate on those points and establish the nonlinear estimates.
Now we can estimate d/dtE_k(t) for k>0 as follows
1/2d/dtE_k^2(t)=(L_2^(k)+(a-1)(L_2')^(k)+N_2^(k)+F_2^(k)+ν C_u(t)(V^ω)^(k),ω^(k)ρ_k)
+(L_1^(k+1)+(a-1)(L_1')^(k+1)+N_1^(k+1)+F_1^(k+1)+ν C_u(t)(V^u)^(k+1),u^(k+1)ρ_k) ,
where the parts L_i,L_i',F_i,N_i are defined exactly the same as in the inviscid case.
We first look at the viscous terms.
We have for example
((V^u)^(k+1),u^(k+1)(1+cos x)^k)≤ (u^(k+3),u^(k+1)(1+cos x)^k)+CE_VE_k+(1+E_V)E_k^2 .
We use integration by parts twice to obtain
(u^(k+3),u^(k+1)(1+cos x)^k)=-(u^(k+2),u^(k+2)(1+cos x)^k-u^(k+1)k(1+cos x)^k-1sin x)
=-(u^(k+2),u^(k+2)(1+cos x)^k)+1/2(u^(k+1),u^(k+1)k(1+cos x)^k-1[(k-1)-kcos x])
≤ -(u^(k+2),u^(k+2)(1+cos x)^k)+C(k)(u^(k+1),u^(k+1)(1+cos x)^k-1) .
We can also get a similar bound for V^ω. Therefore combined with the leading order estimate (<ref>) and using the idea in Remark <ref>, we conclude that for small enough constants 0<μ<μ_0(k_0)<1, we have the following viscous estimate
∑_k=0^k_0μ^k[((V^u)^(k+1),u^(k+1)ρ_k)+((V^ω)^(k),ω^(k)ρ_k)]≤ C(k)[E_V^2+(1+E_V)∑_k=0^k_0μ^kE_k^2] .
Here μ_0(k_0) is a generic constant depending on k_0. We can choose k_0 large enough later so that E_V can be bounded using the interpolation inequalities.
Now we look at the linear terms and extract damping. We denote the terms as lower order terms (l.o.t. for short) if their ρ_k-weighted L^2-norms are bounded by ∑_i=0^k-1E_i. For the terms of intermediate order, since ρ_k≤ C(k)ρ_i for i<k, combined with the classical elliptic estimate, we can show that u^j, ψ^i for 0≤ j<k+1 and 0≤ i<k+2 are l.o.t.
Using the l.o.t. notation, we keep track only of the higher-order terms
(L_1^(k+1),u^(k+1)ρ_k)=(-2sin x u^(k+2)-2kcos x u^(k+1)+2sin x ψ^(k+2)+l.o.t.,u^(k+1)ρ_k) ,
(L_2^(k),ω^(k)ρ_k)=(-2sin x ω^(k+1)-2kcos x ω^(k)+2sin x u^(k+1)+l.o.t.,ω^(k)ρ_k) .
Again we have a crucial cancellation of the cross terms and for the leading order terms we use integration by parts to obtain for example
(-2sin x u^(k+2)-2kcos x u^(k+1),u^(k+1)ρ_k) =(u^(k+1),u^(k+1)(-k-(k-1)cos x)ρ_k)
≤ -(u^(k+1),u^(k+1)ρ_k) .
Therefore we derive the following estimate
(L_1^(k+1),u^(k+1)ρ_k)+(L_2^(k),ω^(k)ρ_k)≤ -E_k^2+C(k)∑_i=0^k-1E_i E_k≤ -1/2E_k^2+C(k)∑_i=0^k-1E_i^2 .
Similarly we have
(L_1'^(k+1),u^(k+1)ρ_k)+(L_2'^(k),ω^(k)ρ_k)≤ 2E_k^2+C(k)∑_i=0^k-1E_i^2 .
We have the trivial bound for the error term
(F_1^(k+1),u^(k+1)ρ_k)+(F_2^(k),ω^(k)ρ_k)≤ C(k) (a-1)E_k .
The nonlinear terms are more subtle. We will show that
(N_1^(k+1),u^(k+1)ρ_k)≤ C(k)∑_i=0^kE_i^2E_k ,
and we can have the same bound for ω.
In fact, for a canonical term in N_1^(k+1) and N_2^(k), it is of the form ψ^(i)u^(k+2-i) or ψ^(i)ψ^(k+3-i) or u^(i)u^(k+1-i). For the terms ψ u^(k+2) and ψω^(k+1), we can use integration by parts and Lemma <ref> to show that
(ψ u^(k+2),u^(k+1)(1+cos x)^k)≤ C(k)(ψ_x_∞+ψ/sin x_∞)E_k^2≤ C(k)E E_k^2 .
The terms associated with ψ_x u^(k+1), ψ_x ω^(k), u u^(k+1), and u ω^(k) have the same bound trivially. We can then focus on controlling the weighted norms of ω^(i)u^(k-i), ω^(i)ω^(k-1-i), u^(i+1)u^(k-i) for indices 0<i<k to establish the bound (<ref>). For example, we get
ω^(i)u^(k-i)(1+cos x)^k/2_2≤ω^(i)(1+cos x)^(i+1)/2_∞E_k-1-i .
Finally, by the fundamental theorem of calculus, we can bound the L^∞-norm by
C(K)[ω^(i+1)(1+cos x)^(i+1)/2_1+ω^(i)(1+cos x)^(i-1)/2sin x_1]
≤ C(K)[E_i+1+ω^(i)(1+cos x)^(i)/2_1]≤ C(k)∑_i=0^kE_i .
Therefore we conclude that (<ref>) holds.
§.§ Collection of norms and finite time blowup
We collect the bounds (<ref>) (<ref>) for viscous and nonlinear terms, along with the linear bounds and the leading order estimate (<ref>). For any fixed k_0, there exists a small enough constant 0<μ_1(k_0)<μ_0(k_0), such that the following estimate holds
d/dtI_k_0^2≤ -(0.1-C|a-1|)I_k_0^2+C|a-1|I_k_0+CI_k_0^3+Cν C_u(t)[E_V^2+(1+E_V)I_k_0^2] ,
where the energy is defined as
I_k_0^2=∑_k=0^k_0μ_1^k(k_0)E_k^2 .
Here the constants depend on k_0 and μ but once we first prescribe k_0 then μ=μ_1(k_0), they become just constants. We will later make our C_u(t) and |a-1| small to close the argument.
Finally, by the Gagliardo-Nirenberg inequality, for k=1,3, we have
ω^(k)_L^∞(I)≲ω^(4)^θ_L^2(I)ω^1-θ_L^2(I) , θ=(k+1/2)/4 .
This is the classical Gagliardo-Nirenberg inequality applied to a bounded domain, and we can just use the extension technique to prove it; see for example <cit.>. We get similar bounds involving u and conclude that E_V≲ I_k_0, for any fixed k_0≥ 4. For example, we just take k_0=4 and obtain
d/dtI_4≤ -(0.1-C|a-1|)I_4+C|a-1|+CI_4^2+Cν C_u(t)(1+I_4)I_4 .
Now we choose C_u(0)=|a-1|^2 for |a-1|<δ with a small enough δ>0. It is easy to check that the bootstrap argument for I_4≤ C|a-1| and C_u(t)≤ C_u(0)exp((a-1)t)≤ C_u(0) will hold for all time provided that it holds initially. We again use the estimate for the normalization constants
c_u+c̅_u=2(a-1)+ν C_u(t)(1-u_xxx(0))+2(a-1)ψ_x(0)<(a-1)<0 .
Thus we can obtain a blowup in finite time in the physical variables.
§ APPENDIX
Assume ∑_k≥ 1 a_k^2+c_k^2<∞, then we have the following inequality for (<ref>)
F(a,c) ≔ ∑_k≥1{ a_k^2(0.84+1/k^2-1/(k-1)^2)+c_k^2(0.84+1/k(k+1))+ 2a_k a_k+11/(k+1)^2
+2 a_k ∑_j>k+1a_j(1/j^2-1/(j-1)^2)+2a_kc_k1+2k-k^2/2k^2(k+1)+2a_k+1c_kk^2-k-1/2k^2(k+1)^2
-2a_k+2c_kk+2/2(k+1)^2+∑_j>k2a_k c_j1/j(j+1)}≥0 .
Denote by F_N(a,c) the summation of the terms in F(a,c) that only involve a_i, c_j for i,j≤ N. Here N=200. This quadratic form F_N(a,c) can be expressed as x^(N),TF^(N)x^(N), where x^(N) is the vector collecting the entries a_i and c_j for i,j≤ N and F^(N) is a symmetric matrix. We numerically verify using interval arithmetic in Matlab that the smallest eigenvalue of F^(N) is greater than 0.01; see the remarks after the proof for details. Therefore we have
F_N(a,c)≥0.01∑_k=1 ^N (a_k^2+c_k^2) .
For the remainder F(a,c)-F_N(a,c), we estimate it term by term via the trivial bound 2ab≥-(a^2+b^2) and obtain
F(a,c)-F_N(a,c)≥∑_k>N[ a_k^2(0.84+1/k^2-1/(k-1)^2)+c_k^2(0.84+1/k(k+1))]
+∑_k=1^ N a_k^2(-2/N^2-1/N+1) +∑_k=N-1^ N c_k^2(-N^2-N-1/2N^2(N+1)^2-N+1/2N^2)
+∑_k> N a_k^2(-3/N^2-N(1/(N)^2-1/(N+1)^2)-N^2-N-1/2N^2(N+1)^2-N+1/2N^2-N^2-2N-1/2N^2(N+1)
-1/N+2)+ ∑_k> N c_k^2(-N^2-N-1/2N^2(N+1)^2-N+1/2N^2-N^2-2N-1/2N^2(N+1)-1/N+2) .
For N=200, we estimate all of the coefficients by a lower bound and obtain
F(a,c)-F_N(a,c)≥ -2/N∑_k=1^ N (a_k^2+c_k^2)+(0.84-3/N)∑_k> N (a_k^2+c_k^2)≥-2/N∑_k=1^ N (a_k^2+c_k^2) .
Therefore we conclude F(a,c)≥ 0.
We now explain how to verify that the smallest eigenvalue of the symmetric matrix F^(200) is greater than 0.01. We proceed in three steps.
* We first use Matlab to perform an (approximate) SVD decomposition of
F^(200)-0.011I≈ VDV' .
Here D is the diagonal matrix consisting of (approximate) eigenvalues of F^(200)-0.011I, and V is the unitary matrix consisting of (approximate) eigenvectors of F^(200)-0.011I.
* We use interval arithmetic to verify that the maximal absolute value of entries of F^(200)-0.011I- VDV' is at most 10^-10. Therefore the spectral norm of F^(200)-0.011I- VDV', which is bounded by its 1-norm, is rigorously bounded from above by 200 × 10^-10.
* Since D has positive entries, we know that VDV' is positive definite. We conclude that
F^(200)-0.01I=VDV'+0.001I+F^(200)-0.011I- VDV'
is positive definite.
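For illustration, a minimal non-rigorous Python analogue of these three steps is sketched below. Ordinary floating-point arithmetic stands in for the interval arithmetic used in the actual verification, the truncated matrix F is assumed to have been assembled beforehand from the quadratic form displayed above, and the function and variable names are ours.
import numpy as np

def check_min_eigenvalue(F, threshold=0.01, shift=0.011, entry_tol=1e-10):
    """Conclude lambda_min(F) > threshold via the three-step argument above."""
    n = F.shape[0]
    A = F - shift * np.eye(n)

    # Step 1: (approximate) eigendecomposition A ~ V D V'.
    eigvals, V = np.linalg.eigh(A)

    # Step 3 needs all approximate eigenvalues to be positive, so that V D V'
    # is positive definite.
    if eigvals.min() <= 0:
        return False

    # Step 2: the residual A - V D V' is symmetric, so its spectral norm is
    # bounded by its 1-norm, which is at most n times the largest absolute entry.
    R = A - V @ np.diag(eigvals) @ V.T
    if np.abs(R).max() > entry_tol:
        return False
    residual_bound = n * entry_tol

    # F - threshold*I = V D V' + (shift - threshold)*I + R is positive definite
    # provided the residual bound is below the explicit margin shift - threshold.
    return residual_bound < (shift - threshold)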
Acknowledgments. The research was in part supported by NSF Grant DMS-2205590 and the Choi Family Gift Fund and the Graduate Fellowship Fund. We would like to thank Dr. Jiajie Chen for some stimulating discussions, especially on the design of the energy norm. We also thank Changhe Yang for some valuable discussions in the early stage of this project.
|
http://arxiv.org/abs/2306.08176v3
|
20230613234622
|
An Open Optimal Power Flow Model for the Australian National Electricity Market
|
[
"Rahmat Heidari",
"Matthew Amos",
"Frederik Geth"
] |
math.OC
|
[
"math.OC"
] |
An Open Optimal Power Flow Model for the Australian National Electricity Market
Rahmat Heidari, Matthew Amos, Frederik Geth
===============================================================================================================
The Australian National Electricity Market (NEM) is a complex energy market that faces challenges due to the increasing number of distributed energy resources (DERs) and the transition to a net-zero emissions target. Power system modelling plays a crucial role in addressing these challenges by providing insights into different scenarios and informing decision-making. However, accessing power system data containing sensitive information can be a concern. Synthetic data offer a solution by allowing researchers to analyze and develop new methods while protecting confidential information. This paper utilizes an existing synthetic network model based on the NEM (`S-NEM2300'-bus system) to develop a benchmark for power system optimization studies. The model is derived and enhanced using PowerModels.jl and MATPOWER data models, and feasibility is ensured through power flow and optimal power flow studies. The resulting benchmark model, called `S-NEM2000'-bus system, is validated and enriched with additional parameters such as thermal limits, generation fuel categories and cost models. The `S-NEM2000'-bus system is an open[<https://github.com/csiro-energy-systems/Synthetic-NEM-2000bus-Data>] dataset which provides a valuable resource for optimization studies in the power system domain.
Disclaimer:
The open network data developed in this study, specifically the `S-NEM2000'-bus benchmark model, is a representation created for research purposes and does not reflect the actual NEM. It is important to note that this model is a work in progress and may not include all the features and components present in the real NEM. Furthermore, the model can be further improved by incorporating additional features and elements and refining existing components. The ongoing development and growth of this work aim to enhance the representation of the NEM and its associated power system for future studies and analyses. The authors invite collaborators to reach out through the issue tracker in the https://github.com/csiro-energy-systems/Synthetic-NEM-2000bus-Data repository for any inquiries, feature requests, or collaboration opportunities.
§ INTRODUCTION
The Australian National Electricity Market (NEM) is a complex energy market that operates across five states and territories in Australia and regulates and manages the supply and demand of electricity within the market. The NEM, shown in Figure <ref>, is one of the largest interconnected power systems in the world, covering 40,000 km of transmission lines and cables, and servicing around 10 million customers[<https://www.aemo.com.au/-/media/Files/Electricity/NEM/National-Electricity-Market-Fact-Sheet.pdf>]. The market operates on a real-time basis, balancing the demand for electricity with the supply of electricity from a range of sources, including large-scale generators, renewable energy systems, and distributed energy resources (DERs).
The NEM is facing challenges due to the rising number of DER such as rooftop solar, batteries, and electric vehicles. Challenges include integrating DERs into the existing grid infrastructure, ensuring system stability and reliability, and addressing grid congestion in areas with high penetration of rooftop solar, which can cause voltage fluctuations and result in power outages <cit.>. This requires significant investment in network infrastructure, as well as changes to the regulatory framework to support the integration of DERs.
The transition to a net-zero emissions target in the NEM also presents challenges, including the need for energy market reform and changes to regulatory frameworks to incentivize renewable energy resources deployment and emissions reduction. Additionally, the uncertainty surrounding government policy and support for renewable energy can make it difficult for investors to make long-term investment decisions.
Power system modelling is essential in dealing with the above-mentioned challenges in the electricity sector. Modelling tools provide insights into the impacts of different scenarios, such as the deployment of renewable energy sources or grid upgrades, and inform decision-making related to planning, design, operation, policy, regulation, and investment. This helps to ensure the reliability, sustainability, and affordability of the power system.
Despite the necessity of access to power system data that enables reliable modelling and decision-making, such data can contain confidential information such as operational parameters, network topology, and customer data. By using synthetic data, this sensitive information can be protected while still allowing researchers and engineers to analyse and develop new methods.
Synthetic data are beneficial in the power systems context as they allow engineers to simulate and analyse power systems under different scenarios, identify potential issues, and test new algorithms and techniques. This helps to ensure the reliability and safety of the power system, which is critical to the functioning of society as a whole.
Furthermore, openly accessible data, open data, plays an important role in power systems as it contributes to increased transparency, efficiency, and innovation. Open data encourages collaboration and knowledge sharing among different organizations and researchers, leading to the development of best practices and the advancement of the power sector as a whole.
The authors of <cit.> provided a comprehensive library of all publicly available transmission network datasets (to date of the paper) under a common data format to be used for AC power flow optimisation studies. To this end, the authors updated the dataset by reconstructing the missing data and adding key parameters such as generator cost functions and thermal limits, for algorithmic benchmarking purposes.
More recently, a detailed network model of the European countries has been released in <cit.> to study market simulations of different combinations of wind and transmission capacity that is installed in the North Sea. The modelling framework and datasets are open which facilitates a range of other studies to investigate the impact of large-scale projects on the European system and electricity markets.
An open grid data of the NEM transmission network has been developed and released in <cit.> which includes network and generation data sets, geospatial locations of network elements, and is designed to interface with AEMO's public database. This interface is specifically advantageous since it allows historic regional load and generator dispatch data to be integrated with the network model. The authors ensured completeness and functionality of the dataset by assessing power flow models.
Recently a synthetic network model based on the NEM has been released <cit.>. This model, called S-NEM2300-bus benchmark model, has longitudinal structure and is useful for EMT real-time digital simulations. The authors developed this model based on a PSS/E model of the NEM and used statistical methods to obfuscate generation, transmission and load parameters. The resulting model is a relational database released as CSV files and can be used in real-time EMT platforms.
In this work we use this S-NEM2300 network data to develop a benchmark for power system optimisation studies. We develop a data model to represent ac-feasible steady state power flow behaviour of the synthetic NEM model by parsing, cleaning and enhancing the input relational data in csv format (extract from Hypersim simulations) to PowerModels.jl <cit.> and MATPOWER <cit.> data models.
PowerModels.jl <cit.> is a Julia/JuMP package to study steady-state power network optimization problems. It provides capabilities such as parsing and modifying network data as well as a common platform for computational evaluation of emerging power network formulations and algorithms. PowerModels' network data model is organized as data dictionaries and the key names are designed to be consistent with MATPOWER's <cit.> file format, which is a familiar data model for power system researchers, with some exceptions such as having separate components for loads and shunts, and the representation of time series simulations as “multi-networks”.
To ensure feasibility of the converted data we conduct a series of power flow and optimal power flow studies as described in Section <ref>. The ac-feasible converted data is then validated by comparing power flow results with the steady state results in the original data release. The converted synthetic network data is then complemented and extended by adding thermal limits and generation fuel type and cost models to serve as a benchmark model for optimisation studies.
§.§ Motivation
The S-NEM2300 network is developed as a benchmark model based on the NEM which is compatible with real-time simulations. We aim to convert this data into a dataset that is useful for optimal power flow studies. The original data comes with the following limitations:
* It is generated based on a snapshot of the actual NEM on a summer day in 2018. As such, it is likely that not all generators were scheduled at the time, and hence some are not represented in the synthetic model.
* Generators are classified as thermal or hydro, but further details regarding fuel type and cost data are not taken into account.
* Line and transformer thermal ratings are not identified.
The main motivation to develop a benchmark model is studying the following applications on the NEM-level network:
* We aim to develop the data model to represent ac-feasible steady state power flow behaviour of the S-NEM2300 model. That is, the AC formulations are feasible for the developed network data, which ensures feasibility of other formulations such as current-voltage (IV), Quadratic Convex (QC), and `DC'.
* Impact of DER integration on the grid can be studied in a large scale network with similar properties the the NEM.
* Conducting optimal power flow studies such as unit commitment, economic dispatch, and market studies.
* Conducting security constrained optimal power flow studies.
* Development of hybrid ac-dc network by adding the NEM HVDC lines in the synthetic model and investigating security of the network against credible contingencies.
§ ORIGINAL SYNTHETIC NEM (S-NEM2300) NETWORK DATA
We first revisit the main properties of the synthetic NEM data released in <cit.>. Table <ref> and Figure <ref> illustrate the main network components and the areas of the network based on synthetic coordinates available in the dataset. More properties of the network such as area interconnectors, graph properties, structural properties and area diameters are provided in <cit.>.
§ DATA MODEL DEVELOPMENT FRAMEWORK
This section describes the data model cleaning and derivation process and the methodology used to develop ac-feasible network data. The input data is in CSV format, which we first parse into the PowerModels data model; this can then readily be serialized into the MATPOWER data format.
§.§ Data Models
§.§.§ Hypersim
Opal-RT Hypersim[<https://www.opal-rt.com/systems-hypersim/>] is a powerful real-time simulation platform that can be used for power system simulations. It is a software solution that allows engineers to create a virtual environment that mimics the physical power system, enabling accurate modeling and testing of complex power systems.
§.§.§ MATPOWER
The MATPOWER data model format is a standardized data format used by the MATPOWER software package <cit.> for representing power system data in a MATLAB file. It provides an established, flexible format that researchers know, which makes it easier to analyse, compare, and share data in the power system research community.
§.§.§ PowerModels.jl
The PowerModels.jl package internally uses a dictionary to store network data with strings as keys so that it can be serialized to JSON for algorithmic exchange <cit.>. A detailed description of the PowerModels data model is provided in the documentation[<https://lanl-ansi.github.io/PowerModels.jl/stable/>].
PowerModels' data model can be instantiated by reading MATPOWER case studies, and PowerModels can serialize data to both MATPOWER and JSON.
§.§ Data Model Derivation
We parse and clean the input relational data into the PowerModels data model with bus, branch, load, gen, switch, storage, shunt and dcline as dictionary keys. Each network element is indexed and the data is stored in per unit values. Transformers are stored as branches with an identifier flag.
The data cleaning process consists of the following steps:
* Identifying how to accurately convert each element's data from the relational dataset to the PowerModels format. For most network elements such as buses, lines, shunts, loads and generators, the cleaning is fairly straightforward. For two-winding and three-winding transformers, however, the cleaning requires implementing an equivalent circuit, which is explained later in this section. Note that the input data does not contain any dc line, so in this paper we follow the same approach, and develop a model with dc lines and buses in a future work.
* Regional areas and interconnector branches are identified and flagged.
* Per unit conversion and unit transformations: We convert all data to per unit w.r.t their individual base values as given in the original data set (per unit conversion) and then update the per unit values to be on the base value of the cleaned data model (unit transformation).
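For reference, the impedance re-basing used in this step is the textbook per-unit change of base; the specific base values come from the input data, and the one-line function below is a generic sketch rather than code from the toolchain.
def rebase_impedance_pu(z_pu_old, s_base_old, v_base_old, s_base_new, v_base_new):
    """Re-express a per-unit impedance on a new power/voltage base."""
    return z_pu_old * (s_base_new / s_base_old) * (v_base_old / v_base_new) ** 2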
§.§.§ Two-Winding Transformer
In the input data, two-winding transformer models include primary, secondary, and magnetization impedance values. This model represents a T-section circuit, as shown in Figure <ref> on the left. The branch model in the PowerModels data format, however, is a Π-section circuit, shown in Figure <ref> on the right. We therefore perform a T to Π circuit conversion using the equations in (<ref>) and calculate the transformer tap ratio from the primary and secondary per unit voltage values.
Z_A = (Z_1 Z_2 + Z_2 Z_3 + Z_1 Z_3)/Z_2 ,
Z_B = (Z_1 Z_2 + Z_2 Z_3 + Z_1 Z_3)/Z_3 ,
Z_C = (Z_1 Z_2 + Z_2 Z_3 + Z_1 Z_3)/Z_1
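A short sketch of this conversion is given below (complex per-unit impedances; the labelling of Z_1, Z_2, Z_3 follows the T-section figure, so the assignment of primary, secondary and magnetizing branches is an assumption here rather than reproduced from the figure).
def t_to_pi(z1: complex, z2: complex, z3: complex):
    """Convert T-section branch impedances to the equivalent Pi-section, following the displayed equations."""
    s = z1 * z2 + z2 * z3 + z1 * z3
    z_a = s / z2
    z_b = s / z3
    z_c = s / z1
    return z_a, z_b, z_c

# The branch tap ratio is then taken as the ratio of primary to secondary
# per-unit voltages, e.g. tap = v_primary_pu / v_secondary_pu.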
§.§.§ Three-Winding Transformer
An equivalent circuit of a three-winding transformer is shown in Figure <ref>. This circuit is implemented in PowerModels format as three branches representing three two-winding transformers that are connected to a common auxiliary node in between. The primary, secondary and tertiary impedance values are available from input data and the magnetization impedance is considered to be at the primary side, i.e. the primary two winding transformer.
To accurately capture the actual three winding transformer tap ratios, each two-winding transformer tap ratio is calculated from its per unit voltage divided by the auxiliary node per unit voltage which is assumed to be 1.
§.§ Methodology
While the original network data is implemented and tested in Hypersim, cleaning and derivation of a different data model is still challenging as the cleaned data may cause infeasibility in optimal power flow (OPF) studies, as OPF needs to satisfy bounds such as voltage and current limits, and avoid voltage collapse.
A technique to analyse power flow infeasibility is to extend traditional power-balance constrained OPF to include nodal slack variables that indicate constraint violations when there is no feasible solution.
The infeasibility analysis technique is such that if the network is feasible, the slack variables have values of zero at convergence. Otherwise, nonzero slack variables can be used to identify and localize infeasibility for planning insight or corrections during operation.
This method was first introduced in <cit.> for transmission systems and then extended in other work such as <cit.>. Lately this technique has been applied to distribution systems in <cit.>.
In this work we apply the infeasibility analysis method used in <cit.> in an algorithmic way to identify different local constraint violations and the physical meaning behind them. Figure <ref> shows the flowchart of the algorithm implemented to identify infeasibility issues in the derivation process. Starting from the input files, the cleaned data is first tested by running a linearized `DC' OPF with slack nodal power variables to identify whether active power balance is maintained at all nodes. Nodes with significant imbalance are investigated as to which active elements could be the problem. The next step is to check reactive power balance at the nodes, which is the outcome of an AC-OPF with nodal slack power variables. Similar to the previous step, the nodes with substantial imbalance are investigated and the derivation process is modified until active and reactive power balance are maintained at all nodes. If the slack AC-OPF renders a perfectly balanced network, the derivation could conclude at this point, but in case of remaining imbalance in this step, the next two steps prove effective. The next step is to test the generation dispatch setpoints, which is achieved by comparing AC and linearized `DC' PF solutions of the network with slack nodal power variables. If the generator setpoints do not match, slack power variables are introduced at the generators to identify which generators cause the most mismatch, and the derivation process is modified accordingly.
§.§ Validation of the Derived Synthetic NEM (S-NEM2000) Network Data
Figure <ref> demonstrates how the power flow results of the derived network data compare to the steady state results of the original network. As can be seen, the derived model accurately tracks the original model in voltage magnitude and phase angle, and in the generators' active power setpoints. The slight mismatch in the generators' reactive power setpoints is mainly due to approximations made in the generator models, which ignore internal impedance values.
§ THERMAL LIMITS
The S-NEM2300 network data was developed to serve for phasor and EMT simulation studies. To enhance the network data to be useful for optimisation studies we need to add reasonable network constraints such as line and transformer thermal limits.
We adopt the data driven approach described in <cit.> built on publicly available datasets. As noted in <cit.>, these datasets reflect many of the statistical features that are common on real networks. Section <ref> provides an overview of the reference datasets and the derivation and validation of the thermal limit models.
To apply the thermal limit models to the synthetic NEM model, however, we first review the properties of the S-NEM2300 data. The network data is then modified to reasonably resemble real transmission systems by reducing the network through joining short lines, and removing ideal lines and circuit breakers. The data driven thermal limit models are then applied to the modified synthetic NEM network data, which we call S-NEM2000. We next run an algorithm that links and updates the thermal limits of connecting lines and transformers so that the result more closely represents a realistic set of power system constraints. The properties of the modified network and linked thermal limits are validated and discussed.
§.§ A Thermal Limit Model
In practice, line thermal limits are straightforward to calculate if conductor type and line length are available. In this synthetic dataset, however, we only have line impedance values and the connecting bus nominal voltages. With this limited information we use a linear regression model proposed in <cit.> to determine line thermal limits based on the values of line reactance X, line resistance R and nominal voltage v. For elements where any of these values are unavailable or not applicable (as in the case of ideal lines and transformers), a reasonable upper bound is derived instead.
§.§.§ Reviewing the Reference Data
We use two sources of transmission thermal limits for this benchmark study: the Polish transmission system data <cit.>, and the Irish transmission system data <cit.>. Although some transformer data is also available in these datasets, we focus the data driven study only on the available transmission line data. In order to stay within a reasonable range of thermal limits and line types, we only consider lines with X/R ratio below 20 and normalised thermal rating (thermal rating divided by nominal voltage, that is, current rating) above 0.005. Figure <ref> shows the correlation between line thermal limits and the nominal voltage. As stated in <cit.>, the significant variance at each voltage level, which even overlaps with neighbouring voltage levels, replicates realistic behaviour in real networks, where a given nominal voltage level may contain both low- and high-capacity lines.
§.§.§ Statistical Model
The statistical model proposed in <cit.> derives thermal limits in power flow (MVA) rather than current flow (kA), which means that identical conductors can have different MVA limits based on their nominal voltage levels, but converting back to kA they should have similar thermal limits. The model also utilizes the ratio of line reactance to resistance to de-correlate the line conductor type from the line length. Figure <ref> indicates a clear correlation between the X/R ratio and the kA thermal limits.
We next combine the two datasets and fit the following linear regression model on the log-log scale
y = e^a x^k
for which we derive the following values for a and k,
a = -5.1407,
k = 0.6078.
The regression fit model and its parameters are shown in Figure <ref> which is a fairly crude but effective fit of the data for our purposes.
In <cit.> the values for a and k are derived as -5.0886 and 0.4772, respectively. Therefore the value for a we derived is close to the one in <cit.>, but the value of k is slightly different, which could be due to having a more recent Irish dataset, inclusion of more points from the reference datasets and/or the filters applied to the data.
The model (<ref>) is used to estimate per unit thermal limits S^u for the lines that have all three parameters X, R, v on both sides of the line as,
S^u = v e^-5.1407( X/R) ^0.6078.
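For illustration, the statistical model above and the underlying log-log regression fit can be reproduced with a few lines of Python/numpy. This is a sketch under the assumption that X, R and v are given in the same units as in the reference datasets; it is not the exact script used to build S-NEM2000.

import numpy as np

# Regression coefficients from the log-log fit above
A, K = -5.1407, 0.6078

def statistical_thermal_limit(x, r, v):
    """Per-unit thermal limit S^u = v * exp(A) * (X/R)^K for lines with X, R, v available."""
    x, r, v = np.asarray(x, float), np.asarray(r, float), np.asarray(v, float)
    return v * np.exp(A) * (x / r) ** K

def fit_loglog(x_over_r, normalised_rating):
    """Reproduce the fit: ordinary least squares of log(y) against log(x)."""
    k, a = np.polyfit(np.log(x_over_r), np.log(normalised_rating), deg=1)
    return a, k  # intercept a and slope k of log y = a + k log x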
§.§.§ Upper Bound
The statistical model (<ref>) is only applicable when all three X, R, v are available and both sides of the element have the same nominal voltage. It is therefore not possible to apply this model to ideal lines with R=0 and transformers with different nominal voltage at either side. In these cases, the authors in <cit.> propose calculating a reasonable theoretical upper bound for the thermal limit that is close to the theoretical maximum, but not to the extent that it deactivates the thermal limit constraints. The upper bound accurately indicates the line's true throughput limitation and is calculated from <cit.>
(S^u_ij)^2 = (v_i^u)^2 Y_ij^2 ( (v_i^u)^2 + (v_j^u)^2 - 2 v_i^u v_j^u cos(θ_ij^Δ) )
where the index ij means the line/transformer between buses i and j, v_i^u is the voltage magnitude upper bound at bus i, Y_ij is the line/transformer's admittance, and θ_ij^Δ is line/transformer's phase angle difference bound. The thermal limit upper bound can be calculated given that reasonable voltage bounds are available throughout the network and with the assumption of θ_ij^Δ = 15^∘.
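A minimal sketch of this upper bound calculation, assuming per-unit voltage bounds and admittance magnitude and the 15 degree angle-difference bound stated above (the example values are illustrative only):

import numpy as np

def thermal_limit_upper_bound(v_i_max, v_j_max, y_ij, theta_delta_deg=15.0):
    """Upper bound S^u_ij from the equation above, given voltage magnitude upper
    bounds (p.u.), the branch admittance magnitude Y_ij (p.u.) and the
    phase-angle-difference bound (degrees)."""
    theta = np.deg2rad(theta_delta_deg)
    s_sq = (v_i_max ** 2) * (y_ij ** 2) * (
        v_i_max ** 2 + v_j_max ** 2 - 2.0 * v_i_max * v_j_max * np.cos(theta))
    return np.sqrt(s_sq)

# Example: a transformer with 1.1 p.u. voltage bounds and Y = 25 p.u.
print(thermal_limit_upper_bound(1.1, 1.1, 25.0))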
§.§.§ Validating the Statistical and Upper Bound Models
Figure <ref> shows correlation of the proposed statistical and upper bound thermal limit values to the real thermal limits given in the reference datasets of Polish and EIRGrid transmission systems. The figure on the statistical model shows a reasonable correlation of the model and the actual data as the data forms a stretch of points along the x = y line which indicates perfect predictions. The upper bound, on the other hand, shows points below the x = y line affirming the optimistic high bound on the thermal capacity by design, while the large upper bound limits indicate lines with small reactance values which represent ideal conductors.
§.§ Reviewing the S-NEM2300 Data
Figure <ref> shows the correlation between normalised thermal limits and the X/R ratios in the S-NEM2300 network. Note that the thermal limits of all lines and transformers are set to the nominal value of 100 MVA, and only the X/R values of non-ideal lines are shown. From the figure it is evident that the network data requires some modifications to show a correlation behaviour similar to the reference datasets in Figure <ref>.
We perform two basic network reduction techniques to modify and prepare the network data for the application of the thermal limit models (a code sketch of the filtering step follows the list):
* we remove the lines that represent ideal lines or circuit breakers, filtered as lines with
* small impedance (<0.01 p.u.) and small shunt admittance (<0.01 p.u.), since in the synthetic data there exist constant parameter (CP) and decoupling lines with negligible impedance but non-negligible shunt admittance values, or,
* high X/R ratio (>100) and small shunt admittance (<0.01 p.u.);
* we join the cascading lines connected through degree-2 buses to ensure a uniform thermal limit over linear paths in the network.
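A minimal sketch of the two reduction rules, assuming each line record carries per-unit resistance, reactance and total shunt admittance fields (the field names and the 1e-12 guard against division by zero are assumptions, not the released data schema):

def flag_removable_lines(lines):
    """Flag ideal lines / circuit breakers according to the two empirical rules above.
    `lines` is a list of dicts with per-unit 'r', 'x' and total shunt admittance 'b_sh'."""
    removable = []
    for line in lines:
        z = abs(complex(line["r"], line["x"]))
        small_shunt = abs(line["b_sh"]) < 0.01
        rule_1 = z < 0.01 and small_shunt                                   # negligible impedance
        rule_2 = (line["x"] / max(line["r"], 1e-12)) > 100 and small_shunt  # breaker-like branch
        if rule_1 or rule_2:
            removable.append(line)
    return removable

def degree_two_buses(lines, buses):
    """Buses touched by exactly two branches; their two branches are then joined in series."""
    degree = {b: 0 for b in buses}
    for line in lines:
        degree[line["from_bus"]] += 1
        degree[line["to_bus"]] += 1
    return [b for b, d in degree.items() if d == 2]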
Figure <ref> shows how the reduced network bus voltage magnitudes and phase angles still closely match the original network results. In fact, we used this power flow validation to specify and visualise a small variation of voltage magnitude and angle w.r.t. the S-NEM2300 results in order to determine the empirical thresholds for the impedance, shunt admittance, and X/R ratio values of the lines to be removed.
§.§ Adding Thermal Limits and Validations
In the next step we add line and transformer thermal limits using the statistical and upper bound thermal limit models where applicable.
We also set lower and upper bounds of 30 and 1500 MVA for the thermal limits on transformers according to NEM equipment ratings <cit.>.
Figure <ref> shows line thermal limits across different voltage levels and the correlation of normalised thermal limits and X/R ratio. The left figure shows that lines with higher nominal voltage have higher thermal limits, which is a true behaviour of real systems. The perfect prediction of line thermal limits in the right figure indicates that no ideal lines exist in the data anymore. Note that the transformer thermal limits are derived from the upper bound model and are not represented in Figure <ref>.
The estimated thermal limits, though they seem to have been predicted reasonably well as seen in Figure <ref>, should be tested against the power flow results of the modified network data. Figure <ref> on the left shows line and transformer loading w.r.t. their thermal limits. We note that with these thermal limits a total of 9 line flow violations occur in steady state conditions.
Since the number and magnitude of violations are manageable, in the next step we increase the thermal limits of the violating lines appropriately to ensure that the network is secure under normal operation. Figure <ref> on the right shows line and transformer loading w.r.t their thermal ratings while Figure <ref> shows how the thermal limits have changed with respect to Figure <ref>.
§.§ S-NEM2000 Network Model Components
After cleaning the S-NEM2300 network data, reducing the lines, and fixing some minor issues with generators, the resulting component counts are listed in Table <ref>. Compared to the counts reported in Table <ref>, the main differences are in the number of buses and lines, due to removing degree-2 buses and joining the connecting lines. Note that the three-winding transformers are modelled as three two-winding transformers in the released version of the S-NEM2000 network data.
§ GENERATION MODELS
The canonical objective in OPF is generation cost minimization. Such an application, and related ones such as market clearing and unit commitment, depend on detailed generator specifications such as fuel type, active and reactive power capability curves, ramp up/down rates, minimum on/off times, startup/shutdown costs, and fuel efficiency. To assign the appropriate properties to generators in the synthetic model, we first notice that most generator properties are directly linked to the generator's mechanical design, which in turn is determined by the fuel type. We therefore develop data-driven models to assign generator fuel categories, and then, using publicly available data sets, we derive generator properties.
To assign generator fuel categories and properties we use publicly available generation datasets provided by AEMO <cit.> and GeoScience Australia (GA) <cit.>. We categorize the generators' fuel types so that the synthetic network model is a reasonable representative of the actual NEM generation fuel mix.
§.§ Reviewing the Reference Data
The reference data we use to assign synthetic generator types is GA major power stations list <cit.> which contains generators' fuel type, registered capacity, voltage level, status, and more.
We only consider operational generators and the ones with a single fuel source. The reference dataset <cit.> classifies generation fuel categories into black/brown coal, gas, distillate, biomass, biogas, water, solar, wind, battery, fuel oil and coal seam methane.
We first ignore generators using coal seam methane and fuel oil as there are only a few of them with very limited capacity. Next, inspired by Figure <ref> that shows generation fuel mix of the NEM for the past year <cit.>, we ignore distillate, biomass, biogas and battery power stations as well.
Figures <ref> and <ref> show the count and capacity of generators throughout the NEM categorized by fuel type and state. For the S-NEM2000 model we target generation fuel type allocation to represent the generation fuel mix as shown in these two figures.
Upon determining generation fuel categories, we assign generator properties and fuel cost data from Fuel and Technology Cost Review <cit.> which contains parameters such as start up and shut down costs, no-load, fixed and variable operation costs, minimum up/down times, ramp up/down rates, and more operational properties.
§.§ Reviewing the S-NEM2000 Data
The S-NEM2300 model classifies generators into synchronous generators and network sources, and does not take into account any fuel category or cost data. However, the authors in <cit.> state that the synchronous generators include fossil-fuel generators as well as hydro- and wind-powered generators, whereas the network source category consists of `other' types of generators. We base our classification on these assumptions and choose the network sources to only include solar farms. It is also worth noting that reactive power compensators such as synchronous var generators and synchronous condensers can be part of the network sources as well, particularly noting that some of the network sources only provide reactive power and no active power to the network.
Remark:
The S-NEM2300 network data models the Basslink HVDC interconnector between the mainland NEM and Tasmania as two network sources connected to each sub-network. However, since both network sources are indexed and placed as Tasmania elements, we assign the generator (and the generator bus), representing Tasmania in the mainland, to the Victoria area.
Figure <ref> shows how the total count and capacity of generators across different states are compared between the NEM and the S-NEM2300.
§.§ Generation Fuel Category Classification
In an effort to classify synthetic generators by fuel category, a data-driven classification approach was taken. Within this approach, the registered capacities of generators from the GA major power stations list <cit.> were used to train a scikit-learn Random Forest classification model <cit.> to predict the fuel type of a given generator. Given that in the reference dataset some fuel types are over-represented compared to others, we oversampled the under-represented classes by generating samples based on nearest neighbours using the Synthetic Minority Over-sampling Technique (SMOTE) <cit.>. After balancing the classes with SMOTE, the trained classification model was used with the maximum generation capacity of the synthetic generators within the S-NEM2000 dataset to predict the fuel type of the generators in each state separately.
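A minimal sketch of this classification step, assuming the reference and synthetic capacities have already been extracted into arrays (the hyper-parameters shown, e.g. 200 trees, are illustrative defaults rather than the values used to produce the released data; note also that SMOTE requires a handful of samples per class to generate neighbours):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from imblearn.over_sampling import SMOTE

def classify_fuel_types(ref_capacity_mw, ref_fuel_type, syn_capacity_mw, seed=0):
    """Train on the reference generators (capacity -> fuel type) and predict the
    fuel category of the synthetic generators of one state."""
    X_ref = np.asarray(ref_capacity_mw, float).reshape(-1, 1)
    y_ref = np.asarray(ref_fuel_type)

    # Balance the under-represented fuel categories before training.
    X_bal, y_bal = SMOTE(random_state=seed).fit_resample(X_ref, y_ref)

    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(X_bal, y_bal)

    X_syn = np.asarray(syn_capacity_mw, float).reshape(-1, 1)
    return clf.predict(X_syn)

# The procedure is run separately for each NEM state, e.g.:
# fuels_qld = classify_fuel_types(ref_cap_qld, ref_fuel_qld, syn_cap_qld)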
To stay true to the S-NEM2300 model, the classifier uses the synthetic synchronous generators list to identify generators with black/brown coal, natural gas, hydro and wind fuel categories and uses the network source list to assign solar generation units[
An important point that is missing from many datasets is the identification of synchronous condensers and other var compensator units. The S-NEM2300 model does not mention var compensators as part of the generator categories, but close observation of the data reveals that the network sources contain several generation units with zero or very small active and substantial reactive power contribution to the network. One could assume these units are var compensator units and assign the remainder as solar generation units.
].
Nonetheless, we pre-allocate the network source representing Basslink in the NEM mainland as a hydro generator, due to hydro being the main fuel category in Tasmania, and similarly the network source representing Basslink in Tasmania as a natural-gas-fueled generator, due to natural gas being the only fossil fuel present in the Tasmania fuel categories[These two network sources can be removed in future work when HVDC lines are added to the network.].
Figure <ref> shows a comparison of the generator capacities categorized by fuel type between the NEM and the S-NEM2000. The quantile comparison shows that the classification has rendered a reasonable representation of generator capacities for each fuel type.
To further assess the performance of classification, the predicted generation type was used to aggregate the generation capacity and the count of the synthetic generators in each state to compare to the aggregate generation capacity and count of generators within the NEM <cit.>. These aggregate comparisons can be seen in Figures <ref> and <ref>.
These figures show how the total generation capacity and count of each state compare between the classified synthetic model and the reference data. The aggregate generation capacity and count categories in both network models follow a similar pattern, while the synthetic network has a lower total capacity level. The generation shortfall arises because the reference data is an updated list of all existing generators in the NEM, while the S-NEM2300 model was developed from a PSS/E model of the network representing a 2018 summer day. The synthetic model was developed to replicate that specific moment in time, which caused the exclusion of several generation units that were not in operation at the time. It is therefore realistic to see a lower generation capacity level in the synthetic network model.
More observations from the Figures <ref> and <ref> are as follows:
* The total generation capacity and generators count for the whole NEM, classified by fuel type, are often lower in the S-NEM2000, except for solar which is due to consideration of all `other' generation units such as var compensators and battery storage.
* Tasmania generation capacity and count are represented quite well.
* Queensland is represented reasonably well in terms of generator count and capacity.
* Generation capacity and count in New South Wales, Victoria and South Australia have been represented fairly reasonably for most fuel types and less accurately for some others.
Overall, the classification demonstrates a reasonable representation of fuel types in each state and across the whole NEM. One could also further modify generator capacities to match the actual aggregated value to be a closer representation of the actual system.
§.§ Generation Cost and Operation Data
After generator fuel type classification we use AEMO's Fuel and Technology Cost Review <cit.> to assign cost and operational models for different generators. Table <ref> reports the main parameters identified from the reference file and assigned to the fuel categories.
§ PROOF OF CONCEPT STUDY
In addition to the integrated NEM data set, we divide it into two sub-networks: 1) the mainland and 2) Tasmania.
In this section we run optimal power flow on S-NEM2000 and its mainland and Tasmania sub-networks for AC power flow and its relaxations, and compare objective values and runtimes, similar to how the results in <cit.> are presented. We use PowerModels.jl version 0.19.8 and the interior point solver Ipopt <cit.> with the MUMPS linear solver to solve the optimal power flow problems. The computation is performed on a PC with a 6-core Intel(R) Core(TM) i7 @ 2.60 GHz processor with 32 GB of RAM.
§ CONCLUSION AND FUTURE WORK
In this study, we utilized an existing synthetic representation of the NEM and developed the S-NEM2000 benchmark for power system optimization studies. By converting the synthetic network data into the PowerModels.jl and MATPOWER data models, we enabled feasibility and validation checks through power flow and optimal power flow studies. The resulting benchmark model includes enhanced features such as thermal limits and generation fuel type and cost models, providing a valuable resource for optimization studies in the power system domain.
We release the whole-of-NEM data under the Creative Commons `CC BY 4.0'[<https://creativecommons.org/licenses/by/4.0/>] license at <https://github.com/csiro-energy-systems/Synthetic-NEM-2000bus-Data>. The authors also intend to publish this data set in the Power Grid Library Optimal Power Flow benchmarks.
However, it is important to note that the synthetic model used in this study is not an exact replica of the actual NEM, and there is still room for improvement. Future work should focus on incorporating additional components, such as High-Voltage Direct Current (HVDC) lines, which play a significant role in interconnecting regions within the NEM. The inclusion of HVDC lines would enable more accurate simulations and analyses of power flow and optimization scenarios.
Furthermore, expanding the benchmark model to include detailed load and generation profiles is another important avenue for future work. Incorporating realistic and time-varying data on load and generation patterns would provide a more comprehensive representation of the NEM and enable more accurate assessments of system performance and optimization strategies. Additionally, for the data-driven generation classification approach, using additional generator properties, for example voltage, could improve the classification results.
The ongoing development and growth of this work will involve continuous refinement and expansion of the S-NEM2000 model, incorporating additional components and data sources to enhance its representation of the real-world system. These advancements will contribute to furthering our understanding of the NEM and supporting effective decision-making in the context of power system optimization and planning.
§ ACKNOWLEDGEMENT
We would like to express our gratitude to Dr. Felipe Arraño-Vargas and Dr. Georgios Konstantinou who released the original data used in this study, and for their ongoing support and advice on gaining insights from the data. Their invaluable contribution and commitment to open data have facilitated our research and enabled us to develop the S-NEM2000 benchmark for power system optimization studies.
We extend our appreciation to AEMO and GeoScience Australia for providing the publicly available generation datasets that formed the basis of our analysis. Their efforts in collecting and maintaining comprehensive data on the NEM have been instrumental in enhancing our understanding of the power system domain and played a crucial role in assigning appropriate properties to the synthetic generators in our model and ensuring its fidelity to the NEM.
|
http://arxiv.org/abs/2306.05786v1
|
20230609095718
|
Two-level histograms for dealing with outliers and heavy tail distributions
|
[
"Marc Boullé"
] |
cs.LG
|
[
"cs.LG"
] |
Orange Labs - 22300 Lannion - France
Two-level histograms for dealing with outliers
and heavy tail distributions
Marc Boullé
July 31, 2023
===========================================================================
Histograms are among the most popular methods used in exploratory analysis to summarize univariate distributions.
In particular, irregular histograms are good non-parametric density estimators that require very few parameters: the number of bins with their lengths and frequencies.
Many approaches have been proposed in the literature to infer these parameters, either assuming hypotheses about the underlying data distributions or exploiting a model selection approach.
In this paper, we focus on the G-Enum histogram method, which exploits the Minimum Description Length (MDL) principle to build histograms without any user parameter and achieves state-of-the-art performance w.r.t. accuracy, parsimony and computation time.
We investigate the limits of this method in the case of outliers or heavy-tailed distributions.
We suggest a two-level heuristic to deal with such cases. The first level exploits a logarithmic transformation of the data to split the data set into a list of data subsets with a controlled range of values. The second level builds a sub-histogram for each data subset and aggregates them to obtain a complete histogram.
Extensive experiments show the benefits of the approach.
§ INTRODUCTION
Histograms are among the most popular methods used in exploratory analysis to summarize univariate distributions.
Regular histograms are the simplest flavor of histograms to represent a distribution: all bins are of the same width and the only parameter to select is the number of bins. While they are suited to roughly uniform distributions <cit.>, they fail to capture the density of more complex distributions.
Irregular histograms are non-parametric piecewise constant density estimators that require very few parameters: the number of bins with their widths and frequencies.
Several irregular histogram methods have been proposed in the literature, but they often require user-defined parameters, such as the number of bins or the accuracy ϵ at which the data is
to be approximated. For example, the minimum description length (MDL) histogram methods <cit.> automatically choose the number of bins and their widths, but these widths need to be multiples of a user-defined parameter ϵ.
In the context of exploratory analysis, the choice of this parameter is not an easy task, and fully automatic histogram methods are preferable.
Several automatic irregular histogram methods have been proposed in the literature, such as the taut string methods based on penalized likelihood <cit.>, the Bayesian blocks histograms based on Bayesian regularization <cit.> or the G-Enum method <cit.> based on the MDL approach.
In a comparison between several regular and irregular histogram methods, the G-Enum method achieves state-of-the-art accuracy for density estimation while being much more scalable than its closest competitors <cit.>. It is also among the most parsimonious methods, with far fewer intervals than the most accurate alternative methods, which is an essential feature for exploratory analysis when interpretability is an issue.
These properties being in line with our main objective in this paper, we focus on this method.
The G-Enum method extends the MDL method <cit.> with an automatic choice of ϵ, a fast to compute closed-form evaluation criterion and scalable efficient optimization heuristics.
Its modeling space is described on the basis of ϵ-length elementary bins, where each histogram bin consists of a subset of adjacent ϵ-length bins.
A granularity parameter is exploited to automatically select the ϵ parameter.
Together with efficient linearithmic optimization heuristic, this granulated MDL criterion provides a resilient, efficient and fully automated approach to histogram density estimation.
Nevertheless, this method reaches its limits in the case of outliers or heavy-tailed distributions.
We suggest a two-level heuristic to deal with such cases. The first level exploits a logarithmic transformation of the data to split the data set into a list of data subsets with a controlled range of values. The second level builds a sub-histogram for each data subset and aggregates them to obtain a complete histogram.
The rest of the paper is organized as follows.
We briefly recall the G-Enum method in Section <ref>.
We illustrate the limit of histogram methods in the case of outliers and discuss possible solutions to push these limits in Section <ref>.
We suggest a two-level approach for building histograms in Section <ref>,
and analyze its properties in Section <ref>.
We perform extensive experiments with artificial data sets in Section <ref>.
Finally, we suggest future work in Section <ref> and give a summary in Section <ref>.
§ G-ENUM METHOD: SUMMARY
This section is a brief reminder of the G-Enum method <cit.>.
§.§ Problem formulation
We consider a sample of n observations x^n = (x_1,...,x_n) on the interval [x_min, x_max]. Let ϵ be the approximation accuracy, so that each x_j ∈ x^n can be approximated by a value in 𝒳={ x_min + tϵ ; t = 0,... , E} where E = L/ϵ and L = x_max - x_min is the `domain length' of the data. We expect to have E ∈ℕ.
Let 𝒞 be the set of possible endpoints for sub-intervals as
𝒞 = {c_t=x_min - ϵ/2 + tϵ ; t = 0,… , E }
These endpoints define E elementary bins of length ϵ, which are called ϵ-bins. They are the building blocks of histogram intervals: each combination of ϵ-bins into K intervals, with K ranging from 1 to E, defines a histogram model. In this range of possibilities, the goal is to select a set of K-1 endpoint C=(c_1,...,c_K-1), c_k ∈𝒞 such that [c_0, c_E]=[x_min - ϵ/2, x_max + ϵ/2] is partitioned into K intervals { [c_0, c_1],]c_1, c_2], ..., ]c_K-1,c_E]} that are well-suited to the actual data distribution. Each interval k has a data count of h_k entries and a length L_k=c_k - c_k-1, which is a multiple of ϵ:
∀ k, ∃ E_k ∈ℕ such that L_k = E_k ·ϵ
A histogram model is entirely defined by the choice of the number of intervals, the set of endpoints that define them and their data counts. We thus note a histogram model ℳ = (K, C, {h_k}_1 ≤ k ≤ K).
The relevance of each model can be measured through different types of MDL criteria, for example using an enumerative criterion.
§.§.§ Granularity and choice of ϵ
To get rid of the user parameter ϵ, a new method parameter is introduced, that will automatically be inferred.
Let G be the granularity parameter.
For a given E, the numerical domain is split into G bins (1 ≤ G ≤ E) of equal width.
In practice, the constant E=10^9 is used, which is both close to the limits of the representation of machine integers and allows to obtain very accurate histograms, with an accuracy of up to one billionth of the value domain.
Each of these new elementary bins, that are called g-bins, is composed of g = E/G ϵ-bins. Each of the intervals of any histogram constructed has then a length that is a multiple of these g-bins. In other words, each interval is no longer composed of a multiple of ϵ-bins but rather composed of G_k g-bins.
This new criterion, which is called G-Enum is still very similar to the MDL-based enumerative criterion Enum for histograms, as shown in table <ref>.
§.§ Enum and G-Enum criteria for histogram models
Table <ref> recalls the Enum criterion for histogram models and its granulated extension G-Enum.
The log^*K and log^*G prior terms encode the choice of the number of intervals and of the granularity parameter. They exploit Rissanen's universal prior for integers <cit.>, that favors small integers, i.e. simpler histograms.
The log of the binomial coefficient (G+K-1 choose K-1) encodes the boundaries of the intervals at the granularity precision.
The multinomial terms are used to encode the multinomial distribution of the n instances on the K intervals.
They rely on an enumerative criterion with appealing optimality properties <cit.>.
The ∑^K_k=1 h_k log G_k + n logE/G term encodes the position of the h_k instances of each interval on the E_k = G_k E/G elementary ϵ-bins of the interval.
§.§ Optimization algorithms
For additive criteria such as Enum, a dynamic programming algorithm can be applied to obtain the optimal solution. However, its computational complexity is cubic w.r.t. the size of the data, which makes it impractical in the case of large data sets.
The G-Enum method exploits a greedy bottom-up optimization heuristic followed by post-optimization steps that mainly consist in adding, removing, or moving endpoints around the locally optimal solution.
Experiments in <cit.> show that the accuracy of histograms optimized using these heuristics is indistinguishable from those using the optimal algorithm, while the computational complexity is O(n log n) instead of O(n^3).
§.§ Experimental results
We summarize below the results of the comparative experiments performed to evaluate the G-Enum method <cit.>.
The comparison includes the following irregular and regular histogram methods:
* G-Enum, the method summarized in this section,
* histograms <cit.>,
* histograms <cit.>,
* <cit.>,
* Sturges rule histograms,
* rule histograms <cit.>.
They are evaluated on artificial datasets with known distributions: Normal, Cauchy, Uniform, Triangle, Triangle mixture and Gaussian mixture.
The methods are compared on three criteria: parsimony using the number of intervals, accuracy evaluated with the Hellinger distance, and computation time.
The analysis of the experimental results shows that the G-Enum method achieves state-of-the-art accuracy while being much more parsimonious than and faster than its closest competitors.
As summarized in <cit.>: "Although rarely the best for each distribution type, [G-Enum] histograms are consistently among the best estimators, and this without the high variability of the other methods. Focusing on irregular histograms, [G-Enum] is certainly among the most parsimonious in number of intervals. For exploratory analysis, this is an important quality because it makes the interpretation of the results easier and more reliable. [G-Enum] is also by far the fastest of irregular methods, making it suitable to large data sets."
§ LIMITS OF HISTOGRAM METHODS W.R.T. OUTLIERS
We first give an illustrative example of the limits of the G-Enum method in the case of outliers, and then discuss possible solutions to push these limits.
§.§ Illustrative example
Let us consider a data set containing n=10,000 data entries distributed according to a Gaussian distribution G(μ=0, σ=1). The range of the numerical domain is L = (x_max-x_min).
As σ=1, we have L ≤ 10 with high probability.
The range of the numerical domain at ϵ accuracy is E=L/ϵ.
Let us recall that we have chosen E=10^9 to be compliant with the computer representation of integers using four bytes. As a matter of fact, computer integers lie in the value domain ]-2^31; 2^31[, with 2^31 ≈ 2×10^9.
Using the E=10^9 precision parameter, the bounds of the histogram intervals are very precise, and the underlying distribution can be very well approximated as the number of data entries n increases.
Let us now assume that we have an outlier data entry in our data set, with value x_out=10^12. The range of the value domain becomes L≈ 10^12 and using the same precision parameter E=10^9 amounts to setting ϵ≈ 1000. With this ϵ parameter, the optimal histogram reduces to a histogram with two intervals, consisting of a first interval of width E_1=1 that contains all the n initial Gaussian data entries in a bin of width 1000, and a second interval of width E_2=E-1 containing the outlier data entry. The quality of the histogram becomes very poor as the whole data set except one outlier is summarized using one single interval.
Let us note that, to the best of our knowledge, this problem is likely to occur with most alternative histogram methods.
In the following we investigate solutions to push these limits.
§.§ Possible solutions to push the limits of the method
We suggest three possible solutions to push the limits of the method and summarize their potential benefits and drawbacks.
§.§.§ Use of long integers
One computer-based solution consists in using long integers instead of standard integers for the choice of our precision parameter E.
We could then extend the precision parameter to E=10^18 and be compliant with the computer representation of long integers using eight bytes, in the value domain ]-2^63; 2^63[, with 2^63 ≈ 9×10^18.
Unfortunately, this solution is not likely to work well.
First, it extends the outlier limits by "only" nine additional orders of magnitude.
Second, this long int based choice of E raises critical numerical issues in the optimization algorithm.
For example, let us assume that we have an interval i with length E_i ≫ 1 and frequency h_i. Let us consider the merge of this interval with a singleton interval j of width E_j=1 and frequency h_j=1.
The likelihood part C_w() of the histogram cost criterion related to the width of the intervals is
C_w(i) = h_i log E_i,
C_w(j) = 0,
C_w(i ∪ j) = h_i log (E_i+1).
The variation of cost δ C_w is then
δ C_w = C_w(i ∪ j)-C_w(i)-C_w(j),
= h_i (log (E_i+1) - log(E_i)),
= h_i (log E_i(1+1/E_i) - log(E_i)),
= h_i log (1+1/E_i),
≈ h_i/E_i.
On a computer, real values are stored using a floating-point representation with a mantissa of up to 15 digits, corresponding to a relative machine precision of about 2×10^-16. Two distinct values are treated as equal if their relative difference is lower than this machine precision.
Back to our optimization algorithm, for h_i≈ 1 and E_i ≈ E, we get δ C_w ≈ 10^-18, which is rounded to 0 in floating-point arithmetic. Therefore, finding the best merge of intervals may be impossible in some tricky cases.
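The effect is easy to reproduce with IEEE-754 double precision arithmetic; a small Python check (the concrete values are only illustrative):

import math

# With 64-bit floats, a criterion variation of order 1e-18 is absorbed by rounding.
E_i, h_i = 1e18, 1.0
delta = h_i * (math.log(E_i + 1.0) - math.log(E_i))
print(delta)                # 0.0: the two candidate merges get exactly the same cost
print(1.0 + 1e-18 == 1.0)   # True: 1e-18 is far below the machine epsilon (~2.2e-16)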
§.§.§ Extension to hierarchical histogram models
One solution to cope with outliers consists in extending the G-Enum method to a hierarchical model.
A histogram consists in a set of adjacent intervals, whereas a hierarchical histogram consists in a tree of intervals, where:
* each leaf node is an interval,
* each intermediate node can be seen both as an interval, union of its children intervals, and as a histogram, set of its children intervals,
* the root node represents the whole value domain.
Such a hierarchical histogram could potentially cope with outliers.
For example, using the data set described in Section <ref>, we could have one root node with three children nodes; the first one for all the Gaussian data entries, the second one with an empty interval and the last one with the outlier. Then the first node could be divided again so as to produce a standard histogram focused on the Gaussian data entries, without any outlier issue.
This possible solution looks appealing, but its implementation may encounter several problems:
* devising an effective prior for hierarchical models is not an easy task,
* optimizing hierarchical models is known to be difficult, with little hope of achieving optimality efficiently,
* the optimization algorithm may face numerical problems, since many models to be compared may have almost the same cost.
§.§.§ Exploitation of the properties of floating-point representation
Let us first summarize how real values are encoded on computers using a floating-point representation. Computer real values are stored on 8 bytes and thus encoded using 64 bits:
* 1 bit for the sign: -1 or +1,
* 11 bits for the exponent: magnitudes ranging between about 10^-308 and 10^308,
* 52 bits for the mantissa: about 15 digits, for mantissa in interval [1;10[.
Whereas mathematical real values that belong to ℝ are continuous and unbounded, computer real values are discrete in essence and bounded. They belong to a finite set ℝ^(cr) (where (cr) stand for computer representation). The set ℝ^(cr) contains 2^64 ≈ 1.8.10^19 distinct values that belong to
the finite numerical domain [-10^308;-10^-308] ∪{0}∪ [10^-308;10^308].
Let us note that all computer real values have an approximately constant relative precision related to the mantissa, but an absolute precision that exponentially increases around the value 0. There are more than 600 orders of magnitude of difference of absolute precision between the largest and the smallest computer real values. In other terms, mathematical real values have translation-invariant
density properties all over ℝ (like in the case of fixed-point representation values). Conversely, the density of floating-point representation values in ℝ^(cr) is heavily peaked around the value 0: it increases exponentially for x → 0 until reaching the underflow regime and decreases exponentially for x →∞ until reaching the overflow regime.
Histograms where the widths of intervals are multiples of ϵ-bins rely on a constant absolute precision, and they cannot cope well with outliers.
We suggest investigating the properties of floating-point representation to extend the G-Enum method. This is detailed in the next section.
§ TWO-LEVEL METHOD FOR HISTOGRAMS
The principle of the method is to build a histogram directly from a data set only if the result is likely to be of sufficient quality. Otherwise, the data set is split into data subsets and a global histogram is obtained by aggregating the sub histograms built from each data subset.
Note that contrary to the hierarchical models suggested in Section <ref>, the method outputs a single global histogram, not a hierarchy of histograms.
In this section, we first introduce a quality criterion based on the notion of well conditioned data set for histograms.
We then present a log-transformation method that can be applied to any data set and will be used to effectively split the data set into data subsets.
We also suggest a way to get around the limits of floating-point representation.
We finally detail the two-level method that exploits the quality criterion and the split heuristic.
§.§ Well conditioned data sets for histograms
Let us introduce the notion of well conditioned data sets for histograms.
A data set 𝒟 is well conditioned for histograms (WCH) of ϵ-bin length E if all its data entries with distinct values can be separated in different intervals. Otherwise, a data set is said ill conditioned for histograms (ICH).
If a data set is well conditioned, histograms can be build without any risk of loss of numerical precision.
To investigate this notion, let us first define some characteristics of data sets.
The range of a data set 𝒟 is defined as rng(𝒟) = max_𝒟 x - min_𝒟 x, that is the difference between it maximum and minimum values.
The precision of a data set 𝒟 is defined as pr(𝒟) = min_𝒟, δ x>0δ x, that is, the minimum difference between two successive distinct values.
The granular length of a data set 𝒟 is defined as gr(𝒟) = rng(𝒟)/pr(𝒟).
The following results are trivial and given without proof.
A data set 𝒟 is ill conditioned for histograms if its precision is smaller than the ϵ-bin length of the histogram, or if its granular length is larger than the number E of ϵ-bins.
More formally, we have:
* 𝒟 is ICH ⇔ pr(𝒟) < ϵ,
* 𝒟 is ICH ⇔ gr(𝒟) > E.
The WCH (resp. ICH) property of a data set 𝒟⊂ℝ is invariant under any linear transformation of the data entries of 𝒟.
Let us now define the notion of histogram collision in a data set 𝒟 as the case where two data entries with distinct values fall in the same ϵ-bin of a histogram.
It is noteworthy that the focus is on being able to separate data entries with distinct values, not to separate any data entries that may share the same value.
A data set is ill conditioned for histograms if its number of collisions is greater than or equal to 1.
Evaluating the risk of loss of precision while building a histogram from a data set relates to evaluating its ICH property. This can be done in O(n log n) either by computing the range and precision of the data set or alternatively by counting its number of collisions.
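A minimal sketch of these characteristics and of the resulting ICH test, assuming a one-dimensional sample stored in a numpy array:

import numpy as np

def dataset_characteristics(x):
    """Range, precision and granular length of a 1D sample (definitions above)."""
    v = np.unique(np.asarray(x, dtype=float))           # sorted distinct values, O(n log n)
    rng = v[-1] - v[0]
    pr = np.min(np.diff(v)) if len(v) > 1 else np.inf   # smallest gap between distinct values
    gr = rng / pr if np.isfinite(pr) and pr > 0 else 0.0
    return rng, pr, gr

def is_ill_conditioned(x, E=10**9):
    """ICH test: the distinct values cannot all be separated by E epsilon-bins."""
    rng, pr, gr = dataset_characteristics(x)
    return gr > E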
We can notice that the range and precision of a data set are characteristics that are related to extreme value statistics and that they are likely to exhibit a very large variance.
Inspired by robust statistics, we suggest a stronger condition for the ICH property, with a threshold t_c for a minimum number of collisions.
Having t_c collisions in the data set means that t_c data entries fall in a set of colliding bins, each one containing at least two data entries with distinct values.
This covers two extreme cases: a flat one where each colliding bin contains only two data entries and a peaked one with one single colliding bin containing t_c data entries.
We choose to exploit a condition for the ICH property based on the peaked case with 1 < t_c ≪ n because it is likely to require larger data sets to trigger the condition while minimizing the potential loss of accuracy w.r.t. interval bounds.
Let us introduce another threshold t_E as the number of ϵ-bins in a histogram used to evaluate the ICH property. Although t_E=E seems a natural choice, let us recall that the choice E=10^9 is not driven by a required accuracy of one billionth. In fact, the G-Enum method optimizes the granularity of histograms that rely on G bins, 1 ≤ G ≤ E, and convergence is expected as E →∞. The value E=10^9 was then chosen to be as large as possible within the computer numerical limits. We hope that the optimal granularity can be found for G ≪ E to avoid potential instabilities around the point of convergence.
In the end, we choose the thresholds t_c = log n, t_E = √(E) log E and introduce the PICH criterion in Definition <ref>.
A data set 𝒟 of size n is practically ill conditioned for histograms (PICH) built upon E elementary ϵ-bins if at least one colliding bin within a granularized histogram with G=√(E) log E bins contains more than log n data entries. Otherwise, the data set is practically well conditioned for histograms (PWCH).
This heuristic PICH criterion is designed to push the limits of the method's applicability.
In practice, the thresholds t_c and t_E have been chosen to jointly optimize a set of competing criteria, which are summarized below.
* automation
* a parameter-less criterion is important so that data scientists can actually spend more time on the business problem at hand,
* theoretical optimality
* although E →∞ should be considered, E is set to 10^9 which is close to the computer numerical limits,
* t_E is as small as possible to avoid potential instability during the optimization of the granularity in the G-Enum method,
* accuracy
* t_c is as small as possible to minimize the potential loss of accuracy for the interval bounds,
* t_E is as large as possible to allow accurate granularities,
* scalability
* using large enough t_c and t_E thresholds, the PICH criterion should be conservative enough to avoid triggering advanced heuristics too often in the case of ill conditioned data sets,
* although optimal algorithms look appealing, only heuristics with at most super-linear time complexity can be used in the case of large real world data sets.
The ICH property and the PICH criterion are further investigated in Section <ref>, with a sensitivity analysis w.r.t. the t_E threshold.
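A direct transcription of the PICH test of Definition <ref> is sketched below, assuming natural logarithms for the t_c = log n threshold and a regular grid of G = √(E) log E bins spanning the value domain:

import numpy as np

def is_pich(x, E=10**9):
    """Return True if at least one bin of the G-bin grid receives more than log(n)
    entries with at least two distinct values (a "colliding" bin)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if n < 2 or x.max() == x.min():
        return False
    G = int(np.sqrt(E) * np.log(E))
    t_c = np.log(n)

    # Assign every entry to one of the G equal-width bins on [min(x), max(x)].
    bins = np.minimum((G * (x - x.min()) / (x.max() - x.min())).astype(int), G - 1)
    order = np.argsort(bins, kind="stable")
    xs, bs = x[order], bins[order]

    start = 0
    for end in range(1, n + 1):            # scan groups of equal bin index, O(n log n) overall
        if end == n or bs[end] != bs[start]:
            chunk = xs[start:end]
            if (end - start) > t_c and len(np.unique(chunk)) >= 2:
                return True
            start = end
    return False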
§.§ Log-transformation of computer real numbers
Let us first introduce a new function log^(cr), that extends the standard log function to any negative, null or positive computer real value:
log^(cr)(x) = - δ^(cr) - (log(-x) - log x_min^(cr)), ∀ x ∈ℝ_-^*(cr),
log^(cr)(0) = 0,
log^(cr)(x) = + δ^(cr) + log x - log x_min^(cr), ∀ x ∈ℝ_+^*(cr),
where x_min^(cr) ≈ 10^-308 denotes the smallest positive computer real value and δ^(cr) is a small positive shift that keeps the transformation strictly monotonous across 0, analogous to the shift used in the data-set-adapted version below.
For x=mant × 10^exp, we have log x= log (mant) + exp ×log 10.
We now evaluate the bounds of the set log(ℝ^(cr)) obtained after the log-transformation of ℝ^(cr):
sup(log(ℝ^(cr))) ≈ log 10^-15 + log 10^308 - log 10^-308 ≈ 600 × log 10,
inf(log(ℝ^(cr))) = - sup(log(ℝ^(cr))).
Whereas the values of log(ℝ^(cr)) exploit the mantissa with the same limits as in ℝ^(cr), the exponents in log(ℝ^(cr)) are bounded by around 3, that is about one hundredth of the related bound in ℝ^(cr) (since 308 ≈ 3 × 100).
The set log(ℝ^(cr)) thus contains about 10^17 distinct values, approximately 100 times fewer than its superset ℝ^(cr).
Conversely, the log values have approximately the same absolute precision (15 digits) and are almost uniformly distributed on the numerical domain [-600 ×log 10;600 ×log 10].
To summarize, this log-transformation provides a monotonous transformation of the initial values in ℝ^(cr) to log values in log (ℝ^(cr)), with an almost constant density on a smaller value domain.
Despite the decrease of size compared to the ℝ^(cr), we suggest that these properties of log (ℝ^(cr)) are particularly suitable for histograms, which assume a piecewise constant density per interval.
Let us notice that for a histogram built on ℝ^(cr), each ϵ-bin represents a subset with a constant maximum absolute difference of values. Conversely, on log(ℝ^(cr)), each ϵ-bin represents a subset with a constant maximum relative difference of values.
We thus expect the log(ℝ^(cr)) space to be well suited for dividing a data set with a wide range of values into data subsets with limited range of values.
In the case of a data set 𝒟 to analyze, we suggest to adapt the log^(cr) function in order to reduce the range of values and to avoid the potential gaps around the value 0:
log_𝒟^(cr)(x) = - min_(𝒟_-^*, δ x > 0) δ log(-x) - (log(-x) - log min_(𝒟_-^*)(-x)), ∀ x ∈𝒟_-^*,
log_𝒟^(cr)(0) = 0,
log_𝒟^(cr)(x) = + min_(𝒟_+^*, δ x > 0) δ log x + log x - log min_(𝒟_+^*) x, ∀ x ∈𝒟_+^*,
where min_(𝒟_±^*, δ x > 0) δ log(·) denotes the smallest positive difference between the log values of two successive data entries of the corresponding sign, and min_(𝒟_±^*)(·) the data entry of smallest magnitude with that sign.
This is illustrated in Figure <ref> in the case of a data set of size n=1000 drawn from a Gaussian distribution G(μ=0, σ=1).
Mainly, the log-transformation exploits the opposite of the function log -x for the negative values and the function log x for the positive values, and shifts them to achieve a smooth monotonous transformation of all the values.
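A minimal sketch of this data-set-adapted transformation, where the per-sign shift is taken as the smallest positive gap between distinct log values (the fallback shift of 1.0 when a sign has a single distinct value is an assumption for robustness, not part of the method description):

import numpy as np

def _min_positive_gap(logs):
    distinct = np.unique(logs)
    if len(distinct) < 2:
        return 1.0                             # arbitrary positive shift if a single value
    return np.min(np.diff(distinct))

def log_cr_dataset(x):
    """Data-set-adapted signed log transform log_D^(cr): negative, zero and positive
    entries are mapped monotonously, with a small shift so that the smallest-magnitude
    entries land just on either side of 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)

    for sign in (+1.0, -1.0):
        mask = (sign * x) > 0
        if not np.any(mask):
            continue
        logs = np.log(sign * x[mask])          # log of the magnitudes
        shift = _min_positive_gap(logs)        # smallest positive log gap (the delta shift)
        out[mask] = sign * (shift + logs - logs.min())
    return out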
§.§ Dealing with the limits of floating-point representation
Whereas the PWCH criterion makes it possible to cope with data sets with a very large range of values, new numerical limits can be encountered for data sets with a very small range.
As an example, let us take a data set 𝒟 with a range of 1, rng(𝒟) = max_𝒟 x - min_𝒟 x=1.
Let us assume that this data set is PWCH, so that the G-Enum method is able to separate all its data entries using E=10^9 ϵ-bins.
If min_𝒟 x=1 and max_𝒟 x=2, the 15 digits of the mantissa of computer real values (cf. Section <ref>) allow to encode the boundaries of the ϵ-bins with an excellent precision.
If min_𝒟 x=1,000,000,000 and max_𝒟 x=1,000,000,001, 10 digits are necessary to encode the boundaries of the data set, and only 5 digits remain available to encode the boundaries of the 10^9 ϵ-bins, which is not feasible.
A more critical limit is that the computer real values no longer behave as continuous values within the range of this data set, as only 10^5 distinct values can be encoded.
The "discrete" limit of computer real values is reached, and there is an important risk that the G-Enum method will treat this data set as a discrete one even if it comes from a continuous data distribution.
We suggest getting around this numerical limit by estimating the number of distinct values n_d that can be encoded within the range of a data set and by exploiting an accuracy parameter E small enough to get on average at least t_n=100 distinct values per ϵ-bin.
Let us first focus on the case where 0 ∉ [min_𝒟 x; max_𝒟 x], for example 0 < min_𝒟 x < max_𝒟 x.
The total number of distinct positive values of ℝ^(cr) that can be encoded between 10^-308 and 10^308 is 2^64/2 ≈ 9×10^18 (cf. Section <ref>).
Assuming that the density is almost constant in log (ℝ^(cr)) (cf. Section <ref>), a raw approximation of the total number of distinct values
in [min_𝒟 x;max_𝒟 x] is
if 0 < min_𝒟 x < max_𝒟 x, n_d([min_𝒟 x;max_𝒟 x]) ≈ 2^63 (log (max_𝒟 x) - log (min_𝒟 x)) / (log 10^308 - log 10^-308),
if min_𝒟 x < max_𝒟 x< 0, n_d([min_𝒟 x;max_𝒟 x]) ≈ n_d([-max_𝒟 x;-min_𝒟 x]),
if min_𝒟 x < 0 < max_𝒟 x, n_d([min_𝒟 x;max_𝒟 x]) ≈ n_d([min_𝒟 x;-10^-308]) + n_d([10^-308, max_𝒟 x]).
If n_d([min_𝒟 x; max_𝒟 x])/E < t_n, the average number of distinct values that can be encoded per ϵ-bin is below the threshold, and we replace E=10^9
by E = ⌈ n_d([min_𝒟 x; max_𝒟 x]) / t_n ⌉ to get ϵ-bins with enough distinct values per bin on average and keep a smooth continuous behavior of computer real values.
Note that when this numerical limit is reached, the separability of the values cannot be improved by splitting it into subsets, and we will consider the related data set as PWCH.
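A minimal sketch of this adjustment, assuming the double-precision bounds of about 10^±308 and the default thresholds E = 10^9 and t_n = 100:

import numpy as np

LOG_CR_MAX = np.log(1e308)    # ~ largest positive computer real
LOG_CR_MIN = np.log(1e-308)   # ~ smallest positive computer real

def n_distinct_reals(lo, hi):
    """Rough count of distinct 64-bit floats in [lo, hi], assuming an almost
    constant density on the log scale (lo < hi)."""
    if lo > 0:
        return 2**63 * (np.log(hi) - np.log(lo)) / (LOG_CR_MAX - LOG_CR_MIN)
    if hi < 0:
        return n_distinct_reals(-hi, -lo)
    # interval contains 0: split at the smallest representable magnitudes
    left = n_distinct_reals(lo, -1e-308) if lo < 0 else 0.0
    right = n_distinct_reals(1e-308, hi) if hi > 0 else 0.0
    return left + right

def adjusted_E(lo, hi, E=10**9, t_n=100):
    """Shrink E when fewer than t_n distinct floats are representable per epsilon-bin."""
    n_d = n_distinct_reals(lo, hi)
    return E if n_d / E >= t_n else max(1, int(np.ceil(n_d / t_n)))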
§.§ Two-level heuristic
If a data set 𝒟 is practically well conditioned for histograms (PWCH), no significant loss of numerical precision is to be feared and we can compute a standard histogram.
Conversely, if it is PICH, we propose in Algorithm <ref> a two level heuristic that exploits the log_𝒟^(cr) function.
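Although Algorithm <ref> itself is not reproduced here, its overall structure as described above can be sketched as follows; the helper functions are placeholders for the base G-Enum method, the PICH test, the log transformation and the splitting of the log scale into data subsets, and the representation of a histogram as a list of (lower bound, upper bound, count) triples is an assumption for illustration:

def two_level_histogram(x, build_g_enum_histogram, is_pich, log_cr_dataset, split_log_bins):
    """First level: if the data set is PICH, log-transform it and split it into subsets
    with a controlled range of values. Second level: build one sub-histogram per subset
    and aggregate them (back in the original scale) into a single global histogram."""
    if not is_pich(x):
        return build_g_enum_histogram(x)                  # well conditioned: one shot

    log_x = log_cr_dataset(x)                             # first level: log scale
    subsets = split_log_bins(x, log_x)                    # contiguous subsets of limited range
    sub_histograms = [build_g_enum_histogram(s) for s in subsets]   # second level
    return aggregate(sub_histograms)

def aggregate(sub_histograms):
    """Concatenate the interval lists of adjacent sub-histograms into one histogram."""
    intervals = []
    for h in sub_histograms:
        intervals.extend(h)          # each sub-histogram is assumed to be a list of
    return intervals                 # (lower_bound, upper_bound, count) triples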
|
http://arxiv.org/abs/2306.11225v1
|
20230620012807
|
Imprinting spiral Higgs waves onto superconductors with vortex beams
|
[
"Takeshi Mizushima",
"Masahiro Sato"
] |
cond-mat.supr-con
|
[
"cond-mat.supr-con"
] | |
http://arxiv.org/abs/2306.09508v1
|
20230615210455
|
Complementarity of $B\to K^{(*)} μ\bar μ$ and $B\to K^{(*)} + \mathrm{inv}$ for searches of GeV-scale Higgs-like scalars
|
[
"Maksym Ovchynnikov",
"Michael A. Schmidt",
"Thomas Schwetz"
] |
hep-ph
|
[
"hep-ph",
"hep-ex"
] |
CPPC-2023-02
Complementarity of B→ K^(*)μμ̅ and B→ K^(*) + inv
for searches of GeV-scale Higgs-like scalars
Maksym Ovchynnikov^1,
Michael A. Schmidt^2,
Thomas Schwetz^1
^1 Institut für Astroteilchen Physik, Karlsruher Institut für Technologie (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany
^2 Sydney Consortium for Particle Physics and Cosmology, School of Physics, The University of New South Wales, Sydney, NSW 2052, Australia
E-mail:
[email protected], [email protected], [email protected]
The rare decays B^+→ K^+ μμ̅ and B^0→ K^*0μμ̅ provide the strongest constraints on the mixing of a light scalar with the Higgs boson for GeV-scale masses. The constraints sensitively depend on the branching ratio to muons. Additional decay channels like an invisible partial width may substantially weaken the constraints. This scenario will be probed at Belle II in B→ K^(*) + inv. We illustrate the complementarity of scalar decays to muons and invisible decays using the currently available results of LHCb and BaBar. We provide two simple model realisations providing a sizeable invisible scalar width, one based on a real scalar and one based on a U(1)_B-L gauge symmetry. In both examples the scalar decays into heavy neutral leptons which can be motivated by the seesaw mechanism for neutrino masses.
§ INTRODUCTION
Light GeV-scale Higgs-like scalars occur in several well-motivated extensions of the Standard Model (SM), in which a SM gauge group singlet scalar field φ couples to the Higgs via a cubic or quartic interaction. Examples include a mediator between a dark sector and the SM <cit.>, a light inflaton <cit.>, the pseudo-Goldstone bosons associated with the breaking of scale-invariance in a classically scale-invariant model <cit.>, or light scalars to explain different low-energy anomalies, see e.g. <cit.>. The light scalar could be the scalar breaking B-L symmetry <cit.>, which has been studied in e.g. <cit.>.
The phenomenology of GeV-scale Higgs-like scalars has been recently studied in <cit.> (see also the review <cit.>). Below the scale of the electroweak symmetry breaking, the interaction of the scalars with the SM particles may be generically described by the two independent interactions: the mixing with the Higgs boson h (parameterised by the mixing angle θ), and the trilinear coupling h which gives rise to an invisible Higgs decay. The mixing coupling makes the scalar unstable: it may decay into a pair of leptons or into hadrons.
The most stringent constraints on Higgs-like scalars with masses 0.3 GeV≲ m_φ<m_B-m_K (see e.g. <cit.>)
come from the LHCb displaced vertex search for B→ K φ(→μμ̅) <cit.> which constrains the scalar mixing down to θ≃ 10^-4 depending on the scalar mass.
The corresponding searches for long-lived particles decaying to a pair of light leptons, ee̅, μμ̅, or light mesons ππ, KK <cit.> are currently less sensitive.
However, these constraints are subject to the assumption that the scalar dominantly decays visibly into SM particles which may not hold in general (if additional couplings apart from the mixing exist). Thus it is crucial to also consider searches for invisible decays of the scalar.
The B factories BaBar, Belle and Belle II searched for B→ K^(*) +inv and reported upper limits on the different decay channels which are listed in Tab. <ref>.
Belle II is expected to measure all four decay channels including the polarisation fraction F_L of the K^* <cit.>, which may be used to search for additional invisible final states like heavy neutral leptons (HNLs) <cit.> or light invisibly-decaying scalars <cit.>, which may constitute dark matter.
In fact, the Belle II collaboration already published their first analysis <cit.>.
A simple weighted average indicates the branching ratio Br(B^+→ K^+ +inv) = (1.1± 0.4)× 10^-5, which is 1.4σ in excess over the SM prediction.
The complementarity of displaced and invisible searches has also recently been highlighted for Belle II.
Ref. <cit.> studied a Higgs-like scalar decaying to dark sector particles. It stressed the benefit of searching for displaced pairs of leptons or light mesons in the Belle II detector. They conclude that Belle II is able to probe mixing angles down to 10^-5.
Ref. <cit.> studied axion-like particles. While displaced vertex searches are currently more sensitive for masses above the muon threshold, invisible decay searches are more sensitive for lighter masses.
In this paper, we study the bounds on the light scalar mixing with the SM Higgs under the assumption of a sizeable invisible width of the scalar Γ_ inv. We show how the limits from B→ Kμμ̅ become weaker if the scalar invisible width is increased. At the same time bounds from B→ K +inv become relevant and for sufficiently large values of Γ_ inv they will dominate the bound on the scalar/Higgs mixing. We illustrate this complementarity by using current LHCb bounds on B→ Kμμ̅ <cit.> and the BaBar bound on Br(B^+→ K^+ +inv) <cit.>. We will give also two simple examples for light new physics, where the required values for Γ_ inv can be achieved by letting the scalar decay into HNLs.
The paper is structured as follows. In Sec. <ref> we introduce the couplings of the scalar and the relevant decay modes. In Sec. <ref> we provide an overview of searches for scalars at B factories. The complementarity of B→ K μμ̅ and B→ K +inv is presented as the main result in Sec. <ref>. In Sec. <ref> we provide a few examples for models with a sizeable invisible decay width, before concluding in Sec. <ref>. Technical details are collected in the appendix.
§ PHENOMENOLOGY OF GEV-SCALE HIGGS-LIKE SCALARS IN A NUTSHELL
We consider a light real scalar field[It may be the real component of a complex scalar field like in the gauged B-L extension of the SM with GeV-scale mass m_φ.] φ, which mixes with the Higgs with mixing angle θ. In the minimal scenario, θ and the scalar mass m_φ control every observable such as the scalar lifetime τ_S∝ f(m_φ)θ^-2 and the partial branching ratios.
Because of the mixing, the structure of the scalar interaction with SM particles is similar to those for the Higgs, but the mixing angle suppresses the couplings. This way, φ couples to all SM fermions at tree level proportional to their respective masses as (m_f sinθ /v). These tree-level interactions generate effective couplings to other SM particles such as gluons, photons, and the bound states such as nucleons <cit.>. They differ from those of the Higgs boson not only by θ but also by mass which determines the scale associated with the couplings.
Thus, the scalar decays to kinematically-accessible SM lepton pairs with partial decay width[We assume small scalar mixing angle sinθ≪ 1 and thus use cosθ≈ 1.]
Γ(φ→ ff̅)= √(2)G_F m_f^2 m_φsin^2θ/8π(1-4m_f^2/m_φ^2)^3/2 .
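For orientation, this partial width is straightforward to evaluate numerically; the following Python sketch uses standard values of the constants, and the 1 GeV mass and sinθ = 10^-4 in the example are purely illustrative (the full lifetime in Fig. <ref> requires summing all open channels):

import numpy as np

G_F = 1.1663787e-5      # Fermi constant [GeV^-2]
HBAR = 6.582119569e-25  # [GeV s]
M_MU = 0.1056583745     # muon mass [GeV]

def gamma_phi_to_ff(m_phi, m_f, sin_theta):
    """Partial width of phi -> f fbar (in GeV) from the equation above; 0 below threshold."""
    if m_phi <= 2.0 * m_f:
        return 0.0
    beta = np.sqrt(1.0 - 4.0 * m_f**2 / m_phi**2)
    return np.sqrt(2.0) * G_F * m_f**2 * m_phi * sin_theta**2 / (8.0 * np.pi) * beta**3

# Example: a 1 GeV scalar with sin(theta) = 1e-4 decaying to muons
width = gamma_phi_to_ff(1.0, M_MU, 1e-4)
print(width, "GeV; inverse partial width ~", HBAR / width, "s")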
Decays into hadrons are more complicated. Their description depends on the scalar mass which sets a characteristic energy scale of the process. For m_φ≫Λ_QCD = 𝒪(1 GeV), they may be described inclusively by the decay of the scalar into a quark-antiquark pair; the corresponding decay width is given by Eq. (<ref>) with the additional colour factor N_c=3.
For lower masses, m_φ≃Λ_QCD, the perturbative QCD description breaks down, and hadronic decays must be treated exclusively, i.e., into different mesons.
The lightest possible hadronic decay is into a pair of pions, φ→ππ. It may be described using chiral perturbation theory, which, however, quickly becomes unreliable just above the decay threshold due to strong interactions of the pions <cit.>. Alternatively, the calculation of the decay form-factor may be performed using the method of dispersion relations, see <cit.> for an overview. It was realised (see, e.g., <cit.>) that the approach suffers from theoretical uncertainties that may significantly change the results. Recently, <cit.> has calculated the decay width using experimental data for the gravitational pion form-factors with an uncertainty of about a factor 𝒪(2).
Other hadronic decays of φ, e.g., into a pair of kaons and multihadronic states, also suffer from similar problems. In particular, there is no clear way to describe the transition between the exclusive and inclusive approaches. Refs. <cit.> match the two regimes at different scalar masses – 2 GeV and 1.3 GeV, respectively. Motivated by this issue, in Fig. <ref> we show the lifetime and partial branching ratios of φ following the descriptions from <cit.> and <cit.>, interpreting the difference between them as the uncertainty in the decay width.
Depending on the scalar mass, the difference between the widths may be as large as an order of magnitude. We will comment on the impact of this uncertainty on searches for scalars considered in this work in Sec. <ref>.
Let us now discuss the scalar production channels. Depending on the facility, the main channels are <cit.> proton bremsstrahlung or decays of various particles: 2- and 3-body decays of kaons and B mesons B→φ + other, or the 2-body decay of the Higgs boson h→φφ.
The production channel of main interest in this work is the decay of B mesons into a scalar and a strange meson:
B→φ+X_s, X_s = K,K^*,K_0^*,K_1(1270),K_1(1400),K_2^*,…
The decay vertex originates from a flavour-changing neutral current coming from weak 1-loop contributions <cit.> (see also Appendix <ref> for details).
In the limit of small scalar masses m_φ≪ m_B and sin^2θ≪ 1, the total exclusive branching ratio is <cit.> (see also Appendix <ref>)
∑_X_sBr(B→ X_s+) ≈ 3.3 sin^2θ
Decays into K consist of only around 10% of this number:
Br(B→ K+φ)≈ 0.4 sin^2θ, m_φ≪ m_B
Despite this small branching ratio, this channel is attractive from the experimental point of view. The main reason is that, unlike the heavier resonances (K^*, K^*_0, K_1,...), K is stable on experimental scales, so the kinematic reconstruction is simple: it only requires reconstructing the kaon itself. The other resonances are short-lived and decay into a kaon plus other particles such as pions and photons, which require additional signal selection and reconstruction.
Because of this, searches for new physics via the B→ Kμμ̅ channel typically give the strongest constraint.
§ OVERVIEW OF SEARCHES FOR SCALARS AT B FACTORIES
Two types of searches at B factories – BaBar, Belle/Belle II, LHCb – may be applied to dark scalars: B→ K+visible, or B→ K+invisible.
The first type corresponds to the production B→ K+, with subsequently decaying into visible particles within the detector. In general, such decays may be →μμ̅, →ππ, → KK, as well as decays into jets – quarks and gluons. The number of events for this signature (without taking into account further selection and reconstruction) is proportional to
BR(B^+→ K^+ φ) BR(φ→visible) [1- P(r_ det|βγ)] ,
where P(r_ det|βγ)=exp[-r_ det/(βγ cτ_φ)] denotes the probability to decay outside of a detector of size r_ det[For LHCb, r_ det=0.6 m <cit.>.], and BR(φ→visible) is the branching ratio of φ decays into visible particles. The scalar is produced on-shell and thus the process can be considered as a series of 2-body decays with
Γ(B→ K^(*)φ (→ ff̅)) = ∫_0^∞dq^2/2π[Γ(B→ K^(*)φ)2m_φΓ(φ→ ff̅) ]_m_φ^2→ q^2/(q^2-m_φ^2)^2 + m_φ^2Γ^2_φ
Γ_φ≪ m_φ⟶Γ(B→ K^(*)φ) BR(φ→ ff̅)
where q^2 denotes the 4-momentum squared of the scalar and the narrow width approximation has been employed in the last line, which is a good approximation for the relevant parameter space. As a result, the invariant mass distribution of the visible particles (if they are fully reconstructed) would have a peak at m_ inv = m_ with a width due to the finite resolution of the 4-momenta reconstruction, which may be used to reduce the background efficiently.
From Fig. <ref>, we conclude that the most probable decay of a GeV-scale scalar is into hadrons, in particular into a pair of pions, kaons, or jets such as two gluons. The decay into muons is significantly suppressed. However, the latter may give the cleanest decay due to better reconstruction capabilities for muons and lower backgrounds.
For the minimal scalar model with only two parameters m_ and θ, the most stringent constraints on a GeV-scale Higgs-like scalar come from LHCb <cit.>. Namely, Ref. <cit.> searched for displaced vertices in the decays B^+→ K^+ φ (→μμ̅) <cit.>, while the work <cit.> considered B^0 → K^*0φ(→μμ̅).
Belle II <cit.> also searched for B^+→ K^+ φ(→μμ̅).
The strongest constraint is placed by <cit.>, which we thus use in the following analysis.
It placed constraints on the scalar mixing angle as a function of the scalar mass m_φ and the lifetime τ_φ down to sinθ≲ 10^-4.
We extracted the constraint on BR(B^+→ K^+ φ)BR(φ→μμ̅) as a function of the scalar mass m_φ and lifetime τ_φ from Fig. 4 in <cit.> which is model-independent. In Fig. <ref> (left) we use this constraint to set a limit on the scalar mixing, where we assume the scalar decay width as in <cit.> but including NLO corrections to decay widths into quarks and gluons.
The gaps at several masses are caused by the exclusions in the searched mass range due to the contribution of the SM resonances K^0_S, J/ψ, ψ(2S), and ψ(3770). Interestingly, the lower bound of the excluded region does not depend on the total decay width of the scalar, being determined only by Γ(φ→μμ̅); in particular, there is no drop of the sensitivity at m_φ≃ 1 GeV as observed in Fig. <ref> for Br(φ→μμ̅). The reason is that scalars with sinθ close to the lower bound are very long-lived, with a characteristic decay length cτ_φ⟨βγ_φ⟩ greatly exceeding the geometric size of the detector. As a result, the decay probability in (<ref>) behaves as 1- P(r_ det|βγ) ∝τ_φ^-1, and the product Br(φ→μμ̅)τ_φ^-1 is just Γ(φ→μμ̅). Indeed, the scalar's proper lifetime in the vicinity of 1 GeV and sinθ∼ 10^-4 is τ_φ≳𝒪(100) ps, which corresponds to the “extremely displaced region” of the LHCb search, τ≫ 10 ps. Because of this, in the minimal scalar model the lower bound does not depend on the description of the hadronic decays of the scalar (cf. Fig. <ref>).
The situation may be different if the total scalar decay width has a contribution from the mixing θ (given by the decay width Γ^(θ)_φ) as well as another contribution, for instance an invisible decay width Γ_inv:
Γ_φ,tot=Γ^(θ)_φ +Γ_ inv .
If the latter is sufficiently large to be comparable with Γ^(θ)_φ, the lifetime becomes small enough such that all scalars decay inside the detector, see Fig. <ref> (right).
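To make the interplay between Γ^(θ)_φ, Γ_inv and the decay position concrete, the short Python sketch below evaluates the total width, the proper decay length and the probability to decay inside a detector of size r_det; the visible width, boost factor and detector size used here are placeholders chosen only for illustration.

import math

# Sketch of Gamma_tot = Gamma^(theta) + Gamma_inv and of the probability
# P = 1 - exp(-r_det / (beta*gamma*c*tau)) to decay inside the detector.
# The numbers below are placeholders, not values extracted from the searches.
HBAR_C = 1.973269804e-16   # GeV*m, so that c*tau [m] = HBAR_C / Gamma_tot [GeV]

def decay_inside_probability(gamma_vis, gamma_inv, beta_gamma, r_det):
    gamma_tot = gamma_vis + gamma_inv
    c_tau = HBAR_C / gamma_tot            # proper decay length in metres
    prob = 1.0 - math.exp(-r_det / (beta_gamma * c_tau))
    return c_tau, prob

gamma_vis = 1.0e-15                        # placeholder mixing-induced width [GeV]
for gamma_inv in (0.0, 1.0e-12, 1.0e-9):   # zero, roughly 1 meV and roughly 1 eV
    c_tau, prob = decay_inside_probability(gamma_vis, gamma_inv, beta_gamma=20.0, r_det=0.6)
    print(f"Gamma_inv = {gamma_inv:.0e} GeV: c*tau = {c_tau:.2e} m, P(decay inside) = {prob:.2e}")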
The second type of searches, B→ K+inv, corresponds to the scenario where φ is not detected. This may happen if φ decays into invisible particles (such as neutrinos or hypothetical feebly-interacting particles (FIPs) that leave the detector) or if it is too long-lived and escapes the detector before decaying. Therefore, the number of events scales as
BR(B^+→ K^+ φ) [BR(φ→inv) + P(r_ det|βγ) BR(φ→ ff̅)] .
BaBar, Belle, and Belle II already searched for B→ K+inv <cit.>.
In the minimal scalar model, all scalar decays into, e.g., ee̅, μμ̅, ππ, KK, etc., would induce visible activity in the detector. Therefore, the first summand is effectively zero. As for the second contribution, due to the scaling τ_φ∝θ^-2, scalars need to have small θ to be long-lived enough to escape the detector; otherwise, the probability is exponentially suppressed. The latter implies a suppression of the production branching ratio, making “invisible” events with scalars very rare. Therefore, the second type of search is not very efficient.
However, the situation may drastically change if the scalar can decay into invisible particles, such that the invisible decay width Γ_ inv becomes comparable with the widths into visible decay states. Similarly to the decays into visible particles, the missing invariant mass distribution would be peaked at m_φ. This is especially useful since, in addition to the constraint on the branching ratio Br(B→ K+ inv), BaBar <cit.> provides the distribution in the missing squared invariant mass q^2. For 2-body decays that feature a narrow resonance in the q^2 distribution, almost all events are contained within one bin. This is illustrated in Fig. <ref>, which shows the number of events in each bin from a scalar with mixing angle sinθ=3× 10^-3 and invisible decay width Γ_ inv=10 eV for different scalar masses m_φ. This will be used to extract constraints on the scalar mixing angle θ in Sec. <ref>.
§ COMPLEMENTARITY OF B→ Kμμ̅ AND B→ K+INV
As argued in the previous section, the simple picture changes for scalars with a sizeable invisible decay width.
Decays to invisible final states leave a signal in B→ K +inv.
We recast the BaBar search for B^+ → K^+ +inv <cit.>, which is the most sensitive search channel among the different B→ K +inv decays. It provides the data in ten equally spaced bins of the q^2 distribution, as reproduced in Fig. <ref> from Fig. 5 in <cit.>. The BaBar data is statistics-limited, with systematic errors at the percent level. We validated our analysis by reproducing the rescaled SM prediction and the upper bound for BR(B^+→ K^+ νν̅) of the BaBar analysis <cit.> within 20% (19% including systematic errors).[Our upper limit is lower compared to the one reported in <cit.>. Improving the analysis would require generating events to include other cuts, such as the one on the E_ extra variable which quantifies additional energy deposits. Moreover, the precision is also limited by the achievable precision from manually reading off the data from Figs. 5 and 6 in <cit.>.] As the systematic errors are negligible compared to the statistical errors and the precision of the final result, we neglect them in the following statistical analysis. Following <cit.> we assume the events in each bin are distributed following a Poisson distribution Poisson(k|λ)=λ^k e^-λ/k!
and thus the likelihood is
ℒ = ∏_iPoisson(N_i| ϵ_i s_i N_BB̅+ b_i )
with the signal efficiency ϵ_i ∼𝒪(10^-3), extracted from Fig. 6 in <cit.>, the total number of BB̅ events N_BB̅=4.71× 10^8, and the expected background events in each bin, b_i.
The estimates for the background events are separated into peaking background events with a correctly reconstructed tagged event, estimated from Monte Carlo simulations, and combinatorial background from continuum events and incorrectly reconstructed events, which has been extrapolated from data <cit.>.
Finally, the branching ratio for signal events in each bin is given by
s_i = τ_B ∫_ bin idq^2 ( dΓ_ SM(B^+→ K^+νν̅)/dq^2 + dΓ(B^+→ K^+ φ(→inv))/dq^2 ) .
The first term denotes the SM contribution, see <cit.>. As the decay width of the scalar is much smaller than its mass for the relevant parameter space, we employ the narrow width approximation[The detector resolution in q^2 is 𝒪(1%) <cit.>, and thus the broadening due to the finite width Γ_φ is negligible.] and find
dΓ(B^+→ K^+ φ(→inv))/dq^2 = δ(q^2-m_φ^2) Γ(B^+→ K^+φ) ( Γ_ inv/Γ_φ, tot + P(r_ det|βγ) Γ_φ^(θ)/Γ_φ, tot) .
The detector size is set to r_ det=0.5m following <cit.>.
For a large invisible decay width Γ_ inv≫Γ_φ^(θ), the second term is negligible and the differential decay width is proportional to sin^2θ, while for a small invisible decay width the total decay width Γ_φ, tot≈Γ_φ^(θ)∝sin^2θ, and thus the sin^2θ dependence cancels in the differential decay width. Hence, the constraint from B^+→ K^++inv can be interpreted as a constraint on Γ_ inv for small invisible decay widths and on sinθ for large invisible decay widths.
Without performing any statistical analysis, from the few number of observed events, the total number of BB̅ mesons and the efficiency, we expect to be sensitive to branching ratios BR(B^+→ K^+φ)∼𝒪(10^-5).
From the likelihood function, the corresponding χ^2 function is given by
χ^2 = -2lnℒ = ∑_i f(N_i|ϵ_i s_i N_BB̅+b_i^ peak + b_i^ comb)
with f(n|ν)=2ν -2 nlnν +2 ln(n!). Minimising the χ^2 function with respect to the scalar mixing angle sinθ for fixed scalar mass m_φ and invisible decay width Γ_ inv, we derive an upper limit on sinθ at 90% CL (Δχ^2=2.7).
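The following Python sketch reproduces the logic of this limit-setting procedure for a narrow resonance feeding a single q^2 bin. All bin contents, backgrounds and efficiencies are made-up placeholders standing in for the numbers read off the BaBar figures, so the printed limit only illustrates the method, not the actual result.

import math

# Bin-wise Poisson chi^2 = -2 ln L and a 90% CL upper limit (Delta chi^2 = 2.7).
# Observed counts, backgrounds and efficiencies below are illustrative placeholders.
N_obs = [3, 1, 0, 2, 1, 0, 0, 1, 0, 0]
bkg   = [2.5, 1.2, 0.8, 1.0, 0.9, 0.5, 0.4, 0.6, 0.3, 0.2]
eps   = [2.0e-3] * 10
N_BB  = 4.71e8

def chi2(signal_br):
    """signal_br[i] is the signal branching ratio feeding bin i."""
    total = 0.0
    for n, b, e, s in zip(N_obs, bkg, eps, signal_br):
        nu = e * s * N_BB + b
        total += 2.0 * nu - 2.0 * n * math.log(nu) + 2.0 * math.lgamma(n + 1)
    return total

def upper_limit_single_bin(bin_index, delta_chi2=2.7, step=1.0e-7, s_max=1.0e-4):
    """Narrow resonance: all signal lands in one q^2 bin; scan its branching ratio."""
    s_values = [i * step for i in range(int(s_max / step) + 1)]
    def c2(s):
        sig = [0.0] * len(N_obs)
        sig[bin_index] = s
        return chi2(sig)
    chi2_vals = [c2(s) for s in s_values]
    i_min = min(range(len(s_values)), key=lambda i: chi2_vals[i])
    for i in range(i_min, len(s_values)):
        if chi2_vals[i] - chi2_vals[i_min] > delta_chi2:
            return s_values[i]
    return None

print(f"90% CL limit on the signal branching ratio in bin 4: {upper_limit_single_bin(4):.2e}")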
The main results are presented in Figs. <ref> and <ref>, which show the constraints on the scalar mixing angle sinθ from the search for B^+→ K^+ φ(→μμ̅) at LHCb <cit.> and invisible decay searches at BaBar <cit.> as a function of the invisible decay width Γ_ inv and the scalar mass m_φ for different benchmark values. The BaBar constraints are shown as solid contours in both figures, while the LHCb constraints are indicated by dashed lines in Fig. <ref> and by shaded regions in Fig. <ref>.
For small invisible decay widths, B^+→ K^+ φ (→μμ̅) places the most stringent constraint on sinθ, while B^+→ K^+ φ(→inv) places the most stringent constraint for large invisible decay widths. The crossover occurs at Γ_ inv∼𝒪(0.1-10) eV, depending on the scalar mass.
The dependence of the LHCb search for B^+→ K^+φ(→μμ̅) <cit.> is straightforward to understand. For Γ_ inv≪Γ_φ^(θ), there is no dependence on the invisible decay width, as can be observed on the left side of Fig. <ref>. The grey line shows Γ_ inv =Γ_φ^(θ) for m_φ=2.2 GeV (orange contours). At the intersection of the dashed orange and grey lines, the LHCb constraint starts to weaken. For Γ_ inv≳ 1 meV, the LHCb constraints are described by lines on a log-log scale in Fig. <ref>, because the scalar decays promptly, while for Γ_ inv≲ 1 meV the constraint depends on the finite scalar lifetime. Similarly, in Fig. <ref> the constraint for Γ_ inv=1 keV can be obtained from the one for Γ_ inv=1 eV by rescaling, while the constraints for Γ_ inv≲ 1 meV feature a non-trivial dependence on the invisible decay width Γ_ inv via the dependence on the scalar lifetime τ_φ. Note that the LHCb result is largely insensitive to the different predictions <cit.> for the branching ratios.
The dependence of the BaBar constraints (solid contours) in Fig. <ref> can be understood from the above argument: for large Γ_ inv≫Γ_φ^(θ), B→ K+inv can be interpreted as a constraint on sinθ, while for small Γ_ inv it has to be interpreted as a constraint on Γ_ inv, which explains the sharp drop in sensitivity for Γ_ inv≲ 1 meV. This can also be observed in Fig. <ref>: the BaBar constraints for Γ_ inv=1 eV and 1 keV agree except for scalar masses close to the kinematic cutoff. There is no sensitivity for Γ_ inv=0 eV, and we observe a strong dependence on the scalar mass for Γ_ inv=1 meV, because the BaBar sensitivity depends on the q^2 bin.
To illustrate the sensitivity of B^+→ K^+ +inv to new physics, we show iso-contours for the branching ratio BR(B^+→ K^+φ) being equal to (10% of) the SM branching ratio BR(B^+→ K^+ νν̅) for a scalar with m_φ=2.2 GeV as dot-dashed (dot-dot-dashed) orange lines in Fig. <ref>.
Finally, depending on the scalar mass, the theoretical uncertainty in the scalar's hadronic decay width affects the lower bounds for the visible and invisible signatures.
Let us first consider the invisible case. In Fig. <ref>, we consider the two descriptions of the scalar's width discussed in Sec. <ref> for the mass m_φ = 0.9 GeV: as in <cit.> (the solid blue line) and <cit.> (the dotted blue line). Also, Fig. <ref> shows the sensitivity from the invisible signature assuming these two descriptions (the left and right panels, correspondingly). In the domain of large Γ_ inv≫Γ^(θ)_φ, where Γ^(θ)_φ is the total width controlled by θ, the dependence of the number of events on the total width disappears. Therefore, the sensitivity does not depend on the uncertainty, as we see for large Γ_ inv. Indeed, in Eq. (<ref>), the first summand in the brackets reduces to 1, while the second summand tends to 0.
Once Γ_ inv decreases, Γ^(θ)_φ becomes essential. In particular, in the limit Γ_ inv≪Γ^(θ)_φ, the number of events scales as (Γ^(θ)_φ)^-1: a larger width means a lower number of events. The width from <cit.> is resonantly enhanced compared to the one from <cit.>, and therefore the sensitivity of the invisible signature is weaker in the former case.
For the visible case, the number of events scales as given in Eq. (<ref>). We illustrate the results for the two different descriptions in Fig. <ref>. As we already discussed, if the invisible width is very small or zero, the LHCb bounds extend to small values of the mixing angle, where the dependence on the total width cancels between the probability of the scalar to decay inside the detector and Br(φ→μμ̅). In the opposite case of large Γ_ inv, the decay probability does not have such a scaling and tends to 1. The dependence on the total width via Br(φ→μμ̅) survives.
Similarly to the invisible signature, a larger Γ^(θ)_φ means weaker bounds. In particular, assuming the description from <cit.> and Γ_ inv = 1 eV, we see that the sensitivity is reduced at m_φ=1 GeV due to the resonant enhancement of the f_0(980) and thus the large visible decay width into SM particles via the scalar mixing. The shape is different when assuming the description as in <cit.>, where no resonant enhancement exists.
§ SM EXTENSIONS WITH A LIGHT SCALAR WITH INVISIBLE WIDTH
There are several possibilities for how the light scalar may decay and escape detection in the LHCb searches for B→ K φ (→μμ̅). An attractive possibility is to couple the scalar field to fermionic dark matter. However, thermal production of this dark matter candidate in the early universe is strongly constrained <cit.>, and production via freeze-in requires small couplings. We thus focus on scalar decays to unstable particles which further decay to lighter SM particles, like heavy neutral leptons (HNLs).
HNLs are well-motivated as an explanation of neutrino masses via the seesaw mechanism <cit.>.
Big bang nucleosynthesis (BBN) places an upper bound on the lifetime of GeV-scale HNLs of τ_N<0.02 s <cit.> and thus a lower bound on the active-sterile mixing angle. Direct searches on the other hand provide an upper bound on the active-sterile mixing angle. Assuming that the HNLs generate neutrino masses, together the two constraints exclude HNL masses below 0.33-0.36 GeV, set by the kinematic threshold of K→πν N, apart from a small window M_N∈ [0.12,0.14] GeV <cit.>.
For the allowed parameter space GeV-scale HNLs escape the detector undetected, see e.g. <cit.>.
§.§ Real scalar coupling to heavy neutral leptons
The simplest viable scenario is to couple a real scalar field to HNLs N_i, which allows to generate the invisible scalar decay width via φ→ NN. The relevant Lagrangian is
ℒ = - 1/2 N^TC (μ_N + y_N φ) N + h.c.
where C is the charge conjugation matrix, M_N = μ_N+ y_N v_ϕ is the HNL mass term and y_N the Yukawa coupling to the scalar φ. Hence, the Yukawa coupling y_N and the HNL mass can be chosen independently.
An important constraint on this scenario comes from SM Higgs decays. In the presence of a quartic Higgs portal interaction λ_Hφ/2 H^† H φ^2, the SM Higgs can decay into a pair of real scalars with the branching ratio
BR(h→φφ) = λ^1/2(m_h^2,m_φ^2,m_φ^2)/32 π√(2)G_F m_h^3 Γ_h |λ_Hφ|^2
with the Källén function λ(x,y,z)=x^2+y^2+z^2-2xy-2xz-2yz.
The upper bound BR(h→inv)≤ 0.18 <cit.> constrains the quartic coupling as |λ_Hφ|≲ 0.01. Similarly, for non-zero vacuum expectation value (VEV) of φ, the quartic Higgs portal interaction contributes to the scalar mixing. Demanding this contribution to be small results in λ_Hφ≪√(2)sinθ m_h^2/vv_φ with the electroweak VEV v^2 = 1/√(2) G_F, i.e. we assume that the mixing is dominated by the cubic term in the Higgs potential.
Even in absence of a quartic Higgs portal interaction, there is a contribution to invisible Higgs decay from h→ NN which translates into an upper bound on the HNL Yukawa coupling
BR(h→ N N) = ∑_i,j|y_Nij|^2 sin^2 θ/8π(1+δ_ij)m_h/Γ_h ⇒ ∑_i,j|y_Nij|^2/1+δ_ij≲ 1.4 (10^-2/sinθ)^2 .
The partial decay width for scalar decay to HNLs is
Γ(φ→ N_iN_j) = m_φ |y_Nij|^2 /8π(1+δ_ij)λ^1/2(1,x_i^2,x_j^2) (1-(x_i+x_j)^2)
with x_i≡ M_Ni/m_φ.
Taking the constraint from invisible Higgs decay into account, we obtain an upper bound for the invisible width for x_i≪ 1
Γ(φ→ NN) ≲ 0.06 (10^-2/sinθ)^2 m_φ .
Hence, there is no strong constraint on the partial decay width Γ(φ→ NN), and a sizeable invisible decay width is allowed for a real scalar coupling to HNLs. As argued above, the lifetime of HNLs is long compared to the size of the detector and thus they escape undetected. For example, using the lifetime of HNLs from <cit.>, we explicitly find for m_N=2.3 GeV a decay length at the seesaw line U^2_seesaw∼ 5· 10^-11 (1 GeV/m_N) <cit.>. Together with the allowed mass range of HNLs, this demonstrates that the whole range of invisible partial decay widths in Fig. <ref> can be obtained in the real singlet model. The model is also able to explain light neutrino masses via the seesaw mechanism <cit.>.
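As a rough numerical cross-check of the estimates above, the sketch below evaluates Γ(φ→ NN) for a single diagonal Yukawa coupling saturating the invisible-Higgs-decay bound; the benchmark masses and mixing angle are illustrative assumptions.

import math

# Gamma(phi -> N N) for a single diagonal Yukawa entry, with |y_N| pushed to the
# rough bound implied by BR(h -> inv) <= 0.18. Benchmark numbers are illustrative.
def kallen(x, y, z):
    return x**2 + y**2 + z**2 - 2.0*x*y - 2.0*x*z - 2.0*y*z

def gamma_phi_to_NN(m_phi, m_N, y_N):
    """Decay into two identical HNLs (delta_ij = 1)."""
    if m_phi <= 2.0 * m_N:
        return 0.0
    x = m_N / m_phi
    return (m_phi * abs(y_N)**2 / (16.0 * math.pi)
            * math.sqrt(kallen(1.0, x**2, x**2)) * (1.0 - 4.0 * x**2))

def y_N_bound(sin_theta):
    """Single diagonal entry: |y|^2 / 2 below roughly 1.4 (1e-2 / sin_theta)^2."""
    return math.sqrt(2.8) * (1e-2 / sin_theta)

sin_theta = 1e-2
y_max = y_N_bound(sin_theta)
width = gamma_phi_to_NN(m_phi=2.2, m_N=0.3, y_N=y_max)
print(f"|y_N| up to {y_max:.2f} => Gamma(phi -> NN) up to {width*1e3:.0f} MeV for sin(theta) = {sin_theta}")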
§.§ B-L gauge symmetry
The real scalar field discussed in the previous subsection has many undetermined parameters, which can be reduced by introducing a symmetry. A well-motivated scenario is the B-L symmetry. After spontaneous breaking of the B-L symmetry, a Majorana mass term for the HNLs is generated, which provides an explanation of active neutrino masses via the seesaw mechanism <cit.>.
A global B-L symmetry is straightforwardly ruled out: spontaneous breaking of the B-L symmetry results in a Majoron, which is efficiently produced via its interactions with the HNLs. It thus contributes to the effective number of neutrinos, N_ eff, and as it only decouples below the scale of the HNLs, its contribution to N_ eff is too large and excluded by BBN. These constraints are avoided by promoting the global B-L symmetry to a gauge symmetry, which has been first proposed in <cit.>.
The relevant interactions for the following discussion are
ℒ⊃ (D_μϕ)^† ( D^μϕ) - λ_Hϕ(|ϕ|^2-v_ϕ^2/2) (H^† H -v^2/2) - 1/2 N^TC y_N ϕ N .
After spontaneous breaking of the B-L gauge symmetry with ⟨ϕ⟩ = v_ϕ/√(2), the HNLs and the Z' gauge boson acquire the masses[Without loss of generality, we use mass eigenstates for the HNLs. The Yukawa coupling matrix y_N is then approximately diagonal, where off-diagonal entries only appear due to small non-zero active-sterile neutrino mixing, which we neglect in the following discussion.] M_Ni= y_Ni v_ϕ/√(2) and m_Z'= 2 g_ BL v_ϕ, with g_ BL the B-L gauge coupling. The interactions of the real scalar φ≡Re(ϕ)/√(2) are by construction proportional to the HNL and Z' masses
ℒ⊃ - 1/2φ N_i^T CM_Ni/v_ϕ N_i + 1/2m_Z'^2/v_ϕ^2φ^2 Z^'_μ Z^'μ + m_Z'^2/v_ϕφ Z^'_μ Z^'μ
which results in tight connections between the different observables.
The partial decay width for decay to HNLs in Eq. (<ref>) can be expressed in terms of the masses and the gauge coupling g_ BL
Γ(φ→ NN) = g_ BL^2 m_φ^3 /2π m_Z'^2∑_i x_i^2 (1-4x_i^2)^3/2
and the decay to a pair of Z' gauge bosons is given by
Γ(φ→ Z'Z') = g_ BL^2 m_φ/8π(1-4z)^1/2(1 -4z +12z^2 )/z
with z=m_Z'^2/m_φ^2. The Z' gauge boson, however, is not stable and further decays to SM particles. Its lifetime is inversely proportional to the square of the gauge coupling, τ_Z'∝ g_ BL^-2 m_Z'^-1. Hence, the Z' gauge boson quickly decays to neutrinos and, depending on its mass, to charged leptons and hadrons. The standard model charged leptons couple to the Z' vectorially, and the branching ratios into charged leptons and active neutrinos can be straightforwardly obtained as
Br(Z' →ℓℓ̅) = g_ BL^2/12πm_Z'/Γ_Z'(1-4y)^1/2(1+2 y) and Br(Z' →∑_iν_i ν̅_i) = g_ BL^2/8πm_Z'/Γ_Z'
with y=m_ℓ^2/m_Z'^2. Hence, a substantial fraction of the Z' gauge bosons decays to visible final states which leave a signal in the detector with multiple leptons and/or hadrons and thus do not contribute to either B^+→ K^+φ(→μμ̅) or B^+→ K^+ φ(→inv).
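For orientation, the following sketch evaluates the leptonic Z' branching ratios from these expressions; it neglects hadronic final states (an assumption that only makes sense for a sufficiently light Z') and uses purely illustrative benchmark values for g_BL and m_Z'.

import math

# Leptonic Z' partial widths and branching ratios from the formulas above.
# Hadronic channels are neglected for simplicity (an assumption); the benchmark
# values of the gauge coupling and Z' mass are illustrative only.
M_E, M_MU, M_TAU = 0.000511, 0.10566, 1.77686   # GeV

def gamma_ll(g_bl, m_zp, m_l):
    if m_zp <= 2.0 * m_l:
        return 0.0
    y = (m_l / m_zp) ** 2
    return g_bl**2 * m_zp / (12.0 * math.pi) * math.sqrt(1.0 - 4.0 * y) * (1.0 + 2.0 * y)

def gamma_nunu(g_bl, m_zp):
    return g_bl**2 * m_zp / (8.0 * math.pi)      # summed over the three active neutrinos

g_bl, m_zp = 1.0e-4, 1.2
partial = {"ee": gamma_ll(g_bl, m_zp, M_E),
           "mumu": gamma_ll(g_bl, m_zp, M_MU),
           "tautau": gamma_ll(g_bl, m_zp, M_TAU),
           "nu nu (summed)": gamma_nunu(g_bl, m_zp)}
total = sum(partial.values())
for channel, width in partial.items():
    print(f"BR(Z' -> {channel}) ~ {width / total:.2f}")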
Variants of the B-L gauge symmetry with non-universal lepton number, see e.g. <cit.>, may forbid Z' decays to light leptons and thus evade any constraints from the cascade decay B^+→ K^+ φ (→ Z' Z'→ℓℓ̅ℓℓ̅). In the following we consider the scenario, where the decay φ→ Z'Z' is kinematically forbidden by choosing m_Z'≥ m_φ/2.
In Fig. <ref> we present the maximum decay width Γ(φ→ NN) as a function of the scalar mass m_φ and the Z' mass m_Z'/g_ BL. For this we consider two degenerate HNLs with mass m_N=max(m_ϕ/√(10),0.3 GeV) which maximises the partial decay width Γ(φ→ NN) and respects the lower bound on the mass of HNLs.
The grey-shaded region is excluded by Z' searches.
It has been extracted from the top-left plot of Fig. 2 in <cit.> (see also <cit.>). The coloured solid contours show Γ_ inv∈ [0.01, 0.1,1 ,10] eV. Together this demonstrates the possibility of a sizeable invisible scalar decay width Γ_ inv≳ 1 eV. We indicate the maximum possible invisible decay width consistent with the constraint on m_ Z'/g_ BL also in Fig. <ref> as stars for the three benchmark masses.
Finally, invisible Higgs decays place a constraint on the gauge B-L model via the quartic Higgs portal H^† H ϕ^†ϕ.
For small mixing and light scalars, m_φ≪ m_h, the Higgs branching ratio to a pair of scalars is given by
BR(h→φφ) = g_ BL^2/8π m_h^3 /m_Z'^2 Γ_hsin^2θ ,
where we expressed the Higgs portal coupling in terms of the scalar mixing angle θ.
Higgs decays to HNLs and Z' gauge bosons are negligible in this case, because they are suppressed by the small masses of the final state particles. The constraint on the invisible Higgs decay branching ratio thus results in an upper bound on sinθ,
sin^2θ≤ 0.93 (m_Z'/GeV)^2 (10^-4/g_ BL)^2 .
It is however weaker than the searches for the scalar in B meson decays and does not pose any additional constraint.
§ CONCLUSIONS
We considered a scalar which mixes with the SM Higgs boson and has an additional invisible decay width Γ_ inv, and scrutinised the constraints on the scalar mixing angle from the search B^+→ K^+ φ (→μμ̅) at LHCb <cit.>. The LHCb search loses sensitivity for invisible decay widths larger than the visible decay width, Γ_ inv≳Γ_φ^(θ). We demonstrated that for scalars with an invisible decay width of Γ_ inv=𝒪(0.1-10) eV, the decay B^+→ K^+ φ(→inv) provides the most stringent constraint on the scalar mixing angle θ, which opens an opportunity for Belle II to discover new physics in B^+→ K^+ +inv.
We provided two explicit models which realise a sizeable invisible decay width. The gauged B-L model is well motivated as an explanation of non-zero neutrino masses. In this model, the scalar breaking B-L gauge symmetry may decay to heavy neutral leptons.
The heavy neutral leptons are long-lived on the time scale of the detector, escape it undetected and thus contribute to the invisible decay width of the B-L scalar.
This scenario is mostly constrained by searches for the Z' gauge boson which limits the invisible decay width to less than 𝒪(30) eV. As those constraints do not apply in the real scalar model, it is possible to obtain an invisible decay width in the MeV range.
Finally, as the phenomenology mainly depends on scalar mixing and the coupling of the scalar to heavy neutral leptons, the conclusions from the B-L model apply at least qualitatively to other gauged U(1)' extensions of the SM,
as long as the scalar spontaneously breaking the U(1)' symmetry can decay to heavy neutral leptons.
§ ACKNOWLEDGEMENTS
MS thanks the Karlsruhe Institute of Technology for their hospitality and acknowledges support by the Australian Research Council Discovery Project DP200101470.
This work was supported by the KIT International Excellence Fellowships Program with funds granted to the University of Excellence concept of the Karlsruhe Institute of Technology.
This work has been supported by the European Union’s Framework Programme for Research and Innovation Horizon 2020 under grant H2020-MSCA-ITN-2019/860881-HIDDeN.
§ RELEVANT DECAY CHANNELS B→ K^(*)φ
At 1 loop, the SM Higgs h has flavour-violating couplings to quarks <cit.>[See <cit.> for a calculation of the effective vertex of the second Higgs in R_ξ gauge.] from electroweak corrections. The dominant contribution originates from top quarks
ℒ_ eff = h/v C_sb m_b s P_R b + h.c. with C_sb = 3√(2) G_F m_t^2 V_ts^* V_tb/16π^2 .
They give rise to various decays B→ X_s+φ, where X_s are mesons containing a strange (s) quark. The matrix element of the process separates into the short-distance part C_sb m_b sinθ /v and the matrix element ℳ_BX_s which describes the long-distance QCD contributions
ℳ_B→φ+X_s = C_sbm_b sinθ/vℳ_BX_s, ℳ_BX_s = ⟨ X_s|s̅P_Rb| B⟩ .
The matrix elements ℳ_BX_s can be expressed in terms of the matrix elements mediating the weak charged current transitions
ℳ_BX_s,V = ⟨ X_s|s̅γ_μb|B⟩, ℳ_BX_s,A = ⟨ X_s|s̅γ_μγ_5b|B⟩ ,
see Appendix F in <cit.> for a detailed discussion how to express the matrix elements for different states X_s (scalar, pseudoscalar, vector, axial-vector, or tensor) in terms of form-factors.
We focus on B decays into K^+, K_S^0 and K^* mesons.
The decay width for B^+→ K^+φ is[The production of a light Higgs from flavoured meson decays has been first considered in <cit.>.]
Γ(B^+→ K^+ φ ) = |C_sb|^2 sin^2θ√(2)/16πG_F (m_B^2-m_K^2)^2 /m_Bm_b^2/(m_b-m_s)^2 |f_0^BK(m_φ^2)|^2 λ^1/2(1,m_K^2/m_B^2,m_φ^2/m_B^2)
with the Källén function λ(x,y,z)=x^2+y^2+z^2-2xy-2xz-2yz, and f_0^ BK is the transition form-factor B→ K.
The decay width for the neutral B meson to K_S^0 is obtained from Eq. (<ref>) by replacing the charged meson masses with neutral meson masses and dividing the partial decay width by two to take into account the overlap of K_S^0 and K^0. The partial decay widths for K^* vector mesons in the final state are given by
Γ(B→ K^* φ ) = |C_sb|^2 sin^2θ√(2)/64π G_F m_B^3 m_b^2/(m_b+m_s)^2 |A_0^BK^*(m_φ^2)|^2λ^3/2(1,m_K^*^2/m_B^2,m_φ^2/m_B^2) ,
where K^* refers either to the charged K^*+ or neutral K^*0. The B→ K^(*) form factors are parameterised by (see e.g. <cit.>)
F(q^2) = ∑_i=0^2 a_i ( z(q^2)-z(0))^i/1-q^2/m_R^2,
z(t) = √(t_+-t)-√(t_+-t_0)/√(t_+-t)+√(t_+-t_0)
with t_± = (m_B ± m_K)^2, t_0 = t_+ (1-√(1-t_-/t_+)). The resonance mass m_R and the coefficients a_i for the two form factors f_0^BK and A_0^BK^* are collected in Tab. <ref>. Due to finite width of the K^*, the partial decay width Γ(B→ K^*(→ Kπ)+inv) is enhanced by 20% <cit.>.
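To make the parameterisation concrete, the snippet below evaluates such a form factor at q^2 = m_φ^2; the expansion coefficients a_i and the resonance mass are placeholders standing in for the tabulated values, chosen only so that the example runs.

import math

# z-expansion of a B -> K form factor, F(q^2) = sum_i a_i (z(q^2)-z(0))^i / (1 - q^2/m_R^2).
# The coefficients a_i and the resonance mass m_R are placeholders for the tabulated values.
M_B, M_K = 5.279, 0.494   # GeV

def z_of(t, t_plus, t_0):
    a, b = math.sqrt(t_plus - t), math.sqrt(t_plus - t_0)
    return (a - b) / (a + b)

def form_factor(q2, a_coeffs, m_res, m_b=M_B, m_k=M_K):
    t_plus = (m_b + m_k) ** 2
    t_minus = (m_b - m_k) ** 2
    t_0 = t_plus * (1.0 - math.sqrt(1.0 - t_minus / t_plus))
    dz = z_of(q2, t_plus, t_0) - z_of(0.0, t_plus, t_0)
    series = sum(a * dz**i for i, a in enumerate(a_coeffs))
    return series / (1.0 - q2 / m_res**2)

m_phi = 2.2
print(f"f_0(q^2 = m_phi^2) ~ {form_factor(m_phi**2, a_coeffs=[0.33, 0.2, 0.02], m_res=5.63):.3f}")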
|
http://arxiv.org/abs/2306.12240v1
|
20230621125929
|
ICAR, a categorical framework to connect vulnerability, threat and asset managements
|
[
"Arnaud Valence"
] |
cs.CR
|
[
"cs.CR",
"math.CT"
] |
Arnaud Valence
ESIEA (Graduate school of digital engineering)

ICAR, a categorical framework to connect vulnerability, threat and asset managements
====================================================================================
We present ICAR, a mathematical framework derived from category theory for representing the cybersecurity ontologies of NIST and MITRE. Designed for cybersecurity, ICAR is a category whose objects are cybersecurity knowledge (weaknesses, vulnerabilities, impacted products, attack techniques, etc.) and whose morphisms are relations between these pieces of knowledge that make sense for cybersecurity. Within this rigorous and unified framework, we obtain a knowledge graph capable of identifying the attack and weakness structures of an IS, at the interface between description logics, database theory and cybersecurity. We then define ten cybersecurity queries to help understand the risks incurred by ISs and organise their defence.
Keywords Vulnerability management, threat management, asset management, CVE, CWE, CAPEC, CVSS, CPE, Category theory
§ INTRODUCTION
When it comes to defending cyber systems, security operations management has long involved separate tasks: vulnerability management, cyber threat management, and asset management. Today, these disciplines are intended to interoperate within a broader framework, supported by public knowledge bases about cyber threats, vulnerabilities, and IT assets. This interoperability opens an integral and integrated research path, at the interface between ontology languages, database theory and cybersecurity, in order to understand how adversaries use vulnerabilities to achieve their goals.
From a general perspective, the research efforts strive to integrate several repositories: the Common Platform Enumeration (CPE) listing IT assets, the Common Vulnerabilities and Exposures (CVE) listing discovered vulnerabilities, the Common Weakness Enumeration (CWE) listing commonly appearing weaknesses, the MITRE ATT&CK framework listing Adversary Tactics and Techniques (ATT) and the Common Attack Pattern Enumeration and Classification (CAPEC) which helps facilitate attack identification and understanding. The latter repository thus acts as a bridge connecting vulnerability management and threat management. On this basis, research work has explored several avenues.
* Some works propose unified ontologies, more or less interoperable, such as Kurniawan et al.<cit.>, preceded in this by partial ontologies such as UCO and SPESES, which do not yet include the CTI incorporated in the ATT&CK (even if they can include other vulnerability repositories, such as the CYBOX, KillChain or STUCCO standards).
* Other research explores the track of domain-specific languages (DSL), and in essence that of the Meta Attack Language (MAL) meta-language. This is the case for Xiong et al.'s EnterpriseLang meta-language <cit.> and Åberg and Sparf's AttackLang meta-language <cit.>.
* A third research direction proposes to deepen the graph visualization aspects of attack paths through a relational representation of threats and vulnerabilities. This is the case of the BRON model of Hemberg et al.<cit.>.
The approach proposed here is a new way to deepen the mathematical aspects of integrated security operations management. This approach combines three advantages in that
(i) like the first approaches mentioned above, it develops a unified vision of vulnerability and threat repositories;
(ii) like the second ones, it articulates vulnerabilities and threats within the framework of a cybersecurity-oriented meta-language, except that, and this is a fundamental point, it is a mathematical meta-language rather than an ontological one[It may be noted that the DSL approach adds an ontological layer to the ontology already at work in the MITRE and NIST repositories.];
(iii) like the third ones, it deepens the study of graph visualization and structural properties of the unified cybersecurity ontology, by borrowing the powerful and rigorous graph-theoretic concepts of category theory.
We believe that category theory can be put to good use by cybersecurity teams. Following the example of a growing number of researchers, involved in more and more diverse fields of knowledge, we believe that the concepts of category theory offer important keys to understanding that simplify and unify the treatment of security operations. We see category theory as the very language of interoperability that enables the integrated management of assets, vulnerabilities, and cyber threats.
The article is organized as follows. The second section discusses the construction of the integrated cybersecurity resource, which will lead to the knowledge graph called ICAR. Based on this, the third section shows how to exploit the knowledge graph to answer different concrete cybersecurity queries. We will see how the categorical concepts allow us to handle bottom-up (from assets to defend to adversaries) as well as top-down (from adversaries to assets) queries. The fourth section concludes.
§ BUILDING ICAR
§.§ Data sources
The data sources are from the knowledge bases provided by the NIST (National Institute of Standards and Technology) and the MITRE Corporation.
* Common Platform Enumeration (CPE) is a way of assigning standardized identifiers to classes of IT assets.
* Common Vulnerabilities and Exposures (CVE) is a knowledge base listing publicly known vulnerabilities. Each CVE entry contains an identification number, a description and at least one reference to publicly known cyber security vulnerabilities. Additional information may include patch information, severity scores and impact assessments according to the Common Vulnerability Scoring System (CVSS), as well as links to exploit information and advisories.
* Common Weakness Enumeration (CWE) is a knowledge base listing software and hardware weaknesses: flaws, features, breaches, bugs, and other errors in the design, architecture or implementation of software and hardware components that, if left unfixed, can make systems and networks vulnerable to attack. CVE entries have a relational link to CWE entries, as an example of a weakness that actually affects a computer system.
* Common Attack Pattern Enumeration and Classification (CAPEC) enumerates and classifies attack patterns to facilitate the identification and understanding of attacks. The attack patterns have a tree structure, i.e. they are organised into categories and sub-categories of attacks. They allow the ATTs to be linked to CWE weaknesses.
* MITRE ATT&CK framework abstractly describes cyber attack techniques organised into twelve sequential tactics. The framework is presented in a matrix format where the columns represent tactics and the rows represent techniques.
These five knowledge bases (or six including CVSS) thus make up an integrated ontological resource for cybersecurity (which we will call ICAR). At this point, it is important to note that this resource only represents the abstract relationships between the data sources. In the language of databases, we would say that it shows the column headings of the primary and secondary keys, but not the column entries themselves.
§.§ Ontologies as knowledge graphs
The integrated ontological resource can be represented more formally as a graph.
A graph G is a sequence G := (V, E, src, tgt), where V and E are sets (respectively the set of vertices and the set of arrows of G), and src,tgt : E → V are functions (respectively the source and target function of G). An arrow e ∈ E with source src(e) = v and target tgt(e) = w is represented as follows:
v → w.
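To fix notation, here is a direct transcription of this definition in Python, with the vertices of the forthcoming cybersecurity graph as an example; the arrow names and directions below are our own illustrative labels for the links described in this section, not an official nomenclature.

# A graph G = (V, E, src, tgt): V and E are sets, src and tgt map arrows to vertices.
V = {"CPE", "CVE", "CVSS", "CWE", "CAPEC", "Technique", "Tactic"}

E = {
    "cve_cpe":     ("CVE", "CPE"),         # a vulnerability affects listed products
    "cve_cvss":    ("CVE", "CVSS"),        # a vulnerability carries a severity score
    "cve_cwe":     ("CVE", "CWE"),         # a vulnerability instantiates a weakness
    "cwe_capec":   ("CWE", "CAPEC"),       # a weakness references attack patterns
    "capec_cwe":   ("CAPEC", "CWE"),       # ...and patterns reference weaknesses back
    "cwe_child":   ("CWE", "CWE"),         # ChildOf loop on weaknesses
    "capec_child": ("CAPEC", "CAPEC"),     # ChildOf loop on attack patterns
    "capec_tech":  ("CAPEC", "Technique"), # patterns point to ATT&CK techniques
    "tech_tactic": ("Technique", "Tactic") # techniques embody tactics
}

def src(e):
    return E[e][0]

def tgt(e):
    return E[e][1]

assert all(src(e) in V and tgt(e) in V for e in E)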
On this basis, it is possible to represent each dictionary (or ontology) by a vertex and each link between dictionaries by an arrow, without forgetting that dictionaries can have internal links. This is the case of CAPEC patterns. For example, the CAPEC-593 pattern (Session Hijacking), linked to the CWE-287 weakness (Improper Authentication) and to several techniques, sub-techniques and MITRE ATT&CK tactics, has itself children (the CAPEC-60, CAPEC-61, CAPEC-102, CAPEC-107 patterns) and is itself linked to the CAPEC-21 pattern (Exploitation of Trusted Identifiers). We must therefore add to the knowledge graph a loop on CAPEC representing the 𝖢𝗁𝗂𝗅𝖽𝖮𝖿 dependency relation. It is also possible to add the dual relation 𝖯𝖺𝗋𝖾𝗇𝗍𝖮𝖿, although redundant, as foreseen by the MITRE corporation. This is also the case for weaknesses. For example, the aforementioned weakness CWE-287 has children CWE-295, CWE-306, CWE-645, CWE-1390, and is itself a child of weakness CWE-284 (Improper Access Control). Finally, it remains to take into consideration the internal structure of the ATT&CK framework, which is broken down into the dictionaries Tactics, Techniques (including sub-techniques) and Procedures. In this article, we will only deal with tactics and techniques. Sub-techniques will be assimilated to techniques of which they are children.
Taking into account these additional specifications, we finally obtain the graph depicted in figure <ref>, faithful to the structure of the asset, attack and weakness ontologies.
Note the complementarity of the CAPEC and Techniques dictionaries in the overall understanding of the threat, beyond their simple logical link. Techniques and (attack) patterns contextualise the threat differently. Patterns are intended to focus on the compromise of applications in order to understand the path taken by adversaries to exploit end-to-end application weaknesses in the information system (IS), while techniques describe the concrete dynamics of an attack scenario executed step by step to compromise the IS (see <cit.> for more details). Thus, technique T1528, which describes the theft of application access tokens in order to obtain credentials for access to remote systems and resources, can be contextualised in two different ways: (i) as a step to legitimise application actions under the guise of an authenticated user or service by obtaining a trusted identifier, hence its belonging to the CAPEC-21 (Exploitation of Trusted Identifiers) pattern; (ii) as a strategic step to steal account names and passwords, hence its belonging to the TA0006 (Credential Access) tactic. Ultimately, the techniques embody attack tactics as the "how" of the attack (where tactics characterise the "why").
§.§ Semantic facts and knowledge schema
We can do better. What the knowledge graph represents are roughly the data tables (the vertices) and the data columns (the arrows). However, there is still some information missing which is not made explicit in the graph: the path equivalences in G.
Let G = (V, E, src, tgt) be a graph. A path of length n in G, denoted p ∈ Path^(n)_G is a sequence
p = (v_0 → v_1 → v_2 →⋯→ v_n)
of arrows in G. In particular, Path^(0)_G = V and Path^(1)_G = E. The set of all paths in G is denoted
Path_G := ⋃_n∈ℕ Path^(n)_G.
Paths may themselves carry higher level information about the knowledge structure. This is the case if constraints are imposed on the paths to translate properties that make sense. These constraints can then be expressed as path equivalences.
Let G = (V, E, src, tgt) be a graph and p,q:b→ c ∈ Path^(n)_G two paths in G. A categorical path equivalence relation in G, or simply a path equivalence in G, is a relation denoted ≃ such that p≃ q only if src(p)=src(q) and tgt(p)=tgt(q). Moreover, if m:a→ b and n:c→ d are two arrows in G, then m and n are respectively an epimorphism (a right-simplifiable morphism) and a monomorphism (a left-simplifiable morphism), i.e. p≃ q if and only if mp≃ mq and pn≃ qn.
Following Spivak <cit.>, we call such path equivalences facts.
There are facts in our study. It is indeed natural to ask for a form of reciprocity in the links between weaknesses and attack patterns. If an attack pattern CAPEC-X exploits a weakness CWE-Y, it is natural that it is part of the patterns referenced by this weakness. We can therefore add a path equivalence in the knowledge structure to obtain the following fact:
() ≃.
for all ∈ and ∈.
Parent/child relations express other facts. It is natural to require that a weakness declaring a child is itself declared as the parent of the child. We therefore have a constraint such as
() ≃.
It is also possible to express path equivalences in the more convenient algebraic form of path equalities, using composition operator (.). The two previous equivalence relations can then be rewritten, for any i ∈{,}
i. = i,
i. = i,
i. = i.
Are there other facts? Could we not ask that the child of a weakness belong to the same attack pattern as its parent, or one of its children? The answer is no. The data structure of the CWE and CAPEC does not have this characteristic. No hybrid facts can be derived from the two previously defined facts.
This negative result can be attributed to the meaning provided by the labelling of arrows. We stress that facts are dependent on the meaning of arrows. They are semantic facts. For example, in a bijective data structure where each parent has exactly one child and a child exactly one parent, there is an equivalence between the path p=(v_0 → v_1 → v_0) ∈ Path^(2)_G and the trivial path p'=v_0 ∈ Path^(0)_G, but this equivalence no longer holds in a data structure with multiple parents and children.
This is why it is not possible to apply Spivak's theory of ologs<cit.>. Ologs are elegant categorical frameworks for rigorously representing knowledge structures exploiting databases, but are limited to structures of functional type. It is inappropriate in this study since the vertices of our knowledge graph generally have several arrows, and may in some cases have none.
On the other hand, the usual theory of oriented (multi)graphs is too broad to capture all the properties present in this study, since we added a path equivalence property. If we add the facts <ref> to the knowledge graph <ref>, we obtain a richer structure called a (knowledge) schema.
A categorical schema S consists of a pair S := (G,≃) where G is a graph and ≃ a path equivalence on G.
In the remainder of this study, we will speak more simply of a schema, in the absence of any risk of confusion.
§.§ More about relation CWE ⇆ CAPEC
It was noted that the CWE and CAPEC dictionaries are linked in both directions. This may seem strange, as a mapping can in principle be read both ways: if the weaknesses correctly refer to the attack patterns, it should be possible to recover the former from the latter.
Actually, this is not always the case. Kanakogi et al.<cit.> report some CAPEC-IDs that are not identified by CWE-IDs that fall within their attack pattern. As a result, some CVE-IDs would not be correctly mapped to their attack pattern(s). The authors give the example of the CVE-2018-18442 vulnerability, which is linked to a weakness due to network packet flooding. However, while there is an attack pattern for this weakness (the CAPEC-125 pattern), the fact is that the vulnerability is also associated with the CWE-20 weakness (incorrect input validation) which, according to the authors, prevents the vulnerability from being linked to the CAPEC-125 pattern, as the latter is not referenced by the CWE-20 weakness. This problem then motivates the authors to link CVE-IDs directly to CAPEC-IDs. Their solution is to use similarity indicators between CVE-IDs and CAPEC-IDs, using machine learning and natural language processing.
In fact, the traceability problem discussed by Kanakogi et al. does not describe an architectural flaw (since weaknesses can list several attack patterns), but reflects the incomplete mapping between dictionaries. From this point of view, the strategy of the authors seems to be good, even if it consists in directly linking dictionaries that are not graphically related. In the end, this direct approach seems to be complementary to ours in that it allows to complete the collection of arrows that will be used to populate the knowledge schema. This remark is also valid for other approaches of direct mapping between dictionaries, like the projects of Grigorescu et al. <cit.>, Kuppa et al. <cit.> or Ampel et al. <cit.>, which aim to link CVE-IDs to MITRE ATT&CK tactics and techniques.
§.§ ICAR as schema instance
The knowledge schema provides an abstract view of the cybersecurity data ontologies, their "skeleton". It represents the structure of the data in the form of a triplet (of vertices, arrows and equivalence relations) in exactly the same way as the attributes of database tables present the n-tuples of the database.
It is now a question of populating the knowledge schema in such a way as to make the knowledge base explicit. This explicitation is in fact an instantiation (a "concretisation") of its schema.
Let S:=(G,≃) a categorical schema where G := (V, E, src, tgt) is a graph. An instance I on S is given by
* a set I(v) for any vertex v∈ V ;
* a function I(e):I(v)→ I(v') for any arrow e:v→ v' ;
* the equality I(p)=I(q) for any path equivalence p≃ q.
In other words, an instance on S is a path equivalence preserving functor F:S→𝖲𝖾𝗍.
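A minimal sketch of such an instance in Python, on a two-arrow fragment of the schema: each vertex is sent to a set of row-IDs and each arrow to a function between those sets. The handful of entries used here is illustrative, not the live dictionaries.

# A toy instance I on a small fragment of the schema: I(v) is a set of row-IDs,
# I(e) a function between those sets (encoded as a dictionary). Entries are illustrative.
schema_arrows = {"cve_cwe": ("CVE", "CWE"), "cwe_capec": ("CWE", "CAPEC")}

I_vertices = {
    "CVE":   {"CVE-2018-18442"},
    "CWE":   {"CWE-20", "CWE-400"},
    "CAPEC": {"CAPEC-125"},
}

I_arrows = {
    "cve_cwe":   {"CVE-2018-18442": "CWE-400"},
    "cwe_capec": {"CWE-400": "CAPEC-125", "CWE-20": "CAPEC-125"},
}

def check_instance(vertices, arrows, schema):
    """I(e) must map row-IDs of the source table into row-IDs of the target table."""
    for e, mapping in arrows.items():
        s, t = schema[e]
        assert set(mapping) <= vertices[s] and set(mapping.values()) <= vertices[t]

check_instance(I_vertices, I_arrows, schema_arrows)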
Among the infinite number of instances that can be generated by C, there is one that interests us the most: the up-to-date resource for cybersecurity ontologies. We call this instance ICAR, for Integrated CAtegorical Resource. To fix ideas, we show in the tables <ref> an extract of ICAR, containing the most salient entries added or updated at the time of writing, among more than 20,000 CPEs, about 176,000 CVEs, 668 CWEs, 559 CAPECs, 193 Techniques and 14 Tactics.
It is difficult not to make a connection with a database schema, as we suggested above. It is indeed possible to see an arrow e∈ E of the graph G of C as a relation linking the table identified by src(e) with a table identified by tgt(e). For example, the arrow CWE → CAPEC expresses that the table CWE points to the table CAPEC, i.e. entries that have a primary key in CWE are related to entries that have a primary key in CAPEC, via the secondary keys found in the CAPEC column of the table CWE.
At this point we can see that the database schema is not in normal form, since the attribute values are not necessarily atomic (so a weakness frequently has several parents and several CAPECs). Strictly speaking, we should decompose the database schema so as to express it in first normal form. In fact, we do not need such a normalization in this study because it would unnecessarily transform the resource ICAR by adding redundancy. We do, however, need a normal form to check the consistency of ICAR. This leads us to a concept of categorical normal form.
A database is said to be in categorical normal form if
* any table t has a single primary key column ID_t fixed at the beginning;
* any entry belonging to a column c∈ t refers to a primary key in a single table t', which is denoted by p_c:t→ t' ;
* any database equivalence between two relations p_c,q_c:t→ t' must be declared as a path equivalence in the corresponding categorical schema, i.e. p_c≃ q_c.
We check that ICAR actually is in categorical normal form. Condition 1 is met because each dictionary has a single primary key column. Condition 2 is assumed to be met by the successive updates of the dictionaries: if a new entry appears in the foreign key columns, it is assumed that it is indexed at the same time in another table as a primary key. There are no unreferenced entries in primary key. On the other hand, it is possible that no foreign key is associated with the entry of a new item as a primary key. This is typically the case when, for instance, an asset affected by a vulnerability has not yet been found, or the weakness corresponding to this vulnerability is still awaiting identification, etc. It is also possible for a primary key column to have no foreign key column. In this case (very common in databases), the table is limited to a single column. This is the case here for the CPE and Tactics tables. In this case, we speak of a leaf column. Condition 3 is respected because it is easy to check that the facts <ref> are translated into relational equivalences in database: the attack patterns declared in the weaknesses declare in turn the declaring weaknesses, and vice versa, and the children declared by the weaknesses or the attack patterns declare in turn their declaring parents.
§ USING ICAR
In this section, we illustrate the applicability of ICAR through several use cases. First of all, we must start by introducing the assets of the IS subject to attack.
§.§ Instantiate ICAR with an IS
Graph <ref> brings together knowledge about vulnerability and threat managements in a single categorical schema. But asset management is still to be considered. Assets are explicitly taken into account by Kiesling et al.<cit.> in the SEPSES knowledge graph. Indeed, we find there the sub-graph . We take up this idea with two differences. Firstly, we consider only a subset of assets. This restriction allows us to refer to a concrete entity to be analysed, i.e. an IS made up of assets inventoried in a database (to be monitored or investigated). This inventory of assets is commonly materialised by a configuration management database (CMDB). Secondly, and by pure convention, we reverse the arrow formalising the dependency between CPEs and assets. This is indeed what CMDBs suggest, which normally provide for each component added to the database as a primary key a foreign key CPE as illustrated in table <ref>.
The CMDB can thus be connected to ICAR via the CPE attribute. It can be noted that this correspondence is surjective (each CPE reference refers to at least one asset in the CMDB) but not necessarily injective, since a CMDB can have several assets with the same CPE[It can also happen that the CPE reference is not entered in the CMDB. Furthermore, there are many "exotic" assets that are not listed in the CPE dictionary]. And finally, it is possible to complete the knowledge schema C of which ICAR is the instance, which is represented in figure <ref>, writing DB_X for the inventory of assets from the CMDB of the IS X.
We therefore have the following Q1 query:
[Q]
Instantiate an inventory of assets DB_X in ICAR.
We start by noting that the instantiation already referred to here is different from the instantiation of the knowledge schema. The idea now is to instantiate an object which already has the database structure (ICAR), in other words to populate ICAR (where ICAR instantiates the knowledge schema as a "concretisation"). In category theory, this notion of instantiation can be approached in many ways. In fact, there are at least two ways of dealing with Q1, either by first "connecting" table to table CPE and then filtering on the assets DB_X ⊂, or by directly connecting DB_X and CPE tables. In the first case, a filtering operation must be added to the asset connection operation. This operation is not trivial in category theory. Moreover, it implies adding ex post the quantitative aspect induced by the potential presence of more assets with same CPE reference. This is why we will apply the second method, which is easier and more direct. The idea of filtering will nevertheless be discussed later in order to answer query Q6.
In practical terms, if we think in terms of database management, the addition of DB_X to ICAR can be understood as a database migration, and more precisely as a database union. This intuition can be translated into the terms of "categorical data". The idea of "migration" finds a natural translation in category theory with the concept of functor. Let S be the (categorical) schema associated with figure <ref> (i.e. devoid of assets) and T the schema associated with figure <ref> (i.e. enriched with an inventory of assets). Following the example of Spivak<cit.>, we can then define a schema morphism (i.e. a functor) F:S → T. Migration functors follow.
Let S and T be two schemas, S-𝖨𝗇𝗌𝗍 and T-𝖨𝗇𝗌𝗍 the instances on S and T respectively, F:S → T a schema morphism, and I∈ T-𝖨𝗇𝗌𝗍 an instance I:T→𝖲𝖾𝗍. Then the composite functor I∘ F: S → T →𝖲𝖾𝗍 lives in S-𝖨𝗇𝗌𝗍 (I∘ F∈ S-𝖨𝗇𝗌𝗍) and we define the functor Δ_F such that
Δ_F : T-𝖨𝗇𝗌𝗍→ S-𝖨𝗇𝗌𝗍
I I ∘ F
as well as the functors Σ_F, Π_F :S-𝖨𝗇𝗌𝗍→ T-𝖨𝗇𝗌𝗍 as adjoint functors of Δ_F, respectively on the left and on the right.
In the language of category theory, Δ_F, Σ_F and Π_F are called pullback[Here, the term "pullback" is understood as "a category of instances assigning a set of row-IDs to a schema element". This definition is related to Grothendieck's construction (and fibration).], left pushforward and right pushforward respectively.
Intuitively, Δ_F can be understood as a projection operator in the sense that data (tables, columns) is duplicated. In contrast, Σ_F is interpreted in terms of unifying tables, and Π_F in terms of joining tables. This difference between left and right pushforwards (between unification and junction) is important. When the tables to be joined have no common key, the merging operation can take place in one of two ways:
* either by adding the rows of the second table to those of the first, which has the effect of creating Skolem variables in the unfilled "foreign" columns (in this case we reason on the sum of the primary key spaces);
* either by multiplying the rows of the second table with those of the first, which has the effect of duplicating the rows of the first table as many times as there are rows in the second (in this case we reason on the product of the primary key spaces).
But the situation is simplified if tables have a common key. In this case, left pushforward and right pushforward are equivalent and there is no duplication of rows or new variables created. This is exactly what happens in our case since asset inventories are supposed to include CPE IDs. Table reconciliation therefore occurs naturally by matching the foreign keys of the inventories with the primary keys of the CPE dictionary.
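In code, this easy case reduces to a key-matching join; the sketch below (with fictitious, simplified CPE strings and asset identifiers) attaches a CMDB to the CPE dictionary without creating Skolem variables or duplicated rows.

# Attaching an asset inventory DB_X to ICAR through the shared CPE key.
# The CPE strings and asset identifiers are simplified, fictitious examples.
cpe_dictionary = {"cpe:o:vendor:os_10", "cpe:a:vendor:ssh_7.4"}

cmdb = {                                   # DB_X : asset id -> CPE foreign key
    "asset-001": "cpe:o:vendor:os_10",
    "asset-002": "cpe:o:vendor:os_10",
    "asset-003": "cpe:a:vendor:ssh_7.4",
}

# Foreign keys of DB_X matched against primary keys of CPE: with a shared key the
# left and right pushforwards agree, so the "migration" is a plain join.
linked = {asset: cpe for asset, cpe in cmdb.items() if cpe in cpe_dictionary}
print(linked)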
§.§ List all vulnerable assets
For a CISO or security analyst, one of the most natural queries is to list the vulnerable assets of the IS.
[Q]
List all vulnerable assets of a given IS
To process this query, one must first list the entries in the CMDB whose foreign key (i.e. the CPE attribute) also appears as a foreign key in the CVE table. In category theory terminology, we say that we use a pullback (or fiber product), which is one of the many variations of the categorical concept of limit.
Let CVE and CPE be the dictionaries of vulnerabilities and products, DB_X the inventory of the IS X, and has: DB_X → CPE and has: CVE → CPE the relations assigning to each asset and to each vulnerability its CPE reference. The pullback of the cospan DB_X → CPE ← CVE, denoted DB_X ×_CPE CVE, is defined by the set
DB_X ×_CPE CVE := {(x,y) | x∈ DB_X, y∈ CVE, has(x) = has(y) },
respecting the corresponding commutative (pullback) square.
To obtain only the vulnerable assets (dissociated from their vulnerabilities), it is sufficient to retain only the left projection of the pullback. For assets affected by several vulnerabilities, an additional projection morphism is necessary. We then obtain the set of vulnerable assets, denoted 𝖠𝖿𝖿𝖾𝖼𝗍𝖾𝖽𝖠𝗌𝗌𝖾𝗍𝗌_𝖷.
§.§ List all vulnerabilities of the IS
In the same way, it is also useful to list all vulnerabilities affecting a given IS.
[Q]
List all vulnerabilities of a given IS
This query, which is dual to the previous one, consists in keeping only the vulnerabilities from the pullback <ref>. This list is obtained by using the right projection of DB_X ×_CPE CVE. The resulting set is denoted 𝖵𝗎𝗅𝗇_𝖷.
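The pullback and its two projections translate directly into a set comprehension; the sketch below answers Q2 and Q3 on a toy inventory and a toy CVE table (all identifiers and CPE references are fictitious).

# DB_X x_CPE CVE as a set comprehension, with its two projections:
# left projection -> AffectedAssets_X (Q2), right projection -> Vuln_X (Q3).
# All table contents are fictitious placeholders.
cmdb = {                                   # has : DB_X -> CPE
    "asset-001": "cpe:o:vendor:os_10",
    "asset-002": "cpe:a:vendor:ssh_7.4",
    "asset-003": "cpe:a:vendor:web_1.18",
}
cve_has = {                                # has : CVE -> CPE
    "CVE-2023-0001": "cpe:o:vendor:os_10",
    "CVE-2023-0002": "cpe:a:vendor:ssh_7.4",
    "CVE-2023-0003": "cpe:a:vendor:db_13",
}

pullback = {(x, y) for x, cpe_x in cmdb.items()
                   for y, cpe_y in cve_has.items()
                   if cpe_x == cpe_y}      # condition has(x) == has(y)

affected_assets = {x for x, _ in pullback}   # Q2
vuln_x          = {y for _, y in pullback}   # Q3
print(pullback)
print(affected_assets)
print(vuln_x)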
§.§ List the vulnerabilities affecting an asset
Similarly, it is natural to ask for a list of vulnerabilities affecting a particular asset in the IS.
[Q]
List the vulnerabilities affecting an asset x∈ DB_X.
To process this query, we have to isolate the pairs (asset, vulnerability) involving the same asset x in the pullback <ref>. We therefore need to reason about the following commutative diagram:
x ×_CPE CVE   --isIn-->   DB_X ×_CPE CVE
     |                          |
    has                        has
     v                          v
     x        --isIn-->        DB_X
It turns out that this diagram also defines a pullback, by virtue of the pullback propagation theorem. Consider the following diagram:
x ×_CPE CVE  --isIn-->  DB_X ×_CPE CVE  --has-->  CVE
     |has                    |has                  |has
     v                       v                     v
     x       --isIn-->      DB_X        --has-->  CPE
such that the commutative square on the right-hand side is a pullback. It follows that the square on the left-hand side is also a pullback, and consequently so is the entire commutative diagram. The set x ×_CPE CVE thus satisfies Q4 by providing all vulnerabilities impacting asset x.
§.§ List the assets affected by a vulnerability
From the pullback DB_X ×_CPE CVE, we see that it is also possible to filter the resulting pairs by CVE rather than by asset. This filtering fulfils another mission of the CISO (or of any administrator or analyst, whether or not they have been mandated to do so): that of monitoring the changes needed to guarantee the logical and physical security of the IS for which they are responsible. This task includes monitoring vulnerabilities likely to affect the IS, and in practice begins by consulting the security alerts issued by the CERT (to which every CISO is in principle a subscriber). Each alert contains one or more CVE entries on a given subject. When CISOs become aware of a vulnerability, they have to ask themselves whether the IS is affected, with the level of attention weighted by its CVSS score. Assuming that the new vulnerability is added to ICAR, we therefore have the following query Q5, dual to Q4:
[Q]
List the assets affected by a vulnerability y∈.
This query is processed by choosing from DB_X ×_CPE CVE the pairs corresponding to the vulnerability y we are looking for, which we denote DB_X ×_CPE y (i.e. as many pairs as assets impacted by y).
The same applies to the resulting commutative diagram, which is a pullback, and by combining Q4 with Q5 we obtain the pair (x,y) giving the vulnerability y of the asset x, which is useful for consulting the remediation status of a vulnerability to be treated (is it fixed, in progress, scheduled...?).
   x × y     --isIn-->  DB_X ×_CPE y    --has-->   y
     |isIn                   |isIn                  |isIn
     v                       v                      v
x ×_CPE CVE  --isIn-->  DB_X ×_CPE CVE  --has-->  CVE
     |has                    |has                  |has
     v                       v                     v
     x       --isIn-->      DB_X        --has-->  CPE
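Continuing the illustrative Python sketch above (still with hypothetical data), Q4 and Q5 amount to filtering the pullback by one coordinate, and their combination isolates the single pair (x,y):

def vulns_of_asset(pullback, x):
    # Q4: vulnerabilities impacting asset x
    return {y for (a, y) in pullback if a == x}

def assets_of_vuln(pullback, y):
    # Q5: assets affected by vulnerability y
    return {a for (a, v) in pullback if v == y}

def asset_vuln_pair(pullback, x, y):
    # Q4 combined with Q5: the pair (x, y), if present
    return {(a, v) for (a, v) in pullback if a == x and v == y}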
§.§ List vulnerabilities by criticality
In cybersecurity, vulnerabilities are not of equal importance. There is a tendency to focus on the most severe vulnerabilities. It is not uncommon for a CISO to plan enhanced monitoring for critical vulnerabilities. Typically, s/he may request a regular report on vulnerabilities with a score of 9 or more (in CVSS v3.0 notation), or more generally with a score within a range S ⊂ [0.0,10.0]. Query Q6 follows.
[Q]
List vulnerabilities by CVSS score s∈ S ⊂ [0.0,10.0].
As we saw with Q1, a pullback can be used to assign a set of row-IDs to a schema element, which seems to do the trick. However, we need an additional ingredient to filter on the values taken by the entries in the CVSS score column. Indeed, the migration functors defined above operate in the context of schema morphisms, not type morphisms. We therefore need a notion of typing.
Let S be a schema and A a discrete category (i.e. a category containing only objects and identity morphisms) composed of attribute names. A typing for S is a triplet (A, i, γ) where i is a functor from A to S mapping each attribute to its vertex, and γ is a functor from A to 𝖲𝖾𝗍, mapping each attribute to its type.
Then, i reflects the pairing of the knowledge graph's vertices with the attributes of A and γ reflects the assignment of the attributes of A to their type. Consequently, we call a typed instance a pair (I, δ) where I : S → Set is an instance together with a natural transformation δ : I ∘ i ⇒γ.
Intuitively, δ reflects the assignment of a type to each ID in I. Typically, this could be the assignment of an integer type or a string type, but more generally it can be any type.
Now, as Spivak points out <cit.>, if we go back to the pullback, we see that it is possible to adapt migration functors to type-change functors.
Let S be a schema and k : A → B a morphism of typing instances. We refer to the induced functors Δ̂_k : 𝖲-𝖨𝗇𝗌𝗍_/B→𝖲-𝖨𝗇𝗌𝗍_/A and Σ̂_k, Π̂_k : 𝖲-𝖨𝗇𝗌𝗍_/A→𝖲-𝖨𝗇𝗌𝗍_/B as type-change functors. Δ̂_k, Σ̂_k and Π̂_k are respectively called the pullback, the left pushforward and the right pushforward type-change functor.
In the context of Q6, we are therefore dealing with a morphism of typing instances which associates a subtype B with the predefined type A=[0.0,10.0] ⊃ B.
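As a purely illustrative Python sketch (with hypothetical scores; the categorical machinery above is what legitimises the operation), Q6 reduces to a value-level filter on the CVSS column:

cvss = {"CVE-2023-0518": 8.7, "CVE-2022-3411": 4.1}   # CVE -> CVSS base score

def vulns_by_score(vulns, low=9.0, high=10.0):
    # Q6: vulnerabilities of the IS whose score falls in the range S = [low, high]
    return {y for y in vulns if low <= cvss.get(y, 0.0) <= high}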
§.§ Measuring the attack surface of an IS
The attack surface is a summary of the weak points in an IS that an attacker can exploit to gain access and carry out malicious actions. The more weak points there are, the greater the attack surface and the greater the risk of being attacked. Measuring the attack surface therefore makes it possible to assess the barriers an attacker needs to overcome to exploit a weakness.
[Q]
Measuring the attack surface of an IS X.
There are myriad ways of defining the attack surface of an IS, and just as many ways of measuring it once it has been defined. One of the simplest definitions is based on the CVSS scores of the vulnerabilities present in the IS. From that point on, the attack surface can be measured in different ways, bearing in mind that the CVSS standard is itself a system of metrics based on three metric groups<cit.>[Base, Temporal, and Environmental. The base metrics produce a score ranging from 0 to 10, which can then be modified by scoring the temporal and environmental metrics. In addition to the base score, the CVSS standard is made up of two other groups of measures: temporal scores and environmental scores. The latter are not provided by the NVD, either because they change over time due to events external to the vulnerability (temporal scores), or because they refer to impacts that are relative to the organisation (environmental scores).].
The simplest indicators are :
(i) the list of assets affected by a vulnerability with their associated CVSS score (as many weak points exploitable by an attacker) ;
(ii) the sum of the assets affected by a vulnerability weighted by their CVSS score.
Formally, indicator (i) corresponds to the set of pairs {(DB_X-ID, CVSS-ID)} for any asset DB_X-ID and any vulnerability score CVSS-ID. It is obtained from the schema morphism has : CVE → CVSS and the pullback DB_X ×_CPE CVE previously defined, as follows:
DB_X ×_CPE CVSS  --has-->  CVSS
      ^has                   ^has
      |                      |
DB_X ×_CPE CVE   --has-->   CVE
      |has                   |has
      v                      v
     DB_X        --has-->   CPE
The product DB_X ×_CPE CVSS summarises as a simple list the mapping of possible entry points for a potential attacker, with their associated criticality. Seen as the product of DB_X and CVSS, DB_X ×_CPE CVSS can then be used to define the synthetic indicator (ii). Assuming that the assets affected are of equal importance, the synthetic attack surface indicator, 𝖠𝗍𝗍𝖺𝖼𝗄𝖲𝗎𝗋𝖿𝖺𝖼𝖾, is easily obtained as the sum of the CVSS scores projected into the list on the right:
𝖠𝗍𝗍𝖺𝖼𝗄𝖲𝗎𝗋𝖿𝖺𝖼𝖾 := ∑_(x,y) ∈ DB_X ×_CPE CVSS right(x,y)
We note that, despite their equal importance, vulnerable assets do not involve equally important threats (such as attack media). Not only do the assets affected differ in the severity of their vulnerabilities, but they can also differ in the number of vulnerabilities affecting them, and it is not uncommon for an asset to accumulate vulnerabilities. For example, Gitlab 15.8.0 has vulnerabilities CVE-2022-3411, CVE-2022-4138, CVE-2022-3759 and CVE-2023-0518, the last three of which are of high severity.
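As an illustration (hypothetical data, not an ICAR component), the two indicators can be computed directly from the pullback seen as a list of (asset, score) pairs:

# (asset, CVSS score) pairs, one per vulnerability affecting the asset
pairs = [("srv-01", 8.7), ("srv-01", 4.1), ("wks-07", 7.5)]

indicator_i = pairs                                   # (i): weak points with their criticality
attack_surface = sum(score for _, score in pairs)     # (ii): AttackSurface = sum of right projections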
These indicators obviously give a simplistic view of attack surfaces as they actually characterise IT systems. In reality, the assets of an IS do not have the same sensitivity for a variety of reasons: some assets are exposed to the Internet, others are not; some are in production, others in pre-production, development, decommissioning, etc.; some are constrained to high availability, others are not, etc. However, it is possible to take into account the importance of assets by adding a sensitivity criterion. This criterion is generally incorporated into CMDBs, which include a "CI Importance" property for this purpose, in line with ITIL architecture. If affected assets are of unequal importance, then each asset must be weighted by an importance indicator, i.e. a new IMPT_X data set connected to DB_X must be added to ICAR. In this case, it is sufficient to repeat the previous developments by reasoning about the pullback IMPT_X ×_CPE CVSS :
IMPT_X ×_CPE CVSS  <--has--  DB_X ×_CPE CVSS  --has-->  CVSS
        ^has                        ^has                  ^has
        |                           |                     |
IMPT_X ×_CPE CVE   <--has--  DB_X ×_CPE CVE   --has-->   CVE
        |has                        |has                  |has
        v                           v                     v
      IMPT_X       <--has--        DB_X        --has-->  CPE
Note that an attack surface cannot be interpreted as a measure of risk; as the NVD points out <cit.>, "CVSS is not a measure of risk". In risk analysis, risk is always the product of a threat, a vulnerability and a severity. ICAR lacks far too much information to be used as a basis for risk analysis, both in terms of business analysis (business values, feared events, impact of damage suffered) and threat analysis (sources of risk, attractiveness of the IT target, etc.). CVSS metrics can only measure the severity of vulnerabilities, which is only one component of risk.
§.§ List vulnerabilities that can be exploited by a technique or tactic
We now turn to the long paths to examine how vulnerability management is linked to threat management. This link is bidirectional: top-down and bottom-up. We start with the top-down approach. It is natural to ask what vulnerabilities can be exploited by a given technique pursuing a given tactic. This approach makes it possible to map the dangers corresponding to the different tactical stages of the kill chain, which is useful for an organisation's defenders, who can prioritise the vulnerabilities to be remedied, and for its adversaries, who can investigate their attacks. For example, at the start of an attack, the adversaries apply one or more reconnaissance techniques. They may, for example, target a website or an active directory with the aim of compromising accounts, creating accounts, obtaining capabilities (resource development tactics) or even taking their attack a step further with initial access tactics (remote access to the network, installation of a passive listening system, etc.). The list of vulnerabilities that can be exploited by this tactic can then enable the defender to be more vigilant about the assets that could be targeted by the adversary (i.e. a Wordpress application, an LDAP server, etc.). This knowledge is also useful to the adversaries because it tells them what they should be looking for: an asset, or a version number if they already know an asset. We thus have query Q8.
[Q]
List vulnerabilities that can be exploited by a technique or tactic
There are several ways of dealing with this query. The simplest is probably to consider sieves on techniques and tactics.
Let v be a technique or a tactic. A sieve on v is a collection S of morphisms such that :
* e ∈ S ⇒cod(e) = v,
* (e ∈ S ∧cod(f) = dom(e)) ⇒ e ∘ f ∈ S.
In other words, a sieve on an object v in ICAR is a collection of arrows of codomain v closed under precomposition with morphisms in ICAR. However, this definition does not correspond exactly to Q8. On the one hand, the universal aspect of the collection of arrows is missing, as we are looking for the list of all vulnerabilities that can be exploited by a technique or tactic. This universality property is provided by the notion of maximal sieve.
A sieve S on v is said to be maximal (or principal) if it contains all the arrows of codomain v. It is denoted ↑ v.
On the other hand, the resulting sieve has too many arrows, since it includes all the precompositions of v-target morphisms. However, what counts for Q8 are only the CVE-domain arrows. To subtract the other arrows (i.e. arrows of CWE, CAPEC or Sub-technique domain), we need a notion of differential sieve. Let S be a sieve on v in ICAR and S' a sieve on v in ICAR', where ICAR' is the subcategory of ICAR without CVEs.
In other words, ICAR' consists of the sub-collection of objects from ICAR such that Ob(ICAR')=Ob(ICAR)-{w∈ CVE}, and the sub-collection of morphisms of ICAR such that Mor(ICAR')=Mor(ICAR)-{e∈Mor(ICAR)|src(e)∈ CVE}. In this context, the object satisfying Q8 for techniques is the differential sieve S^T=S\ S'. S^T therefore contains all the arrows whose domain lies in the set CVE and whose codomain lies in the set Techniques. To give a clearer idea, figure <ref> represents the construction stages of Q8 for technique T1499 (Endpoint Denial of Service), from the subcategory extracted "under technique T1499" (a), to the maximal sieve on T1499 (b), and finally to the differential sieve (c) answering Q8.
Obviously, the reasoning is the same for the list of vulnerabilities that can be exploited by a tactic. All we have to do is point the sieve construction S^TA to the tactic(s) we want, for example to tactic TA0040 (Impact), which is the tactic performed by technique T1499.
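As an illustration only (a toy graph with made-up edges, not the actual ICAR data), the differential sieve of Q8 can be sketched in Python as a reachability computation followed by a domain filter; the dual construction of the next subsection simply walks the arrows in the opposite direction:

# (source, target) arrows of a hypothetical fragment of ICAR
arrows = [("CVE-A", "CWE-B"), ("CWE-B", "CAPEC-C"),
          ("CAPEC-C", "T1499"), ("T1499", "TA0040")]
kind = {"CVE-A": "CVE", "CWE-B": "CWE", "CAPEC-C": "CAPEC",
        "T1499": "Technique", "TA0040": "Tactic"}

def sieve_sources(v):
    # all objects with a (possibly composite) arrow into v: the maximal sieve on v
    reached, frontier = set(), {v}
    while frontier:
        preds = {s for (s, t) in arrows if t in frontier} - reached
        reached |= preds
        frontier = preds
    return reached

def q8(v):
    # differential sieve: keep only the CVE-domain arrows of the maximal sieve on v
    return {s for s in sieve_sources(v) if kind[s] == "CVE"}

print(q8("T1499"))    # vulnerabilities exploitable by the technique
print(q8("TA0040"))   # same construction pointed at a tactic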
§.§ List techniques and tactics related to a vulnerability
We now turn our attention to the bottom-up approach. From the point of view of the defender, it is natural to ask what attack techniques (and therefore tactics) are associated with its vulnerabilities. This knowledge enables him to focus on the vulnerabilities deemed most dangerous from the point of view of their tactical exploitation. This knowledge is also useful for the adversary if he knows some of the targeted assets or even in the absence of any information about the attacked IS. We therefore have query Q9:
[Q]
List techniques and tactics related to a vulnerability
This is essentially the dual of query Q8. Since category theory is an ideal framework for studying all kinds of dualities, we just have to use the duals of the two notions defined previously. We thus introduce a notion of cosieve.
Let v be a vulnerability. A cosieve on v is a collection coS of morphisms such that :
* e ∈ coS ⇒dom(e) = v,
* (e ∈ coS ∧dom(f) = cod(e)) ⇒ f ∘ e ∈ coS.
We then define the notions of maximal cosieve and differential cosieve as before. The differential cosieve 𝖼𝗈𝖲^TA corresponding to Q9 is then given by the complement of the cosieve on v whose target is not a tactic: 𝖼𝗈𝖲^TA = ∁_𝖼𝗈𝖲 𝖼𝗈𝖲' = 𝖼𝗈𝖲\𝖼𝗈𝖲', where 𝖼𝗈𝖲 and 𝖼𝗈𝖲' are cosieves on a vulnerability v in ICAR and ICAR' respectively. 𝖼𝗈𝖲^TA is the collection of arrows with source v and target Tactics. The construction is the same for techniques. Simply define the set ICAR”=ICAR'-Techniques and repeat the reasoning from the cosieves in ICAR' and ICAR”. Figure <ref> depicts the construction of the differential cosieve on vulnerability CVE-2006-5268 (administrative access to the RPC interface) for techniques, from (a) the sub-category of objects and morphisms above CVE-2006-5268 to (b) the final differential cosieve on CVE-2006-5268 satisfying Q9.
§.§ Measuring the threat surface of an IS
The "threat surface" is the set of techniques (or tactics) that an attacker can use to exploit the vulnerabilities of an IS. The threat surface is the counterpart of the attack surface on threat management[The threat surface is strictly speaking an attack surface, but since this name is usually used to describe the vulnerabilities of the IS, we use the term "threat surface"].
[Q]
Measuring the threat surface of IS X
Formally, the threat surface is a simple extension of the differential cosieve used to list the techniques and tactics associated with a vulnerability. We just need to apply the differential cosieve to all the vulnerabilities in the IS, i.e. to the set 𝖵𝗎𝗅𝗇_X.
§ CONCLUSION AND FUTURE WORK
The aim of this article was to provide a mathematical foundation for common queries in cybersecurity management. The proposed ICAR categorical model thus covers vulnerability management, threat management and asset management in a unified framework. However, ICAR is not a method for enriching cybersecurity ontologies. In particular, it does not itself establish new relations between vulnerability management and threat management. In this sense, the empirical results of the queries examined here depend on the quality of the data they use. Our model therefore underlines the importance of work aimed at more finely meshing the various dictionaries of NIST and the MITRE corporation. Generally speaking, it is clear that query and visualisation models will be enhanced by the AI-based works mentioned above.
This article only gives an overview of possible queries for cybersecurity operations. Others could naturally have been envisaged, such as the search for the shortest attack path (i.e. the path with the fewest breaches to exploit). Other queries will be considered later on. Future work will also address the algorithmic design of queries. In this sense, ICAR model should also be seen as a mathematical foundation for establishing a database schema compatible with the defined categorical schema and associated categorical notions. In other words, the queries dealt with in this article will subsequently be extended in terms of query language (SQL), with the aim of providing a bidirectional dictionary between conceptual categorical queries and database queries.
|
http://arxiv.org/abs/2306.02543v1
|
20230605022819
|
Addressing Budget Allocation and Revenue Allocation in Data Market Environments Using an Adaptive Sampling Algorithm
|
[
"Boxin Zhao",
"Boxiang Lyu",
"Raul Castro Fernandez",
"Mladen Kolar"
] |
cs.LG
|
[
"cs.LG"
] |
Addressing Budget Allocation and Revenue Allocation in Data Market Environments Using an Adaptive Sampling Algorithm

Boxin Zhao
Booth School of Business, University of Chicago, Chicago, IL, USA

Boxiang Lyu
Booth School of Business, University of Chicago, Chicago, IL, USA

Raul Castro Fernandez
Department of Computer Science, University of Chicago, Chicago, IL, USA

Mladen Kolar
Booth School of Business, University of Chicago, Chicago, IL, USA

Correspondence: Boxin Zhao ([email protected])

Keywords: Machine Learning, Data Market, Data Valuation, ICML
High-quality machine learning models are dependent on access to high-quality training data. When the data are not already available, it is tedious and costly to obtain them. Data markets help with identifying valuable training data: model consumers pay to train a model, the market uses that budget to identify data and train the model (the budget allocation problem), and finally the market compensates data providers according to their data contribution (the revenue allocation problem). For example, a bank could pay the data market to access data from other financial institutions to train a fraud detection model. Compensating data contributors requires understanding data's contribution to the model; recent efforts to solve this revenue allocation problem based on the Shapley value are too inefficient to lead to practical data markets.
In this paper, we introduce a new algorithm to solve budget allocation and revenue allocation problems simultaneously in linear time. The new algorithm employs an adaptive sampling process that selects data from those providers who are contributing the most to the model. Better data means that the algorithm accesses those providers more often, and more frequent accesses corresponds to higher compensation. Furthermore, the algorithm can be deployed in both centralized and federated scenarios, boosting its applicability. We provide theoretical guarantees for the algorithm that show the budget is used efficiently and the properties of revenue allocation are similar to Shapley's. Finally, we conduct an empirical evaluation to show the performance of the algorithm in practical scenarios and when compared to other baselines. Overall, we believe that the new algorithm paves the way for the implementation of practical data markets.
§ INTRODUCTION
The success of modern machine learning is largely dependent on access to high-quality data.
However, in many scenarios, consumers of machine learning (ML) models do not have the necessary data to train them, and collecting data can be impractical; when it is practical it is often tedious and time-consuming. At the same time, data-rich organizations (providers) may not be aware that their data would be valuable to prospective consumers.
For example, banks could improve fraud detection if they had access to relevant data from other banks with similar financial products.
Data markets are a mechanism to address this problem.
A well-functioning data market compensates providers for the data they supply, and allocates the data to those consumers <cit.> who need it, thus unleashing the value of data for everyone involved in the transaction. Designing such data markets remains an open problem.
Consider a group of 10 regional banks. One of them (the consumer) has detected unusual activity in transactions pertaining to a new credit card product. In response, ML engineers design a fraud detection model for the new product, but realize that they lack training data. These training data could be provided by the other 9 banks, the providers, in exchange for some compensation. Clearly, the consumer would like to invest their budget in high-quality data that will contribute to better models (the budget allocation problem). And providers would like to be compensated according to the value their data brings to the model (the revenue allocation problem); e.g., if a bank's data are more relevant to the task at hand, such bank should be compensated higher than the others. In this example, a well-functioning data market aligns the incentives of consumers with those of providers: it ensures that the budget is used towards obtaining high-quality data.
While there is growing literature on data markets <cit.>, no existing data market solves the budget allocation and revenue allocation problems simultaneously and efficiently. For example, <cit.> deals with budget allocation but does not discuss revenue allocation. And there is a plethora of revenue allocation algorithms based on the Shapley value <cit.> and approximations <cit.> that, although provide good guarantees on the fairness of the allocation, are too computationally expensive to lead to practical solutions.
In this paper, we introduce a new algorithm that solves simultaneously the budget and revenue allocation problems in linear time: much more efficiently than existing revenue allocation approaches. At each training iteration, the algorithm chooses a provider from whom to obtain data. The number of iterations is determined by the budget. Every time the algorithm accesses a provider's data, it compensates that provider using a fixed unit of consumer-provided budget. The key idea behind the algorithm is an Online Stochastic Mirror Descent (OSMD) sampler : an adaptive sampling procedure that ensures that the budget is spent on the providers that supply the best data. This means that providers with better data are accessed more often, and thus compensated higher. Most importantly, the new algorithm trains the model only once and has a complexity of O(n log n), with n being the number of providers. Contrast this with the Shapley value, which must consider the marginal contribution of one provider to all other provider subsets <cit.>, leading to a complexity of 𝒪(2^n), and requiring a model training each time.
The key technical intuition behind the new algorithm is that it uses information contained in the optimization/training process to do both revenue allocation and budget allocation.
While the Shapley value (and approximations) treats the optimization process as a black-box and only relies on the output model to do the revenue allocation, the new algorithm instead uses model updates to measure data quality after each iteration.
This serves as the basis for the new adaptive sampling strategy that ensures that providers with a higher quality history will be accessed more often. By encouraging highly valuable providers to be accessed more, we obtain a better performing model, which then results in better budget allocation.
Crucially, because the new algorithm does not require repeated training, it can be applied to large ML models that are costly to train, which is more practical than previous methods that require multiple re-trainings, and thus are often tested only on simple models, limiting their applicability.
The new algorithm is widely applicable. First, it supports any ML models that are trained in an iterative way. Second, it can be used in centralized and federated scenarios <cit.>. In centralized scenarios, a central data market platform stores data from providers and trains the model on behalf of consumers. In federated scenarios, data never leave the providers' premise, so it may be more adequate when data are sensitive or subject to regulations. The model is mainly trained by the providers, and the market platform is in charge of running the adaptive sampling algorithm and budget allocation. While in the first scenario, the cost of training the model is absorbed by the data market platform, in the second, it is distributed across providers. We do not deal in this paper with data market architectures or with mechanisms for determining how to use budget to compensate costs. Instead, we note that in both cases, reducing the cost of budget and revenue allocation strictly benefits all scenarios, and we focus on proposing an efficient algorithm.
Finally, we provide theoretical guarantees on both revenue allocation and budget allocation.
For revenue allocation, we show that the new algorithm has properties similar to the four axioms of the Shapley value <cit.>, i.e., efficiency, linearity, null player, and symmetry.
For budget allocation, we show that the average regret of our method compared to spending all budget on the most valuable providers is asymptotically zero as the total budget goes to infinity.
We complement the theoretical guarantees that make the algorithm principled with an extensive empirical evaluation that shows the practical applicability of the new method.
Contributions: We propose an algorithm that achieves revenue allocation and budget allocation at the same time. The proposed algorithm is highly-scalable and achieves similar revenue allocation quality as Shapley-based methods but more efficiently (O(n log n)). Additionally, the new algorithm can be applied to federated learning settings and centralized scenarios seamlessly, increasing its applicability. From a technical perspective, the key technique of the algorithm is an adaptive sampling strategy that uses the information contained in the training process, which is generally ignored by previous research. We provide both theoretical and experimental justifications for our method.
To the best of our knowledge, this is the first highly-scalable method that unifies revenue allocation and budget allocation, thus paving the way to practical data markets.
Notation: For two positive sequences (a_n)_n ≥ 1 and (b_n)_n ≥ 1, we use a_n=O(b_n) to denote that there exists a positive constant C>0 such that a_n ≤ C b_n when n is large enough.
We use a_n=o(b_n) when lim_n →∞ (a_n / b_n) = 0, and a_n=o(1) when lim_n →∞ a_n = 0.
For n ≥ 1, we define 𝒫_n-1{ x ∈ℝ^n : ∑^n_i=1 x_i = 1, x_i ≥ 0 for all i=1,…,n } as the (n-1)-dimensional simplex.
Organization: In Section <ref>, we summarize the related work. In Section <ref>, we introduce our assumptions on the market. We then introduce our proposed method in Section <ref>, and discuss its theoretical guarantees on budget allocation and revenue allocation in Section <ref> and Section <ref>, respectively. The experimental results are in Section <ref>. We finally conclude our paper with Section <ref>.
The code of the paper is available at:
<https://github.com/boxinz17/Data-Market-via-Adaptive-Sampling> .
§ RELATED WORK
There have been many studies on building a data market from various communities.
Despite the vast literature and the growing need, there is no practical and principled data market.
We summarize the literature most relevant to our work without attempting to be comprehensive.
<cit.> shares the authors' vision of the data market.
Our project can be regarded as a special case of this vision, where we focus on the data market designed for machine learning applications.
In addition, <cit.> discuss the market design for machine learning, but focus on different aspects from our paper.
<cit.> offer
complementary work on mechanism design.
Data pricing is a related topic with techniques that are complementary to our paper. <cit.> discuss pricing mechanisms for data, queries, and models in various problems. See <cit.> for recent surveys. One may use the pricing methods in the above literature to decide the price of each query of model updates from a data provider or to decide the total number of queries.
Data valuation is used for revenue allocation and, therefore, is closely related to our work <cit.>. A recent line of research reinterprets the data valuation problem as a cooperative game and applies the Shapley value <cit.> to quantify the value of data <cit.>.
In addition, <cit.> proposes a data valuation approach using reinforcement learning (RL).
The major problem with the above approaches is that they are computationally costly, which restricts their application to simple ML models. Furthermore, the RL-based method needs access to the raw data, and thus is not applicable to privacy-sensitive scenarios such as federated learning.
Adaptive sampling via an online learning approach has been applied in the optimization literature <cit.>, where it is used to construct unbiased estimators of the full gradient with an online learning objective of minimizing the cumulative variance. In this paper, we adopt a similar idea, but the objective is to maximize the cumulative utility gain. While some techniques are similar, this paper solves a fundamentally different problem.
Finally, several companies have attempted to create practical data markets, such as Scale AI <cit.>, Dawex <cit.> and Xignite <cit.>. Our method provides a principled algorithmic approach for practical data markets.
§ PRELIMINARIES AND MARKET MODEL
We start by formalizing the three parties involved in the data market: data consumers, data providers, and the market arbiter. We describe their roles and state assumptions on their behavior. Throughout the paper, we focus on serving one data consumer. We discuss extensions to serve multiple consumers in Section <ref>. Furthermore, the design supports a centralized model where providers' data is hosted by the arbiter and the arbiter is in charge of running computation. And it also supports a federated model, where providers' data stay with them and they are responsible for running computation on their own data.
Data Consumer has a machine learning model that needs to be trained. Let f_w denote the model parameterized by w ∈ℝ^d.
The data consumer provides a loss function l that can be used to find a suitable model parameter w by minimizing the loss l(w; 𝒟), where 𝒟 is a data set. The loss function is shared with the arbiter and, in the federated scenario, also with data providers.
In addition to the loss function, the data consumer also provides a utility function U : ℝ^d ↦ [0,1] that can be used to measure the quality of the model f_w.
For example, the utility function can measure the quality of the produced model on a hold-out dataset that the data consumer has. We assume that the utility function is shared with the market arbiter but not with the data providers.
Finally, we assume that the data consumer has a budget B, which is the total number of model updates that they can afford. Note that each model update costs the same amount of budget. We discuss extensions at the end of the section.
Data Providers are the market participants that have data that can be used to train machine learning models.
We assume that there are n data providers.
We do not make assumptions on the type of data the providers have. Instead, we assume that each data provider has a model update oracle. The ith data provider has an oracle function 𝒪_i:ℝ^d ↦ℝ^d
that maps a model parameter w to an update g_i = 𝒪_i(w).
Each time a data provider uses its model update oracle—locally in the federated scenario, or remotely, through an arbiter, in the centralized scenario—it will be compensated by one unit out of the total budget.
There can be many choices of 𝒪_i. For example, 𝒪_i(w) might be the negative full gradient or a stochastic gradient of the loss function evaluated at w using the data of provider i; or it can be w^+_i-w, where w^+_i is obtained by running several steps of stochastic gradient descent (SGD) starting from w; furthermore, provider i may also apply privacy protection techniques when computing 𝒪_i(w), such as differential privacy <cit.>. We leave the freedom of choosing 𝒪_i to the data providers. Different choices of 𝒪_i will involve different levels of computational cost and privacy leakage risk. At one extreme, the data provider can return a random vector in ℝ^d and thus incur little computational cost and privacy leakage.
However, the design of our algorithm encourages the data providers to supply high-quality model updates.
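For concreteness, here is a minimal Python sketch of two admissible oracles; the function names and the grad_fn interface are our own assumptions, not part of the paper's specification:

import numpy as np

def local_sgd_oracle(w, batches, grad_fn, local_lr=0.1):
    """One epoch of mini-batch SGD on the provider's local data; returns w_i^+ - w."""
    w_plus = w.copy()
    for batch in batches:                      # batches: the provider's local mini-batches
        w_plus -= local_lr * grad_fn(w_plus, batch)
    return w_plus - w                          # the update g_i = O_i(w)

def full_gradient_oracle(w, local_data, grad_fn):
    """Negative full gradient of the loss on the provider's data."""
    return -grad_fn(w, local_data)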
Algorithm: Adaptive Sampling with Online Stochastic Mirror Descent (OSMD) Sampler.
Algorithm: OSMD Solver.
Market Arbiter maintains an iterative training process until the consumer's total budget is exhausted.
Let K be the number of providers chosen at each round and we discuss the choice of K later in Section <ref>.
Then the training process proceeds
in T=⌊ B / K ⌋ iterations.
Let w^0 and p^0=(1/n,…,1/n)^⊤ be the initial model parameter and sampling distribution, respectively.
For iterations t=0,…,T-1, the market arbiter repeats the following process:
(i) the arbiter uses the sampling distribution p^t to sample with replacement a set, S^t ∈ [n]^K, of K data providers;[We choose sampling with replacement to obtain a clearer budget allocation analysis. The key intuition is that when using sampling with replacement, the providers in S^t are pairwise independent, allowing us to analyze them independently. In contrast, if we use sampling without replacement, we need to consider S^t as a whole, which is overly complicated given that there are (n choose K) total possible subsets.]
(ii) the arbiter broadcasts the current model parameter w^t to the ith provider if i ∈ S^t;[Here broadcast is a local operation in the central setting where providers' data is directly accessible to the market, and a distributed operation in the federated setting.]
(iii) the ith provider, i ∈ S^t, computes the model update g^t_i = 𝒪_i(w^t);
(iv) the arbiter updates the sampling distribution based on the received marginal utilities {U(w^t+γ^t g^t_i)}_i ∈ S^t to obtain p^t+1, where γ^t is the optimization stepsize;
(v) the arbiter updates the model parameter as w^t+1← w^t + (γ^t / K) ∑_i ∈ S^t g^t_i.
Algorithm <ref> details the process.
We detail the step (iv) in the next section.
The training process executed by the market arbiter is the same as the training process in federated learning <cit.> if p^t+1=p^t in step (iv).
Thus our method can be applied to the setting where providers may have privacy concerns or are physically distributed.
Finally, we note that each access costs the same fixed amount of budget. This makes sense because in the context of ML training, a single update does not have a major impact on the result. However, it is conceivable to introduce pricing techniques to price each access differently. We do not deal with this scenario in this paper.
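To fix ideas, the loop in steps (i)-(v) might be organized as in the following Python sketch; the naming is our own, and the osmd_update routine corresponds to step (iv), one possible version of which is sketched in the next section:

import numpy as np

def run_market(w0, oracles, utility, budget, K, lr_schedule, osmd_update):
    n = len(oracles)
    w, p = w0.copy(), np.full(n, 1.0 / n)           # initial model and sampling distribution
    n_access = np.zeros(n, dtype=int)               # N^T(i): accesses, the basis for revenue allocation
    for t in range(budget // K):
        S = np.random.choice(n, size=K, replace=True, p=p)        # (i) sample K providers
        gamma = lr_schedule(t)
        g = {i: oracles[i](w) for i in set(S)}                    # (ii)-(iii) broadcast w, collect updates
        marg = {i: utility(w + gamma * g[i]) - utility(w) for i in g}
        p = osmd_update(p, S, marg)                               # (iv) update the sampling distribution
        w = w + (gamma / K) * sum(g[i] for i in S)                # (v) update the model parameter
        for i in S:
            n_access[i] += 1                                      # each access costs one budget unit
    return w, n_access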
§ ONLINE STOCHASTIC MIRROR DESCENT SAMPLER
We detail step (iv) of Algorithm <ref> where the sampling distribution is updated. The sampling distribution is a crucial ingredient in the design of Algorithm <ref> as it connects to budget and revenue allocation.
We start by examining the expected utility gain in the round t.
We would like to choose the set of providers S^t to maximize the utility gain
Δ U^t := U ( w^t+1 ) - U ( w^t )
= U ( w^t + (γ^t/K) ∑_i ∈ S^t g^t_i ) - U ( w^t ).
Maximizing the utility gain over S^t is hard as it is a combinatorial problem. More specifically, the utility gain of selecting one provider depends not only on the provider itself but also on the other providers chosen in S^t. As a result, one needs to consider each size-K subset of [n], leading to a total of (n choose K) choices. The size of this space poses a significant challenge.
To overcome this issue, one can consider the surrogate loss defined by cumulative marginal utility gain:
Δ U^t_cm := ∑_i ∈ S^t { U ( w^t + γ^t g^t_i ) - U ( w^t ) }.
The advantage of marginal utility is that when selecting one provider, the utility gain depends solely on that provider and is independent of other providers. In this way, we can treat each provider independently, resulting in a total of n choices, which is much smaller than (n choose K) for K > 1. The surrogate loss simplifies the analysis and also leads to more efficient algorithm design. When K=1, we have Δ U^t_cm=Δ U^t; when K > 1, a larger Δ U^t_cm also generally implies a larger Δ U^t.
Let Δ U^t_i = U ( w^t + γ^t g^t_i ) - U ( w^t ) and p^t=(p^t_1,…,p^t_n)^⊤. Recall that the set S^t is obtained by sampling with replacement from p^t and, thus, we have
𝔼_S^t[ Δ U^t_cm] = K ∑^n_i=1 p^t_i Δ U^t_i,
where 𝔼_S^t[·] denotes the expectation taken with respect to S^t. Therefore, the distribution p^t can be chosen to maximize 𝔼_S^t [ Δ U^t_cm ] when Δ U^t_1,…,Δ U^t_n are known.
In practice, obtaining Δ U^t_1,…,Δ U^t_n requires access to the model update oracle of each provider, which is too costly.
Furthermore, since S^t is sampled based on p^t, the information will not be revealed until we make a decision on the sampling distribution.
In each round, the arbiter has a partial information history {{Δ U^l_i }_i ∈ S^l}^t-1_l=0 and {p^l}^t-1_l=0, based on which it updates the sampling distribution. We do not make any assumptions on how the utility is generated; thus it can be dependent on or even adversarial to the arbiter's decision.
The sampling process can be viewed as an online learning task with partial information feedback, where a game is played between the arbiter and the providers.
The goal of the online learning task is to choose a sequence of sampling distributions to maximize 𝔼 [ ∑^T-1_t=0Δ U^t_cm ].
A key concern about this online learning task is the non-stationarity of the utility process {Δ U^t_i }_t ≥ 1, i ∈ [n]. More specifically, data providers that are useful in the early stage of the training process are not necessarily as useful in the later stage. To understand why, suppose that the model parameter is randomly initialized and far away from a good model parameter. At this stage, even a noisy data set can help improve the model by providing a rough guidance — the additional advantage of a high-quality data set may not be obvious at this stage as improving the model is easy. However, as the training proceeds, the model will achieve reasonably good performance, and further improvement of the model will require higher quality data — the difference between the utilities of valuable and noisy providers now becomes more distinctive.
Based on the above discussion, we propose to use online stochastic mirror descent (OSMD) <cit.> to update the sampling distribution.
Let D_Φ(x ‖ y) = Φ (x) - Φ (y) - ⟨∇Φ (y), x - y ⟩ for x,y ∈ℝ^n_+ be the Bregman divergence, where Φ(x)=∑^n_i=1 x_i log x_i with 0log0 defined as 0, is the unnormalized negative entropy.
Then the key updating rule is achieved by solving the following optimization problem:
p^t+1 = argmin_q ∈ 𝒜 { - η⟨ q, û^t ⟩ + D_Φ (q ‖ p^t ) }
where p^t is the current sampling distribution, û^t is an unbiased estimate of the utility vectors (Δ U^t_1,…, Δ U^t_n)^⊤, and η is a chosen learning rate.
We restrict the sampling distribution to a clipped simplex 𝒜=𝒫_n-1∩[α/n,∞)^n, where α∈ [0,1] is a tuning parameter.
By restricting the sampling distribution to be away from zero, we avoid the algorithm from committing too hard on a single provider, which otherwise may prevent the discovery of change points in a non-stationary environment.
The first part of the objective encourages the sampling distribution to maximize the most recent utility gain, while the second part of the objective prevents it from deviating too far away from the previous sampling distribution. The detailed algorithm is described in Algorithm <ref>. The problem in (<ref>) can be solved efficiently by Algorithm <ref> (Proposition <ref> in Appendix <ref>). The most costly step of Algorithm <ref> is sorting {p̃^t+1_i }^n_i=1, which has a computational complexity of O(nlog n). However,
noting that most of the entries in p̃^t+1 are sorted in the previous round, we may use an adaptive sorting algorithm to achieve a much faster running time <cit.>.
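As an illustration of how step (iv) can be realised, the following Python sketch performs the mirror step in closed form (an exponentiated-gradient update, which is what the unnormalized negative entropy yields) and then projects onto the clipped simplex. The importance-weighted estimator û and the fixed-point projection loop are our own choices and merely stand in for the sorting-based Algorithm referenced above:

import numpy as np

def osmd_update(p, S, marg_util, eta=1.0, alpha=0.01):
    n = len(p)
    counts = np.bincount(np.asarray(S), minlength=n)
    K = counts.sum()
    u_hat = np.zeros(n)
    for i in np.flatnonzero(counts):
        u_hat[i] = counts[i] * marg_util[i] / (K * p[i])   # one standard unbiased estimate of Delta U^t_i
    p_tilde = p * np.exp(eta * u_hat)                      # closed-form mirror step
    return _project_clipped_simplex(p_tilde, alpha / n)

def _project_clipped_simplex(p_tilde, floor):
    # KL projection onto {q : sum(q) = 1, q_i >= floor}: q_i = max(floor, p_tilde_i / Z)
    clipped = np.zeros(len(p_tilde), dtype=bool)
    while True:
        free = ~clipped
        Z = p_tilde[free].sum() / (1.0 - clipped.sum() * floor)
        newly = free & (p_tilde / Z < floor)
        if not newly.any():
            break
        clipped |= newly
    return np.where(clipped, floor, p_tilde / Z)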
§.§ Analysis of Budget Allocation
We provide theoretical guarantees on budget allocation in the form of a regret bound when comparing against an oracle competitor.
At the beginning of round t, we allow the competitor to know the provider with the largest Δ U^t_i; thus, to maximize Δ U^t_cm, the competitor will choose it K times in round t.
Due to the non-stationary nature of the problem, we also allow the oracle to change its decision from iteration to iteration, with the total number of changes not exceeding m-1 times, where m is a chosen tuning parameter. Note that with a larger m, we are competing against a more flexible and harder oracle.
For m ≥ 1, the action space Γ_m is defined as
Γ_m := { ( a^0,…,a^T-1 ) ∈ [n]^T : ∑^T-1_t=1 1{ a^t ≠ a^t-1 } ≤ m - 1 }.
The space Γ_m contains all action sequences where the action changes at most m-1 times.
This space contains the action sequence of an oracle that knows the identities of the top m providers and the right order to access them.
We define regret of the budget allocation of Algorithm <ref> as
R_B := 𝔼[ max_(a^t) ∈ Γ_m K ∑^T-1_t=0 Δ U^t_a^t - ∑^T-1_t=0 Δ U^t_cm ],
where Δ U^t_cm is defined in (<ref>) and the expectation is taken with respect to all sources of randomness.
From the consumer's perspective, a larger utility gain means a better budget allocation and, therefore, regret R_B measures the loss of budget due to the lack of quality information about providers.
We have the following guarantee on the regret of Algorithm <ref>.
Assume B=K T and U(·) ∈ [0, 1].
Let {p^t}_t=0^T-1 be the sequence of sampling distributions generated by Algorithm <ref>.
The regret R_B defined in (<ref>) is bounded by
R_B ≤α B + η n B/2 + m K log(n/α)/η.
If α=√(n/B) and η=√(m log(n B)/(n T)), it follows that R_B / B = O( √(n m K log (n B) / B)).
Treating n, K, m as constants, we have that the asymptotic average regret is zero, R_B/B=o(1), which is used as a standard to measure budget allocation <cit.>. By Theorem <ref>, we see that a larger K will induce a larger regret within the same budget; however, a larger K is helpful in reducing the variance of the model parameter. Thus, a good choice of K should maintain a trade-off between these two concerns.
Note that we do not provide specific quantification of how the variance depends on K. The reason is that the variance of the model parameter depends on numerous factors, such as the loss function, the learning rate, and the model update oracle. We intentionally leave these factors unspecified to maintain generality. That being said, a simple example for reference is the use of mini-batch SGD or local SGD as the model update oracle. In this case, the convergence analysis from <cit.> can be adopted. It then becomes evident that the number of providers we select in each round will influence the convergence speed through the variance.
§.§ Analysis of Revenue Allocation
Data providers furnish their data with the expectation of receiving compensation, thus the equitable distribution of revenue among providers is a crucial consideration for data markets. This differs from the budget allocation issue, which we framed as an online optimization problem. Revenue allocation, on the other hand, falls within the scope of a "fair division" problem <cit.>. Generally, there is no universally acknowledged standard for resolving revenue division dilemmas, as opposed to online optimization scenarios. A prevalent strategy is to first establish each participant's "entitlement" and subsequently distribute the revenue in accordance with this assigned entitlement.
Existing research primarily applies the Shapley value <cit.> to determine entitlement, given its attributes that are considered essential for equitable allocation in cooperative games. The Shapley value is widely acknowledged to singularly satisfy four axioms of a fair reward distribution in cooperative game theory, namely: symmetry, efficiency, null player, and linearity <cit.>. Nevertheless, the computation of the Shapley value is considerably resource-intensive. Consequently, the total funds available for allocation are reduced by the computational expense incurred in deriving the Shapley value itself. Furthermore, the relevance and necessity of these axioms for revenue allocation within real-world data markets remain a matter of debate.
In this paper, we introduce a novel concept of entitlement referred to as the number of accesses to the model update oracle. By correlating the compensation a data provider receives with the quantity of model update oracles it contributes, the compensation aligns with the level of service provided. We posit that this methodology offers a fresh perspective on the study of revenue allocation.
Given that our model does not frame revenue allocation as a cooperative game, our approach diverges from the typical analytical framework associated with the Shapley value. Nevertheless, we demonstrate that our revenue allocation method possesses certain attributes that echo the underlying principles of the Shapley value's axioms.
* Symmetry: The symmetry axiom basically requires that the evaluation result be invariant to random shuffling of the providers. Note that by the design of Algorithm <ref>, the distribution of N^T(i), the number of times provider i's oracle is accessed, will not change by shuffling the providers for any i ∈ [n]; thus our revenue allocation approach enjoys the symmetry property.
* Efficiency: The efficiency axiom requires that the sum of the values of all providers equal the value of the whole dataset. In our case, by the definition of N^T(i), we have that ∑^n_i=1 N^T(i) equals the budget for training. Thus, our revenue allocation approach also shares a certain efficiency property.
* Null Player: This axiom requires assigning zero value to data providers that make no contribution. In our case, any data provider has a chance to get compensation no matter how much contribution it makes. However, our revenue allocation approach ensures that providers making no contribution will get asymptotically minimal compensation. To see this, assume that there exists an i ∈ [n] such that U(w +γ𝒪_i(w)) ≤ U(w) for any w ∈ℝ^d and γ>0 (which indicates that provider i cannot make a positive contribution); then by the design of Algorithm <ref>, we will have p^1_i≥ p^2_i ≥…≥α/n. This means that the probability of accessing provider i decreases asymptotically to the lower bound. When α is small, the compensation received by provider i will be negligible.
By the intrinsic design of our algorithm, it is hard for our revenue allocation approach to satisfy the linearity axiom; however, it is unclear why linearity matters in practical applications.
In fact, previous research also tries to remove some of the four axioms for a more practical solution. For example, <cit.> removes the efficiency axiom and proposes a variant of Shapley value which has better performance on many practical machine learning tasks.
Here we relax the linearity axiom for a similar reason.
§ EVALUATION
We evaluate our method empirically on both real-world data and synthetic data experiments.
We concentrate on three main questions.
First, for revenue allocation, we ask whether our method provides similar allocation quality as the state-of-the-art methods based on Shapley value. For this purpose, we choose two Shapley value based methods as the baseline methods: Gradient Shapley (G-Shapley) <cit.> and Federated Shapley value with permutation sampling estimation (FedSV-PS) <cit.>. These two methods do not require re-training of the model multiple times, as required by most of the Shapley value based methods, and are suitable for modern large-scale machine learning models like Deep Neural Networks (DNN).
Second, for budget allocation, we ask whether our method delivers better performing model than the standard training procedure based on uniform sampling.
Finally, we study the runtime of our method. We compare it with uniform sampling, G-Shapley and FedSV-PS on real-world data sets.
§.§ Main Results
We implement the experiments on four image classification task data sets: MNIST <cit.>, Fashion-MNIST (FMNIST) <cit.>, Kuzushiji-MNIST (KMNIST) <cit.>, and CIFAR-10 <cit.>. We create 100 data providers with each provider having 400 training samples. We then use another 1000 samples from the training set as the hold-out set to compute the utility, which is defined as the negative loss on the hold-out set. Finally, we use the testing set which contains 10,000 samples to measure the accuracy of the model.
See Appendix <ref> for more details about data preparation.
For MNIST, FMNIST and KMNIST, we train a two-layer neural network with one-hidden layer of 300 neurons; for CIFAR-10, we train a Convolutional Neural Network (CNN) with one convolutional layer and two feed-forward layers. Each method is repeated 10 times with different random seeds.
To create quality discrepancies across providers, we corrupt β% of the providers' samples by changing a sample's correct label to a randomly chosen wrong label. For provider 1–10, β=0; for provider 11–20, β=20; for provider 21–30, β=50; for the rest of the providers, β=90. Additional details are provided in Appendix <ref>.
Revenue Allocation: We first examine whether our method achieves similar revenue allocation results as Shapley-based mehtods (we will show later it solves the problem much faster). For our method and uniform sampling, the revenue is allocated proportional to the number of accesses, while for G-Shapley and FedSV-PS, the revenue is allocated proportional to the positive part of Shapley value, defined as max{SV,0}, where SV denotes the Shapley value. The result is shown in Figure <ref>. We see that OSMD has a similar revenue allocation performance as G-Shapley and FedSV-PS on MNIST, KMNIST and FMNIST. OSMD tends to allocate more revenue on the most valuable providers while G-Shapley and FedSV-PS seem to keep a balance between different quality providers. Interestingly, we see that G-Shapley underperforms on CIFAR-10. This might be because the task is difficult, and with a high portion of data being corrupted, the one-step gradient approximation does not work well. The take-away message is that the new method's allocation closely resembles that of Shapley-based methods.
Budget Allocation: We study the performance of budget allocation. We show the model accuracy on the hold-out test set along the training process. The results in Figure <ref> show that OSMD successfully helps the model obtain a better performance than uniform sampling by identifying the valuable providers and adjusting the sampling distribution accordingly: the new algorithm uses the budget efficiently.
Computational Cost: We also compare the runtime (wall clock time) of the four methods mentioned above. Higher runtimes imply higher costs on the arbiter side, thus reducing runtime is a crucial concern in designing practical data market, as the higher the costs, the higher the proportion of budget that needs to be used to cover those costs and thus the worse the models delivered to consumers. To measure the scalability of different methods, we fix the number of samples that each provider has to 100 and increase the number of providers from 50 to 400. The result is shown in Figure <ref>. Compared with G-Shapley and FedSV-PS, our method is much faster, only marginally more costly than uniform sampling and highly scalable. This result indicates that our method is more suitable to modern ML applications where the size of both dataset and model is large.
§.§ Mixture Linear Regression
In the previous section, the difference in data quality between providers was created by corrupting labels. In this section, we present a different approach to creating such a difference, namely by introducing a distribution shift in ℙ(y | x).
More specifically, we implement a mixture linear regression experiment.
Assume that the data consumer wants to train a linear regression model. Both the data consumer's and the data providers' features are drawn from the same distribution ℙ_X; however, the response variable is generated by the following mixture linear regression model:
x_ij∈ℝ^d ∼ℙ_X, j = 1, …, m_i, i=0,1,…,n,
z_0,z_1,…,z_n ∼ uniform([K]),
y_ij = ⟨ w_z_i, x_ij⟩ + ε_ij, ε_ij i.i.d.∼ N(0,0.5^2),
j = 1, …, m_i, i=0,1,…,n,
where z_0 is the latent group label of the consumer, and z_i with i ≥ 1 is the latent group label of data provider i.
The set of regression parameters is {w_1,…,w_K}.
This way, data providers with z_i=z_0 should have the same true parameter as the data consumer, thus is more valuable.
We set K=4 and z_0=1.
We let d=1000 and generate each entry of w_k i.i.d. from U[0.5(k-1), 0.5 k] for k=1,…,4. In addition, we generate entries of x_ij i.i.d. from N(0,1) and
we initialize the parameter from (-1,…,-1)^⊤.
See details in Appendix <ref>.
We compare the budget allocation and revenue allocation of our method with other baselines.
The results are shown in Figure <ref>. Regarding revenue allocation, we see that both OSMD and FedSV-PS generate fair revenue allocations, by which we mean that data providers with smaller ‖ w_z_i - w_z_0‖ are compensated higher. Interestingly, we see that the allocation result of G-Shapley is opposite to the truth. We conjecture this is because providers with larger ‖ w_z_i - w_z_0‖ will make gradients evaluated at the initial point have larger norm, and thus achieve more utility gain at the initial stage of the training. This is another example, in addition to the previous CIFAR-10 case, where G-Shapley fails. Regarding budget allocation, we see that there is a clear phase transition. Initially, since providers from all groups help, the parameter estimates get closer to the true parameter. All providers seem to have similar values, thus OSMD and uniform sampling perform similarly. As the parameter estimation gets closer to the true parameter, there is an increasing difference between providers, and OSMD starts to tell apart the "right" providers from the "wrong" providers. This experiment demonstrates the point raised in Section <ref> about non-stationarity: that providers whose data is useful at a point in time may not be useful later. The take-away message is that taking the non-stationarity of the problem into consideration is necessary for the algorithm to perform well.
§ CONCLUSION
We proposed a new algorithm based on adaptive sampling that solves the problems of budget allocation and revenue allocation in linear time, thus paving the way to the implementation of practical data markets. The new algorithm achieves state-of-the-art revenue allocation performance while ensuring efficient budget allocation performance at the same time. The key insight of the algorithm design is to open the black box of the training process, and use an adaptive sampling technique to encourage more valuable providers to make more contributions. We offer theoretical guarantees and an empirical evaluation of the algorithm.
There are many interesting future directions.
First, the problem of pricing each query of model update remains open.
Second, while our algorithm can be easily extended to serve multiple consumers, it is worth noting that in practice, different consumers may cooperate or compete with each other. For example, the value of a machine learning model possessed by a company may not only depend on its own accuracy, but may also depend on the performance of the model possessed by the company's competitor. The study of such interactions between multiple consumers provides a fruitful research direction.
In addition, one may also consider the effect of strategic behavior, especially if providers can collude.
Finally, combining differential privacy with our method can also be a promising direction.
§ ACKNOWLEDGEMENTS
The research of MK is supported in part by NSF Grant ECCS-2216912.
This work was completed in part with resources provided by the University of Chicago Booth’s Mercury Computing Cluster.
§ DETAILS ABOUT EXPERIMENTS
§.§ Details about Real-World Experiments
Dataset: We implement the experiments on four image classification task datasets: MNIST <cit.>, Fashion-MNIST (FMNIST) <cit.>, Kuzushiji-MNIST (KMNIST) <cit.>, and CIFAR-10 <cit.>. All datasets are composed of training set and testing set. The sample size information is summarized in Table <ref>. Both training set and testing set of all datasets are perfectly balanced, that is, each label contains the equal number of samples.
We create 100 data providers with each provider having 400 training samples. We then use 1000 samples from the rest of the training set as the hold-out set to compute the utility, which is defined as the negative loss on the validation set. Finally, we use the testing set to measure the accuracy of the model.
Machine Learning Model: For MNIST, FMNIST, and KMNIST, we train a two-layer neural network with one hidden layer of 300 neurons; for CIFAR-10, we train a Convolutional Neural Network (CNN) with one convolutional layer and two feed-forward layers. We use the cross-entropy loss as the loss function.
Model Update Oracle: For a given model parameter w, we choose 𝒪_i(w)=w^+_i - w, where w^+_i is obtained by running mini-batch SGD on provider i's data for one epoch. More specifically, we divide the provider's data into mini-batches, that is, 𝒟_i = ∪^L_l=1ℬ_l, and then let w^0=w, w^l+1 = w^l - (γ_local/B)∇ l(w^l, ℬ_l) for l=0,…,L-1, where B is the size of the mini-batches and γ_local is the local step size. Finally, we let w^+_i=w^L.
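As a concrete illustration, this oracle can be sketched as follows (a minimal NumPy sketch under our own naming; grad_fn is assumed to return the gradient of the summed mini-batch loss and is not part of the paper's notation):

import numpy as np

def local_update_oracle(w, X, y, grad_fn, batch_size, gamma_local, rng):
    # One epoch of mini-batch SGD on provider i's data, starting from the current
    # global parameter w; returns the model update O_i(w) = w_i^+ - w.
    # The step matches w^{l+1} = w^l - (gamma_local / B) * grad of the batch loss.
    idx = rng.permutation(len(y))
    w_local = w.copy()
    for start in range(0, len(y), batch_size):
        b = idx[start:start + batch_size]
        w_local = w_local - (gamma_local / len(b)) * grad_fn(w_local, X[b], y[b])
    return w_local - w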
Parameter Choice: We choose the optimization stepsizes {γ_t}^T-1_t=0 in Algorithm <ref> as γ^t=γ_global=0.1 for all t=0,…,T-1. Besides, we let α=0.01, and η=1.0. We set the total number of communication rounds between the arbiter and providers as T=500 for MNIST, FMNIST and KMNIST, and as T=1000 for CIFAR10. Furthermore, we have K=0.1 n.
For G-Shapley, we choose the same convergence criterion as in <cit.>; however, we set the back-check length to 30 for CIFAR-10 and to 10 for the remaining tasks, and we set the convergence threshold to 0.01. For FedSV-PS, we use the same convergence criterion to decide termination in each communication round, with a back-check length of 10 and a convergence threshold of 0.05. In addition, each method is repeated 10 times with different random seeds.
Corruption: To create quality discrepancies across providers, we corrupt β% of each provider's samples by changing a sample's correct label to a randomly chosen wrong label. For providers 1–10, β=0; for providers 11–20, β=20; for providers 21–30, β=50; for the remaining providers, β=90.
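A sketch of this corruption step (illustrative NumPy; here beta is taken as a fraction rather than a percentage, and the function name is ours):

def corrupt_labels(y, beta, num_classes, rng):
    # Flip a beta-fraction of the labels to a uniformly chosen *wrong* class.
    y = y.copy()
    flip_idx = rng.choice(len(y), size=int(round(beta * len(y))), replace=False)
    for i in flip_idx:
        wrong = [c for c in range(num_classes) if c != y[i]]
        y[i] = rng.choice(wrong)
    return y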
Computational Cost Experiment: To compare the computational cost of our method with other competitors, we fix the number of training samples possessed by each provider as 100, and set the number of providers as 50, 100, 200 and 400, and record the runtime.
§.§ Details about Mixture Linear Regression Experiments
Data Generation Process: Both data consumer's and data providers' features are drawn from the same distribution ℙ_X; however, the response variable is generated by the following mixture linear regression model:
x_ij∈ℝ^d ∼ℙ_X for all j = 1, …, m_i, i=0,1,…,n,
z_0,z_1,…,z_n drawn uniformly at random from [K],
y_ij = ⟨ w_z_i, x_ij⟩ + ε_ij, ε_ij i.i.d. ∼ N(0,0.5^2) for all j = 1, …, m_i, i=0,1,…,n,
where z_0 is the latent group label of the consumer, and z_i with i ≥ 1 is the latent group label of data provider i.
The set of regression parameters is {w_1,…,w_K}.
This way, data providers with z_i=z_0 should have the same true parameter as the data consumer, thus is more valuable.
We set K=4 and z_0=1. Besides, we let d=1000 and generate each entry of w_k i.i.d. from U[0.5(k-1), 0.5 k] for k=1,…,4. In addition, we generate entries of x_ij i.i.d. from N(0,1) and
we initialize the parameter from (-1,…,-1)^⊤. Given a parameter estimation ŵ, the estimation error is measured by ‖ŵ - w_1 ‖.
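A sketch of this data-generating process (illustrative NumPy; function and variable names are ours, not the paper's):

import numpy as np

def make_mixture_regression(n_providers, m, d=1000, K=4, noise_sd=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Group parameters: entries of w_k drawn i.i.d. from U[0.5(k-1), 0.5k].
    W = np.stack([rng.uniform(0.5 * k, 0.5 * (k + 1), size=d) for k in range(K)])
    # Latent group labels; the experiment fixes the consumer's group z_0 to the first component.
    z = rng.integers(0, K, size=n_providers + 1)
    z[0] = 0
    datasets = []
    for i in range(n_providers + 1):
        X = rng.standard_normal((m, d))                      # features x_ij ~ N(0, I_d)
        y = X @ W[z[i]] + noise_sd * rng.standard_normal(m)  # responses with N(0, 0.5^2) noise
        datasets.append((X, y))
    return W, z, datasets                                    # datasets[0] belongs to the consumer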
Training Process: For model update oracle, we let 𝒪_i(w)=-∇ l (w, 𝒟_i), where 𝒟_i={(x_ij, y_ij)}^m_i_j=1 and
l (w, 𝒟_i)=1/2 m_i∑^m_i_j=1( y_ij - ⟨ w,x_ij⟩)^2.
Besides, we choose the optimization stepsizes {γ_t}^T-1_t=0 in Algorithm <ref> as γ^t=γ_global=0.01 for all t=0,…,T-1. In addition, we let α=0.01 and η=0.001. We set the total number of communication rounds between the arbiter and providers as T=1500. Furthermore, we have K=0.1 n. The parameter settings of G-Shapley and FedSV-PS are the same as in Section <ref>.
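For the least-squares loss above, the gradient oracle 𝒪_i(w) = -∇ l(w, 𝒟_i) reduces to a single matrix-vector expression (sketch):

def grad_oracle(w, X, y):
    # O_i(w) = -grad of (1/(2 m_i)) * sum_j (y_ij - <w, x_ij>)^2, i.e. (1/m_i) X^T (y - X w).
    return X.T @ (y - X @ w) / len(y)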
§ TECHNICAL PROOFS
§.§ Efficient Solution to (<ref>)
The following proposition justifies Algorithm <ref>.
Let
p̃^t+1_i = p^t_i exp( ηû^t_i ), i ∈ [n].
Let π:[n]↦[n] be a permutation such that p̃^t+1_π(1)≤…≤p̃^t+1_π(n). Let i^t_⋆ be the smallest integer i such that
p̃^t+1_π(i) ( 1 - ((i-1)/n) α ) > (α/n) ∑^n_j=i p̃^t+1_π(j).
Then the solution of (<ref>) is
p^t+1_i =
α / n if π^-1(i) < i^t_⋆,
( (1 - ((i^t_⋆-1)/n) α ) p̃^t+1_i ) / ( ∑^n_j=i^t_⋆ p̃^t+1_π(j) ) otherwise.
First, we show that the solution of (<ref>), p^t+1, can be found as
p̌^t+1 = min_ q ∈ℝ^n_+ - η⟨ q, û^t ⟩ + D_Φ (q ‖ p^t),
p^t+1 = min_ q ∈𝒜 D_Φ (q ‖ p̌^t+1)
The optimality condition for p̌^t+1 implies that
-ηû^t + ∇Φ( p̌^t+1) - ∇Φ( p^t ) = 0
By Lemma <ref>, the optimality condition for p^t+1 implies that
⟨ p - p^t+1, ∇Φ (p^t+1) - ∇Φ (p̌^t+1) ⟩≥ 0, for all p ∈𝒜.
Combining the last two displays, we have
⟨ p - p^t+1, -ηû^t + ∇Φ( p^t+1) - ∇Φ( p^t ) ⟩≥ 0, for all p ∈𝒜.
By Lemma <ref>, this is the optimality condition for p^t+1 to be the solution of (<ref>).
Note that (<ref>) implies that
- ηû^t_i + logp̌^t+1_i - log p^t_i = 0 for all i ∈ [n].
Therefore,
p̌^t+1_i = p^t_i exp( ηû^t_i ) = p̃^t+1_i for all i ∈ [n].
and the final result follows from Lemma <ref>.
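For concreteness, the update (<ref>) together with the projection characterized by this proposition can be sketched as follows (an illustrative, 0-indexed NumPy version; variable names are ours, and u_hat stands for the utility estimate û^t):

import numpy as np

def osmd_step(p, u_hat, eta, alpha):
    # Exponentiated-gradient step followed by the Bregman projection onto
    # A = simplex intersected with [alpha/n, 1]^n, as characterized above.
    n = len(p)
    p_tilde = p * np.exp(eta * u_hat)
    order = np.argsort(p_tilde)                      # permutation pi (ascending order)
    sorted_p = p_tilde[order]
    suffix = np.cumsum(sorted_p[::-1])[::-1]         # suffix[i] = sum_{j >= i} sorted_p[j]
    # Smallest index (0-based) satisfying the threshold condition of the proposition.
    i_star = next(i for i in range(n)
                  if sorted_p[i] * (1 - (i / n) * alpha) > (alpha / n) * suffix[i])
    p_new = np.empty(n)
    p_new[order[:i_star]] = alpha / n                # coordinates clipped to the floor alpha/n
    rest = order[i_star:]
    p_new[rest] = (1 - i_star * alpha / n) * p_tilde[rest] / p_tilde[rest].sum()
    return p_new

For α < 1 the threshold condition always holds for the largest coordinate, so i_star is well defined, and by construction the returned vector sums to one with every entry at least α/n.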
§.§ Proof of Theorem <ref>
Our proof is based on <cit.>.
Recall that we have
R_B = 𝔼[ max_ (a^t) ∈Γ_m K ∑^T-1_t=0Δ U^t_a^t - ∑^T-1_t=0Δ U^t_cm]
= 𝔼[ max_ (a^t) ∈Γ_m ∑^T-1_t=0∑^K-1_k=0Δ U^t_a^t - ∑^T-1_t=0∑_i ∈ S^tΔ U^t_i ]
= 𝔼[ max_ (a^t) ∈Γ_m ∑^T-1_t=0∑^K-1_k=0( Δ U^t_a^t - Δ U^t_i^t,k) ].
Let
( a^0_⋆,…,a^T-1_⋆) = _(a^t) ∈Γ_m ∑^T-1_t=0Δ U^t_a^t ,
then there exists 0=t_1<t_2<…<t_m<t_m+1= T such that for any j=1,…,m, a^t_⋆ is constant when t_j ≤ t <t_j+1.
To simplify the notation, we denote a̅^j_⋆=a^t_j_⋆, we then decompose the regret as
R_B = 𝔼[ ∑^m_j=1∑^t_j+1-1_t=t_j∑^K-1_k=0( Δ U^t_a̅^j_⋆ - Δ U^t_i^t,k) ] = ∑^m_j=1𝔼[𝔼[ ∑^t_j+1-1_t=t_j∑^K-1_k=0( Δ U^t_a̅^j_⋆ - Δ U^t_i^t,k) | p^t_j] ].
Let u^t=(Δ U^t_1,…,Δ U^t_n)^⊤. Note that since U(·) ∈ [0,1], we have Δ U^t_i ∈ [-1,1] for all i ∈ [n]. Besides, we have
𝔼[ ∑^t_j+1-1_t=t_j∑^K-1_k=0( Δ U^t_a̅^j_⋆ - Δ U^t_i^t,k) | p^t_j]
= 𝔼[ max_p ∈𝒫_n-1∑^t_j+1-1_t=t_j∑^K-1_k=0⟨ p - p^t, u^t ⟩ | p^t_j]
≤𝔼[ max_p ∈𝒜∑^t_j+1-1_t=t_j∑^K-1_k=0⟨ p - p^t, u^t ⟩ | p^t_j] + K α/n(n-1) (t_j+1-t_j)
= 𝔼[ max_p ∈𝒜 K ∑^t_j+1-1_t=t_j⟨ p - p^t, û^t ⟩ | p^t_j] + K α( 1 - 1/n) (t_j+1-t_j).
Since
D_Φ(x_1 ‖ x_2 ) = D_Φ(x_3 ‖ x_2 ) + D_Φ(x_1 ‖ x_3 ) + ⟨∇Φ(x_2) - ∇Φ(x_3), x_3 - x_1 ⟩,
x_1,x_2,x_3∈𝒟,
then by (<ref>), we have
⟨ p - p^t+1, û^t ⟩ ≤1/η⟨ p - p^t+1, ∇Φ( p^t+1) - ∇Φ( p^t ) ⟩
= 1/η( D_Φ( p ‖ p^t ) - D_Φ( p ‖ p^t+1) - D_Φ( p^t+1 ‖ p^t ) ).
Thus, we have
∑^t_j+1-1_t=t_j⟨ p - p^t, û^t ⟩
= ∑^t_j+1-1_t=t_j⟨ p^t+1 -p^t, û^t ⟩ + ∑^t_j+1-1_t=t_j⟨ p -p^t+1, û^t ⟩
≤∑^t_j+1-1_t=t_j⟨ p^t+1 -p^t, û^t ⟩ + 1/η∑^t_j+1-1_t=t_j D_Φ( p ‖ p^t ) - D_Φ( p ‖ p^t+1) - D_Φ( p^t+1 ‖ p^t )
= ∑^t_j+1-1_t=t_j⟨ p^t+1 -p^t, û^t ⟩ + 1/η( D_Φ( p ‖ p^t_j) - D_Φ( p ‖ p^t_j+1) ) - 1/η∑^t_j+1-1_t=t_j D_Φ( p^t+1 ‖ p^t )
≤∑^t_j+1-1_t=t_j⟨ p^t+1 -p^t, û^t ⟩ + 1/η D_Φ( p ‖ p^t_j) - 1/η∑^t_j+1-1_t=t_j D_Φ( p^t+1 ‖ p^t )
By (<ref>), we have
û^t = 1/η(∇Φ( p̃^t+1) - ∇Φ( p^t ) ),
and since (<ref>),
we have
⟨ p^t+1 -p^t, û^t ⟩ = 1/η⟨ p^t+1 -p^t, ∇Φ( p̃^t+1) - ∇Φ( p^t ) ⟩
= 1/η( D_Φ( p^t ‖ p̃^t+1) + D_Φ( p^t+1 ‖ p^t ) - D_Φ( p^t+1 ‖ p̃^t+1) )
≤1/η( D_Φ( p^t ‖ p̃^t+1) + D_Φ( p^t+1 ‖ p^t ) ).
Combine (<ref>) and (<ref>), we have
∑^t_j+1-1_t=t_j⟨ p -p^t, û^t ⟩≤1/η( D_Φ( p ‖ p^t_j) + ∑^t_j+1-1_t=t_j D_Φ( p^t ‖ p̃^t+1) ).
Note that
D_Φ( p^t ‖ p̃^t+1) = ∑^n_i=1 p^t_i ( exp( ηû^t_i ) - 1 - ηû^t_i ) ≤η^2/2∑^n_i=1 p^t_i (û^t_i)^2 ≤η^2 n/2,
thus we have
1/η∑^t_j+1-1_t=t_j D_Φ( p^t ‖ p̃^t+1) ≤η n (t_j+1-t_j)/2.
Combine (<ref>), (<ref>), and (<ref>), we have
𝔼[ ∑^t_j+1-1_t=t_j∑^K-1_k=0( Δ U^t,k_a̅^j_⋆ - Δ U^t,k_i^t,k) | p^t_j] ≤ K α( 1 - 1/n) (t_j+1-t_j) + η n K (t_j+1-t_j)/2
+ K 𝔼[ max_p ∈𝒜 D_Φ( p ‖ p^t_j) /η | p^t_j].
Note that p^t_j∈𝒜, thus we have p^t_j_i ≥α / n for all i ∈ [n] and
D_Φ( p ‖ p^t_j) = ∑^n_i=1 p_i log p_i + ∑^n_i=1 p_i log( 1/p^t_j_i) ≤log (n/α),
which implies that
𝔼[ ∑^t_j+1-1_t=t_j∑^K-1_k=0( Δ U^t,k_a̅^j_⋆ - Δ U^t,k_i^t,k) | p^t_j] ≤ K α( 1 - 1/n) ( t_j+1-t_j ) + η n K (t_j+1-t_j)/2 + K log(n/α)/η,
and
R_B ≤α K T + η n K T/2 + m K log(n/α)/η.
The final result then follows by B=KT.
§ USEFUL LEMMAS
Suppose that f is a differentiable convex function defined on domf, and 𝒳⊆domf is a closed convex set. Then x is the minimizer of f on 𝒳 if and only if
∇ f (x)^⊤ (y-x) ≥ 0 for all y ∈𝒳.
See Section 4.2.3 of <cit.>.
Let α∈ [0,1], 𝒜=𝒫_n-1∩ [ α/n, 1 ]^n. For y ∈ [0,∞)^n, let x=min_v ∈𝒜 D_Φ(v ‖ y). Suppose y_1≤ y_2 ≤…≤ y_n. Let i^⋆ be the smallest value such that
y_i^⋆ ( 1 - ((i^⋆-1)/n) α ) > (α/n) ∑^n_i=i^⋆ y_i.
Then
x_i =
α / n if i < i^⋆,
( (1 - ((i^⋆-1)/n) α ) y_i ) / ( ∑^n_j=i^⋆ y_j ) otherwise.
Consider the following constrained optimization problem:
min_ u ∈ [0,∞)^n ∑^n_i=1 u_i logu_i/y_i,
s.t. ∑^n_i=1 u_i = 1,
u_i ≥α/n, i ∈ [n].
Since x is the solution to this problem, by the optimality condition, there exists λ,ν_1,…,ν_n ∈ℝ such that
logx_i/y_i + 1- λ - ν_i = 0,
i ∈ [n],
∑^n_i=1 x_i = 1,
x_i - α/n ≥ 0, i ∈ [n],
ν_i ≥ 0, i ∈ [n],
ν_i ( x_i - α/n) = 0,
i ∈ [n].
By (<ref>), we have x_i=y_i exp ( -1 + λ + ν_i ). By (<ref>) and (<ref>), when x_i=α/n, we have x_i=y_i exp ( -1 + λ + ν_i ) ≥ y_i exp ( -1 + λ ); when x_i>α/n, we have x_i = y_i exp ( -1 + λ ).
Assume that x_1 = … = x_i^⋆-1 = α/n < x_i^⋆≤…≤ x_n. Then
1 = ∑^n_i=1 x_i = ( i^⋆ -1 ) α/n + exp (-1+λ) ·∑^n_i=i^⋆ y_i,
which implies that
exp (-1+λ) = ( 1 - (i^⋆ -1) α/n ) / ( ∑^n_i=i^⋆ y_i ).
Thus, we have
x_i^⋆ = y_i^⋆ exp (-1+λ) = y_i^⋆ ( 1 - (i^⋆ -1) α/n ) / ( ∑^n_i=i^⋆ y_i ) > α/n,
which implies that
y_i^⋆ ( 1 - ((i^⋆-1)/n) α ) > (α/n) ∑^n_i=i^⋆ y_i .
To complete the proof, we then only need to show that
y_i^' ( 1 - ((i^'-1)/n) α ) ≤ (α/n) ∑^n_i=i^' y_i
for all 1 ≤ i^'≤ i^⋆-1. The result then follows from (<ref>) and (<ref>).
To prove (<ref>), recall that for any 1 ≤ i^'≤ i^⋆-1, we have α/n=y_i^'exp ( -1 + λ + ν_i^' ), and because y_1 ≤⋯≤ y_n, we have ν_1 ≥⋯≥ν_i^⋆-1. This way, we have
( i^⋆-i^') α/n = ∑^i^⋆-1_i=i^' y_i exp( -1 + λ + ν_i ) ≤exp( -1 + λ + ν_i^') ∑^i^⋆-1_i=i^' y_i.
On the other hand, by (<ref>), we have
1 - ( i^⋆ -1 ) α/n = exp (-1+λ) ∑^n_i=i^⋆ y_i ≤exp( -1 + λ + ν_i^') ∑^n_i=i^⋆ y_i.
Combining (<ref>) and (<ref>), we have
( 1 - (i^' -1) α/n ) / ( ∑^n_i=i^' y_i )
= ( 1 - (i^⋆ -1) α/n + (i^⋆ - i^') α/n ) / ( ∑^n_i=i^' y_i )
≤ exp( -1 + λ + ν_i^') ( ∑^i^⋆-1_i=i^' y_i + ∑^n_i=i^⋆ y_i ) / ( ∑^n_i=i^' y_i )
= exp( -1 + λ + ν_i^')
= (α/n) / y_i^',
which then implies (<ref>).
|
http://arxiv.org/abs/2306.01546v1
|
20230602135030
|
Publicly available datasets of breast histopathology H&E whole-slide images: A systematic review
|
[
"Masoud Tafavvoghi",
"Lars Ailo Bongo",
"Nikita Shvetsov",
"Lill-Tove Rasmussen Busund",
"Kajsa Møllersen"
] |
eess.IV
|
[
"eess.IV",
"cs.CV",
"cs.LG",
"68T01 General topics in artificial intelligence",
"I.2.0"
] |
Publicly available datasets of breast histopathology H&E whole-slide images: A systematic review
Masoud Tafavvoghi1*,
Lars Ailo Bongo2,
Nikita Shvetsov2,
Lill-Tove Rasmussen Busund 3,
Kajsa Møllersen1
1 Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
2 Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
3 Department of Medical Biology, UiT The Arctic University of Norway, Tromsø, Norway
* [email protected]
§ ABSTRACT
Advancements in digital pathology and computing resources have made a significant impact in the field of computational pathology for breast cancer diagnosis and treatment. However, access to high-quality labeled histopathological images of breast cancer is a big challenge that limits the development of accurate and robust deep learning models. In this systematic review, we identified the publicly available datasets of breast H&E stained whole-slide images (WSI) that can be used to develop deep learning algorithms. We systematically searched nine scientific literature databases and nine research data repositories. We found twelve publicly available datasets, containing 5153 H&E WSIs of breast cancer. Moreover, we reported image metadata and characteristics for each dataset to assist researchers in selecting proper datasets for specific tasks in breast cancer computational pathology. In addition, we compiled a list of patch and private datasets that were used in the included articles as a supplementary resource for researchers. Notably, 22% of the included articles utilized multiple datasets, and only 12% of the articles used an external validation set, suggesting that the performance of other developed models may be susceptible to overestimation. The TCGA-BRCA was used in 47.4% of the selected studies. This dataset has a considerable selection bias that can impact the robustness and generalizability of the trained algorithms. There is also a lack of consistent metadata reporting of breast WSI datasets that can be an issue in developing accurate deep learning models, indicating the necessity of establishing explicit guidelines for documenting breast WSI dataset characteristics and metadata.
§ INTRODUCTION
One important area of active research in pathology is the use of deep learning to analyze H&E histopathology whole-slide images (WSIs), the gold standard for the clinical diagnosis of cancer <cit.>. Deep learning algorithms can identify complex patterns in billion-pixel microscope images that may not be readily apparent to human experts (an example is shown in Fig <ref>). For instance, deep learning models have been used to predict breast cancer recurrence <cit.> and to classify breast cancer subtypes <cit.> from histopathological WSIs.
The lack of adequately labeled datasets is a big limitation in computational pathology for breast cancer, as the performance of deep learning algorithms depends on the availability of sufficient high-quality training and validation data. The use of large and diverse training datasets allows algorithms to identify complex patterns and nonlinear relationships more accurately. In addition, using large and independent validation datasets can increase the reliability of models and mitigate overfitting risk, which in turn improves the generalizability of the models <cit.>.
With advances in technology to produce and utilize data, there is a growing recognition of the benefits of data sharing, so greater openness in scientific research is being advocated for by scientists. The FAIR principles <cit.>, which were developed in response to this push for open-access data, provide a framework for making research data more Findable, Accessible, Interoperable, and Reusable. Based on the FAIR principles, medical data should be easily findable by both humans and machines by using a standardized and well-documented approach for metadata and data description, and by making use of appropriate metadata standards, taxonomies, and ontologies. Data should be made accessible to all researchers who are authorized to access it in accordance with relevant ethical and legal frameworks. It should be feasible to integrate the data with other datasets and software tools in a seamless manner. Interoperability can be facilitated by using open data formats, data models, and data dictionaries. In addition, data should be designed with the intention of supporting data reuse, allowing other researchers to build upon the data and reproduce the results. To support data reuse, data should be accompanied by complete and accurate documentation, licensing information, and data citation. This approach enriches access to larger and more diverse datasets <cit.>, enhances faster development of deep learning models <cit.>, and improves their accuracy and performance <cit.>, which can consequently lead to better patient outcomes and improved quality of care <cit.>.
Like other medical data, histopathology images are protected by ethical and legal regulations related to privacy, security, and consent. To share medical data, data owners should adhere to relevant regulatory requirements, such as the General Data Protection Regulation (GDPR) in the European Union and the Health Insurance Portability and Accountability Act (HIPAA) in the United States. After the EU GDPR came into force in May 2018 <cit.>, data sharing in general has become more challenging due to concerns over maintaining the confidentiality of patient health information, especially in light of high-profile data breaches and incidents of data misuse. In addition to privacy and security concerns, intellectual property and regulatory issues can pose significant barriers to sharing medical data. Resource constraints can also limit the ability of researchers to share medical data. The costs associated with collecting, storing, and curating medical data can be substantial, and many researchers may lack the necessary resources or expertise to manage and share their data effectively.
As the field of big data analysis continues to expand, having access to summaries of existing data has become more advantageous for researchers. Such overviews can assist in identifying relevant datasets without having to begin a new data collection process. Additionally, having a comprehensive and standardized set of public datasets can facilitate the reproducibility and comparability of research findings across different studies. A systematic review of available public datasets can help to identify gaps and limitations in the existing datasets and opportunities for improving the quality and diversity of available datasets. To address this, Hulsen <cit.> has conducted a systematic review, providing an overview of publicly available patient-centered datasets of prostate cancer, presented in imaging, clinical, and genomics categories. He identified 42 publicly available datasets that can efficiently support prostate cancer researchers in selecting appropriate data resources. He found that most of the datasets do not follow the FAIR principles, as some have legacy issues and need decoding work that might increase the possibility of human error.
In <cit.>, the authors have systematically reviewed characteristics of publicly available datasets of skin cancer images, which can be leveraged for the advancement of machine learning algorithms for skin cancer diagnosis. They have reported 21 open-access datasets and 17 open-access atlases available for data extraction. They came to the conclusion that there is inconsistency in reporting image metadata, and population representations are limited in open-access datasets of skin cancer. Leung et al. <cit.> have reviewed the datasets available for machine learning in genomic medicine, including an overview of available omic datasets. They suggested using multiple data sources to rectify problems arising from the missing information from individual datasets.
This systematic review aims to identify and assess the characteristics of all publicly available datasets of breast H&E WSIs to reduce the demand for setting up new studies to collect data for the development of deep learning algorithms. This overview helps to identify potential data sources, the suitability of each dataset for specific tasks in computational pathology, and their quality and biases to ensure the generalizability of machine learning models. To the best of our knowledge, there has not been any study specifically targeting the available datasets of breast H&E WSIs. However, several studies <cit.> have mentioned a small number of such datasets, suggesting the necessity for a comprehensive overview of all available datasets in this field.
§ METHODS
We conducted a systematic review based on the PRISMA guidelines <cit.> to identify all publicly available breast H&E WSI datasets appropriate for deep learning (S1_Checklist). Since this systematic review does not evaluate direct health outcomes, it was not eligible to be registered with PROSPERO <cit.>.
Our inclusion criteria were papers using, reviewing, or mentioning any publicly available dataset of human breast H&E WSIs. These may be introduced in machine learning challenges and contests, or published for research purposes. A search was conducted in March 2022 using the following criteria: ("deep learning" OR "machine learning") AND ("whole slide images" OR WSI) AND (breast) AND (histology OR histopathology OR pathology) AND (data OR dataset OR "data set"). In total, nine scientific literature databases were queried: Pubmed, Medline, MDPI, Web of Science, Science Direct, Semantic Scholar, IEEE-Explore, Association for Computing Machinery (ACM) digital library, and the dblp computer science bibliography. To ensure a manageable scope for this review, results were limited to full-text articles in English published between the years 2015-2022.
Additionally, the following exclusion criteria were applied:
* not of human breast tissue, for instance, use of histology images of canine or mouse
* using other modalities like CT or MRI instead of histology images
* images of other organs (like lung, skin, etc.) rather than breast
* only patch or image tile datasets instead of whole slide images
* non-image data like genomics or clinical data
* tissues not stained with H&E, e.g. immunohistochemistry (IHC) images
* not publicly available datasets
* use of unoriginal or subset datasets derived entirely from public datasets
Fig <ref> summarizes the workflow diagram of the data collection. Of the 1713 articles from the search results, 476 articles were removed as duplicates. The remaining 1237 identified articles were then screened by title, abstract, and full-text, respectively, by two independent reviewers (MT and (KM or LAB or NS)) and the relevant studies were selected based on the inclusion criteria. In the event of disagreement on the inclusion, a third co-author (KM or LB) checked the article to make the final decision on whether it should be included or not. Out of 1237 articles, 619, 176, and 303 articles were excluded by their title, abstract and full-text assessment, respectively. This resulted in 139 articles that were included in this review.
In addition to the scientific literature databases, we searched nine online databases and repositories known to contain public datasets. We used search strings in the S1_Appendix to find breast histopathology datasets in The Cancer Imaging Archive, US National Cancer Institute (NIH), Google Data Research, Zenodo, Figshare, Github, Kaggle, Grand Challenge, and the Papers with Code platforms. All the relevant search results that featured breast histopathological images were reviewed by the first reviewer (MT), and their associated metadata and documentation were examined to find other public datasets of breast H&E WSIs that were not included in the selected papers.
§ RESULTS
The 139 included papers used or reviewed one or more of 11 public datasets of breast H&E WSIs (Table <ref>). In addition, we found the Post-Nat-BRCA dataset manually in the Cancer Imaging Archive repository, which was not used in any of the selected articles. These 12 public datasets comprise 5153 breast H&E WSIs appropriate for machine learning use. Moreover, we found 28 datasets of breast H&E image tiles (S2_Appendix) and 31 private datasets of breast WSIs (S3_Appendix), 18 of which claimed to be accessible upon an agreement with the data owners. We did not attempt to obtain access to the private datasets because they were not within the scope of this systematic review.
Publicly available datasets of breast histological H&E WSIs.
Dataset | AKA | Source | Year | WSIs | Patients | Annotations | Clinical data | Scanner | Image size (pixels) | Size (GB) | Image format
ANHIR | - | Spain | 2019 | 5 | - | Coordinates of tumor area | - | Aperio AT2 | ≈65k×60k | 0.2 | JPG
BACH | ICIAR 2018 | Portugal | 2018 | 30 | - | 10 WSIs have coordinates of ROIs, labeled pixel-wise | - | Leica SCN400 | ≈65k×42k | 7 | SVS, TIFF
BRACS | - | Italy | 2020 | 547 | 189 | Labels for 6 different subtypes | - | Aperio AT2 | ≈100k×100k | 1100 | SVS
Camelyon16 | - | Netherlands | 2016 | 399 | 399 | ROI polygons, pN-stage labels | - | Pannoramic 250, NanoZoomer-XR | ≈100k×200k | 1160 | TIFF
Camelyon17 | - | Netherlands | 2017 | 1000 | 200 | ROI polygons, pN-stage labels | - | Pannoramic 250, NanoZoomer-XR, Philips IntelliSite | ≈100k×200k | 2950 | TIFF
CPTAC-BRCA | CPTAC | USA | 2021 | 642 | 134 | - | Yes | - | ≈90k×45k | 113 | SVS
DRYAD | - | USA | 2018 | 584 | - | ROI polygons, binary masks of invasive regions | - | - | ≈3k×3k | 4.6 | PNG
HER2 | Warwick | UK | 2016 | 86 | 86 | HER2 score, percentage of cells with complete membrane staining | - | NanoZoomer C9600 | ≈100k×80k | 20 | SVS
HEROHE | - | Portugal | 2020 | 500 | 360 | Binary labels of HER2+/- status | - | Pannoramic 1000 | ≈85k×65k | 820 | MRXS
Post-NAT-BRCA | - | Canada | 2021 | 96 | 54 | Cellularity and cell label | Yes | Aperio | ≈61k×35k | 43 | SVS
SLN-Breast | MSK | USA | 2021 | 130 | 78 | Binary labels of metastasis status | Yes | Aperio | ≈70k×40k | 53 | SVS
TCGA-BRCA | TCGA | USA | - | 1133 | 1062 | - | Yes | - | ≈50k×50k | 1148 | SVS
There are also two public datasets of breast histopathological images that acquired all or part of their images from other publicly available datasets: TUPAC16 <cit.>, with 821 WSIs, and the DRYAD dataset, with 195 WSIs from the TCGA-BRCA (derived datasets of image tiles can be found in S2_Appendix). Therefore, there is a risk of obtaining an overly optimistic performance estimate for trained models if derived datasets are used for validation. This is because the model has already seen some of the data during training, and using derived datasets for validation may lead to an overestimation of the model's ability to generalize to new, unseen data. However, such derived datasets may be published with extra information that is not provided in the original datasets. For example, TUPAC16 has 500 WSIs in the training set, all derived from the TCGA-BRCA, with corresponding tumor proliferation and molecular proliferation scores as ground truth that were not included in the original TCGA-BRCA dataset. The inclusion of such extra information would be highly advantageous in the development of models in breast computational pathology. Table <ref> shows details of the derived datasets of breast histopathology WSIs.
§.§ Datasets description
ANHIR dataset <cit.>: This dataset is from the Automatic Non-rigid Histological Image Registration (ANHIR) challenge, which was part of the IEEE International Symposium on Biomedical Imaging (ISBI) 2019. ANHIR contains whole-slide images of different tissue types, including breast. The breast WSIs are stained with H&E and IHC and scanned with a Leica Biosystems Aperio AT2 at 40× magnification and 0.253 µm/pixel resolution. Images are manually marked with landmarks that follow the standard ImageJ structure and coordinate frame.
BACH <cit.>: The BreAst Cancer Histology images dataset is from the challenge held as part of the International Conference on Image Analysis and Recognition (ICIAR 2018). The dataset includes H&E stained WSIs and patches. There are 400 patches with 2048×1536 resolution, image-wise labeled in four different classes, along with annotations produced by two medical experts. BACH consists of 30 WSIs, acquired by Leica SCN400 scanner in SVS format, out of which 10 WSIs have coordinates of benign, in situ carcinoma, and invasive carcinoma regions, labeled pixel-wise by two pathologists.
BRACS dataset <cit.>: BReAst Carcinoma Subtyping dataset is collected at the Istituto Nazionale dei Tumori, Italy using an Aperio AT2 scanner at 0.25 µm/pixel for 40× resolution. BRACS contains 547 WSIs of 189 patients, labeled in 7 classes. Benign tumors are labeled Normal, pathological benign, and usual ductal hyperplasia. Atypia tumors are labeled flat epithelial atypia and atypical ductal hyperplasia, and malignant tumors have ductal carcinoma in situ and invasive carcinoma labels. In addition, 4539 regions of interest acquired from 387 WSIs are labeled and provided in .png files.
The Camelyon 16 and 17 datasets <cit.>: The Cancer Metastases in Lymph Nodes Challenge 2016 consists of 399 WSIs of H&E stained lymph node sections collected in two centers in the Netherlands. Images are annotated with a binary label, and the ground truth for images containing metastases is available in WSI binary masks and plain text files in .xml format, providing the contour vertices of the metastases area. The dataset has 269 images in normal and metastasis classes for training and 130 WSIs for testing. The Camelyon 17 is the extended version of Camelyon 16 comprising 1399 unique H&E stained WSIs, with an additional 1000 images added to the previous dataset. These 1000 WSIs are collected equally at five medical centers in the Netherlands, each providing 200 images from 40 patients (five slides per patient). In Camelyon 17, images of 100 patients are provided for training, and images of 100 other patients for testing. This dataset has detailed contours of metastasis boundaries on a lesion level for 50 WSIs and pN-stage labels for the patients in training data.
CPTAC-BRCA dataset <cit.>: The Clinical Proteomic Tumor Analysis Consortium Breast Invasive Carcinoma Collection consists of 642 whole slide images of 134 patients, scanned at a 20× magnification. In addition to the slides, clinical, proteomics, and genomic data are available for researchers.
DRYAD dataset <cit.>: This dataset consists of 4 different cohorts: The Cancer Genome Atlas (TCGA), Cancer Institute of New Jersey (CINJ), Case Western Reserve University (CWRU), and Hospital at the University of Pennsylvania (HUP), contributing 195, 40, 110, and 239 down-sized WSIs of breast tissue, respectively, with binary masks for annotated invasive regions. In <cit.>, the authors did not mention any use of the DRYAD dataset, but they used the CINJ and HUP datasets that are included in DRYAD. Therefore, papers using these two datasets are included in our study, on the assumption that they used part of the DRYAD dataset, since, to our knowledge, the CINJ and HUP datasets are not separately available to the public in any database.
HER2-Warwick dataset <cit.>: The data is part of the HER2 scoring contest organized by the University of Warwick, the University of Nottingham, and the Academic–Industrial Collaboration for Digital Pathology consortium. The dataset consists of 86 H&E stained WSIs of invasive breast carcinomas acquired from 86 patients. IHC stained images and the ground truth data in the form of HER2 scores and the percentage of cells with complete membrane staining are also provided in this dataset.
HEROHE dataset <cit.>: This dataset is presented in the HER2 on hematoxylin and eosin (HEROHE) challenge, aimed at predicting HER2 status in breast cancer by using only H&E stained WSIs. This dataset entails 360 invasive breast cancer cases (144 HER2+ and 216 HER2-) for training and 150 cases (60 HER2+ and 90 HER2-) for testing. The WSIs in training and test sets are from different patients to maintain the independence between the two datasets. WSIs are scanned by 3D Histech Pannoramic 1000 in .mrxs format. Only a binary classification indicating positive or negative HER2 status is available for the HEROHE dataset and the location of the invasive carcinoma is not annotated.
Post-NAT-BRCA <cit.>: The Post neoadjuvant therapy (NAT) breast cancer dataset is from a cohort with residual invasive breast cancer following NAT. The dataset is composed of 96 WSIs from 54 patients. The slides were scanned by an Aperio scanner at 20× magnification at Sunnybrook Health Sciences Centre in Canada. Clinical data, including patients' age, ER, PR, and HER2 status, is also made available together with tumor cellularity and cell label annotations.
SLN-Breast <cit.>: The Breast Metastases to Axillary Lymph Nodes dataset consists of 130 H&E WSIs of axillary lymph nodes from 78 patients, among them 36 WSIs have metastatic breast carcinoma. Slides were scanned with a Lecia Aperio scanner at 20× magnification at Memorial Sloan Cancer Center in the US. Images are labeled in two classes, positive or negative breast cancer metastases.
TCGA-BRCA <cit.>: The Cancer Genome Atlas (TCGA) Breast Cancer study is an inclusive, experimental study of breast invasive carcinoma, coordinated and updated regularly by the US National Cancer Institute. This dataset entails 1133 H&E stained WSIs of breast cancer from 1062 patients. The TCGA-BRCA includes matched H&E WSIs, gene expression data, and clinical information.
§.§ Datasets descriptive statistics
117 <cit.> out of the 139 included papers used public datasets of breast H&E WSIs actively for different algorithm development purposes such as segmentation, classification, prognostic predictions, and color normalization of histology images. The remaining 22 articles <cit.> have reviewed or mentioned these datasets. Fig <ref>A shows the frequency of active use of breast histopathology WSI datasets. Almost half of the studies (47.4%) have used the TCGA-BRCA actively, which highlights the significance of this dataset in breast computational pathology and its value as a resource for future studies.
The TCGA-BRCA is the only dataset used for the development of prediction models using breast WSIs, which might be explained by the fact that TCGA has the largest cohort among these datasets and it comes with clinical and genomics data. TCGA-BRCA has the largest contribution in all the task categories except for the color normalization of the histopathological images where the Camelyon dataset prevails (Fig <ref>B).
The available ground truth is a limiting factor in using public datasets of breast H&E WSIs for computational pathology, especially when employing supervised algorithms for training the models. Only 33.6% of the images have annotations of regions of interest as ground truth, 23.8% have binary labels of breast cancer metastasis, 12.3% of images are provided by HER2 status labels or scores, 11.5% of WSIs have labels for breast cancer subtypes, and 34.4% of the images do not have any annotations, which restrains the use of these datasets for specific tasks like classification of breast cancer subtypes. Nonetheless, the available WSIs can be utilized for training self-supervised and semi-supervised models.
Camelyon 16 and 17, HEROHE, ICIAR 2018, HER2-Warwick, and ANHIR datasets are published in challenges and contests and comprise 39% of the publicly available breast H&E WSIs. The other six datasets: TCGA-BRCA, SLN-Breast, Post-Nat-BRCA, BRACS, DRYAD, and CPTAC-BRCA are collected and made available for research purposes (Fig <ref>A). One intriguing aspect of the identified datasets is the number of patients included, which varies considerably between studies and may have important implications for the generalizability and reliability of trained models. The TCGA-BRCA is the largest open-access dataset of breast H&E WSIs with 1133 images from 1062 patients (Fig <ref>B), which is extended regularly by adding new slides to the dataset.
§ DISCUSSION
The present study aimed to investigate the availability and suitability of open-access histopathology datasets for the development of deep learning algorithms in breast tissue analysis. We identified 12 publicly available datasets of breast H&E WSIs. It may appear that 5153 open-access breast WSIs are a substantial amount for developing deep learning algorithms. However, it is important to note that the publicly available datasets of breast H&E often lack detailed metadata descriptions (Fig <ref>). For instance, the number of patients is not reported in three datasets. Furthermore, the clinical data necessary for the development of prognostic tools are only available for four datasets: TCGA-BRCA, CPTAC-BRCA, Post-Nat-BRCA, and SLN-Breast, all of which were collected in the US. Since the last three datasets were published for the public in 2021, solely the TCGA-BRCA was utilized for the development of predictive models in the included articles. None of the included papers or web pages hosting the breast H&E WSI datasets provided an explicit statement of adherence to the FAIR principles, and the level of metadata and documentation provided by the dataset publishers varied. This variability in metadata and documentation could potentially affect the findability and reusability of the datasets, highlighting the need for improved adherence to the FAIR principles to enhance their accessibility and usability. Furthermore, inconsistencies in the data format and structure were found across the available datasets, which could limit the interoperability of the datasets.
The WSIs of the TCGA-BRCA and the Camelyon datasets are widely employed in breast computational pathology. This widespread usage has implications for the generalizability of machine learning algorithms. As the TCGA dataset is collected in the United States, it may not be representative of the breast cancer population in other regions or countries. In addition, TCGA-BRCA has a high proportion of white women compared to American Indian and Hispanic patients (Fig <ref>A) and a high proportion of patients with infiltrating duct carcinoma (70%). Additionally, this dataset includes a large percentage of samples from younger women, which may not be representative of the entire breast cancer population as the disease is more common in older women (Fig <ref>B).
The issue of biases is not exclusive to the TCGA-BRCA dataset; it has been observed in other studies as well. A study on the representativeness of the TCGA bladder cancer cohort <cit.> revealed biases in this dataset. The authors found that patients in the TCGA-BCa cohort demonstrate a higher-risk disease profile compared to the reference cystectomy series and, consequently, lower rates of overall survival and disease-specific survival. Another study <cit.> found that black Americans are not adequately represented in the majority of cancer cases within the TCGA datasets when compared to clinical and mortality datasets. They also stated that Asian Americans are overrepresented in the TCGA dataset for most cancers. These biases are significant factors that should be acknowledged during the validation of computer-assisted tools, playing a vital role in maintaining the robustness, applicability, and transferability of the models across different breast cancer patient cohorts.
Deep learning models can benefit from external datasets to improve their ability to generalize to new data and enhance their performance on a specific task. Using external data to validate trained algorithms can help ensure their generalizability, identifying overfitting and assessing their performance across different datasets. So, the real measure of a model's predictive ability lies in its performance on an independent dataset that was not employed in its initial development <cit.>, as the performance of these models often diminishes when applied to a new cohort beyond the original development population <cit.>. Typically, there is a lack of external validation in the algorithms developed for breast computational pathology. Among the 139 papers analyzed in our study, only 30 papers incorporated multiple datasets, and 14 of those integrated private datasets alongside public ones for model development. Moreover, just 16 studies used an external validation set (Table <ref>), implying that the performance of other developed models could be subject to overestimation. In 11 papers, the TCGA-BRCA dataset was combined with a single private dataset to develop the models. One paper employed the combination of the Camelyon 17 with a private dataset, and one paper integrated the TCGA-BRCA, Camelyon 16 and 17, and CPTAC-BRCA datasets with a private dataset for model development.
The potential benefits of using private datasets can make it well worth the effort to get access to such datasets. Incorporating private datasets in both the training and validation processes can improve the diversity, representativeness, and overall performance of machine learning models. Private datasets can also contain unique or hard-to-obtain data that might not be available through public sources. For example, WSIs acquired from patients who have taken specific treatments like immunotherapy and the response to this treatment does not exist in any publicly available datasets of breast WSIs.
One potential limitation in this systematic review is the possibility of missing relevant datasets due to the search strategy or selection criteria used. To mitigate this limitation, a comprehensive and well-defined search strategy was developed to ensure that all relevant datasets are captured. Nevertheless, the Post-Nat-BRCA dataset was not found in any of the papers identified in our selected literature databases, and we found it in the Cancer Imaging Archive during the manual screening of data repositories. Another limitation is the use of only English language search terms and inclusion criteria. This approach may have resulted in the exclusion of relevant datasets that were unavailable in English or primarily in other languages. Future systematic reviews of publicly available datasets of breast H&E WSIs could benefit from broader search strategies that include searches in multiple languages. This would increase the likelihood of identifying relevant datasets that are not primarily in English and thus reduce potential language-related bias. However, the feasibility of such an approach will depend on the availability of resources, expertise in multiple languages, and the research question being addressed. Additionally, changes in the availability of datasets over time may further restrict the relevance and applicability of systematic reviews of publicly available datasets. To address this limitation, the review should be conducted within a well-defined time frame, and the publication date of included studies should be clearly reported.
§ CONCLUSION
In summary, our study examined the availability and suitability of publicly available datasets of H&E stained histopathology WSIs that can be used in breast computational pathology. This data overview can save significant time and effort by providing a starting point for research without needing to set up a data collection study.
Despite the significant number of WSIs, we found limitations in metadata descriptions, inadequate clinical data, and inconsistencies in the format and structure of datasets. Additionally, the presence of biases within widely used datasets, such as TCGA-BRCA, raises concerns regarding the generalizability of models. Therefore, it is crucial to improve adherence to FAIR principles, enhance metadata descriptions, and address biases. Moreover, incorporating diverse datasets, including private and external sources, holds promise for improving model performance and generalizability.
§ SUPPORTING INFORMATION
*S1 Checklist.
The PRISMA 2020 checklist.
*S1 Appendix.
Full search strings used in data repositories.
*S2 Appendix.
List of breast H&E image tiles (patches) datasets.
*S3 Appendix.
List of private datasets of breast H&E whole slide images.
§
10
bib3
Liu L, Feng W, Chen C, Liu M, Qu Y, Yang J. Classification of breast cancer histology images using MSMV-PFENet. Sci Rep. 2022;12: 17447. doi:10.1038/s41598-022-22358-y
bib1
Yang J, Ju J, Guo L, Ji B, Shi S, Yang Z, et al. Prediction of HER2-positive breast cancer recurrence and metastasis risk from histopathological images and clinical information via multimodal deep learning. Computational and Structural Biotechnology Journal. 2022;20: 333–342. doi:10.1016/j.csbj.2021.12.028
bib2
Srikantamurthy MM, Rallabandi VPS, Dudekula DB, Natarajan S, Park J. Classification of benign and malignant subtypes of breast cancer histopathology imaging using hybrid CNN-LSTM based transfer learning. BMC Medical Imaging. 2023;23: 19. doi:10.1186/s12880-023-00964-0
bib4
Hinton, G., Srivastava, N., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Improving neural networks by preventing co-adaptation of feature detectors. ArXiv Preprint ArXiv:1207.0580. (2012)
bib10
Wilkinson MD, Dumontier M, Aalbersberg IJJ, Appleton G, Axton M, Baak A, et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016;3: 160018. doi:10.1038/sdata.2016.18
bib5
Gargeya R, Leng T. Automated Identification of Diabetic Retinopathy Using Deep Learning. Ophthalmology. 2017;124: 962–969. doi:10.1016/j.ophtha.2017.02.008
bib6
Ching T, Himmelstein DS, Beaulieu-Jones BK, Kalinin AA, Do BT, Way GP, et al. Opportunities and obstacles for deep learning in biology and medicine. J R Soc Interface. 2018;15: 20170387. doi:10.1098/rsif.2017.0387
bib7
Rajkomar A, Dean J, Kohane I. Machine Learning in Medicine. N Engl J Med. 2019;380: 1347–1358. doi:10.1056/NEJMra1814259
bib8
Shickel B, Tighe PJ, Bihorac A, Rashidi P. Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis. IEEE J Biomed Health Inform. 2018;22: 1589–1604. doi:10.1109/JBHI.2017.2767063
bib11
Simell BA, Törnwall OM, Hämäläinen I, Wichmann H-E, Anton G, Brennan P, et al. Transnational access to large prospective cohorts in Europe: Current trends and unmet needs. New Biotechnology. 2019;49: 98–103. doi:10.1016/j.nbt.2018.10.001
bib16
Hulsen T. An overview of publicly available patient-centered prostate cancer datasets. Transl Androl Urol. 2019;8: S64–S77. doi:10.21037/tau.2019.03.01
bib17
Wen D, Khan SM, Ji Xu A, Ibrahim H, Smith L, Caballero J, et al. Characteristics of publicly available skin cancer image datasets: a systematic review. Lancet Digit Health. 2022;4: e64–e74. doi:10.1016/S2589-7500(21)00252-1
bib18
Leung MKK, Delong A, Alipanahi B, Frey BJ. Machine Learning in Genomic Medicine: A Review of Computational Problems and Data Sets. Proceedings of the IEEE. 2016;104: 176–197. doi:10.1109/JPROC.2015.2494198
included10
Brancati N, Anniciello AM, Pati P, Riccio D, Scognamiglio G, Jaume G, et al. BRACS: A Dataset for BReAst Carcinoma Subtyping in H&E Histology Images. arXiv; 2021. doi:10.48550/arXiv.2111.04740
included117
Zeiser FA, da Costa CA, Roehe AV, Righi R da R, Marques NMC. Breast cancer intelligent analysis of histopathological data: A systematic review. Applied Soft Computing. 2021;113: 107886. doi:10.1016/j.asoc.2021.107886
included123
Duggento A, Conti A, Mauriello A, Guerrisi M, Toschi N. Deep computational pathology in breast cancer. Seminars in Cancer Biology. 2021;72: 226–237. doi:10.1016/j.semcancer.2020.08.006
included126
Hamidinekoo A, Denton E, Rampun A, Honnor K, Zwiggelaar R. Deep learning in mammography and breast histology, an overview and future trends. Medical Image Analysis. 2018;47: 45–67. doi:10.1016/j.media.2018.03.006
included130
Liew XY, Hameed N, Clos J. A Review of Computer-Aided Expert Systems for Breast Cancer Diagnosis. Cancers. 2021;13: 2764. doi:10.3390/cancers13112764
bib21
Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372: n71. doi:10.1136/bmj.n71
bib22
Booth A, Clarke M, Dooley G, Ghersi D, Moher D, Petticrew M, et al. The nuts and bolts of PROSPERO: an international prospective register of systematic reviews. Systematic Reviews. 2012;1: 2. doi:10.1186/2046-4053-1-2
TUPAC16
Tumor Proliferation Assessment Challenge - Grand Challenge. In: grand-challenge.org [Internet]. [cited 13 Feb 2023]. Available: https://tupac.grand-challenge.org/
ANHIR
Automatic Non-rigid Histological Image Registration (ANHIR); 2019 [cited 13 Feb 2022]. Database: Grand Challenge [Internet]. Available: https://anhir.grand-challenge.org/
ICIAR2018
ICIAR 2018 Grand Challenge on Breast Cancer Histology Images; 2018 [cited 13 Feb 2022]. Database: Grand Challenge [Internet]. Available from: https://iciar2018-challenge.grand-challenge.org/
BRACS
BRACS: BReAst Carcinoma Subtyping; 2020. [cited 13 Feb 2022]. Database: BRACS [Internet]. Available from: https://www.bracs.icar.cnr.it/
Camelyon16
CAMELYON16 - Grand Challenge. In: grand-challenge.org [Internet]. [cited 13 Feb 2023]. Available from: https://camelyon16.grand-challenge.org/
Camelyon17
CAMELYON17 - Grand Challenge. In: grand-challenge.org [Internet]. [cited 13 Feb 2023]. Available: https://camelyon17.grand-challenge.org/
CPTAC-BRCA
National Cancer Institute Clinical Proteomic Tumor Analysis Consortium (CPTAC). The Clinical Proteomic Tumor Analysis Consortium Breast Invasive Carcinoma Collection (CPTAC-BRCA). The Cancer Imaging Archive; 2020. doi:10.7937/TCIA.CAEM-YS80. Available: https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70227748
DRYAD
Cruz-Roa A, Gilmore H, Basavanhally A, Feldman M, Ganesan S, Shih N, et al. Data from: High-throughput adaptive sampling for whole-slide histopathology image analysis (HASHI) via convolutional neural networks: application to invasive breast cancer detection. Dryad; 2018. doi:10.5061/DRYAD.1G2NT41. Dataset available: https://datadryad.org/stash/dataset/doi:10.5061%2Fdryad.1g2nt41
included5
Cruz-Roa A, Basavanhally A, González F, Gilmore H, Feldman M, Ganesan S, et al. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks. Medical Imaging 2014: Digital Pathology. SPIE; 2014. p. 904103. doi:10.1117/12.2043872
included13
Celik Y, Talo M, Yildirim O, Karabatak M, Acharya UR. Automated invasive ductal carcinoma detection based using deep transfer learning with whole-slide images. Pattern Recognition Letters. 2020;133: 232–239. doi:10.1016/j.patrec.2020.03.011
included25
Ektefaie Y, Yuan W, Dillon DA, Lin NU, Golden JA, Kohane IS, et al. Integrative multiomics-histopathology analysis for breast cancer classification. NPJ Breast Cancer. 2021;7: 147. doi:10.1038/s41523-021-00357-y
HER2
Her2 Scoring Contest. [cited 13 Feb 2023]. Available from: https://warwick.ac.uk/fac/cross_fac/tia/data/her2contest/
HEROHE
HEROHE - Grand Challenge. In: grand-challenge.org [Internet]. [cited 13 Feb 2023]. Available from: https://grand-challenge.org/forums/forum/herohe-415/
Post-Nat-BRCA
Martel A, Nofech-Mozes S, Salama S, Akbar S, Peikari M. Assessment of Residual Breast Cancer Cellularity after Neoadjuvant Chemotherapy using Digital Pathology. The Cancer Imaging Archive; 2019. doi:10.7937/TCIA.2019.4YIBTJNO. Available from: https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=52758117
SLN-Breast
Campanella G, Hanna MG, Brogi E, Fuchs TJ. Breast Metastases to Axillary Lymph Nodes. The Cancer Imaging Archive; 2019. doi:10.7937/TCIA.2019.3XBN2JCC. Available from: https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=52763339
TCGA-BRCA
The Cancer Genome Atlas (TCGA). In: Genomic Data Commons Data Portal (GDC) [Internet]. [cited 13 Feb 2023]. Available: https://portal.gdc.cancer.gov/projects/TCGA-BRCA
included1
Ahmed S, Tariq M, Naveed H. PMNet: A probability map based scaled network for breast cancer diagnosis. Computerized Medical Imaging and Graphics. 2021;89: 101863. doi:10.1016/j.compmedimag.2021.101863
included3Amgad M, Elfandy H, Hussein H, Atteya LA, Elsebaie MAT, Abo Elnasr LS, et al. Structured crowdsourcing enables convolutional segmentation of histology images. Murphy R, editor. Bioinformatics. 2019;35: 3461–3467. doi:10.1093/bioinformatics/btz083
included4Anand D, Kurian NC, Dhage S, Kumar N, Rane S, Gann PH, et al. Deep Learning to Estimate Human Epidermal Growth Factor Receptor 2 Status from Hematoxylin and Eosin-Stained Breast Tissue Images. J Pathol Inform. 2020;11: 19. doi:10.4103/jpi.jpi_10_20
included6Aresta G, Araújo T, Kwok S, Chennamsetty SS, Safwan M, Alex V, et al. BACH: Grand challenge on breast cancer histology images. Medical Image Analysis. 2019;56: 122–139. doi:10.1016/j.media.2019.05.010
included7Bándi P, Geessink O, Manson Q, Van Dijk M, Balkenhol M, Hermsen M, et al. From Detection of Individual Metastases to Classification of Lymph Node Status at the Patient Level: The CAMELYON17 Challenge. IEEE Transactions on Medical Imaging. 2019;38: 550–560. doi:10.1109/TMI.2018.2867350
included8Ehteshami Bejnordi B, Veta M, Johannes van Diest P, van Ginneken B, Karssemeijer N, Litjens G, et al. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer. JAMA. 2017;318: 2199. doi:10.1001/jama.2017.14585
included9Bokor P, Hudec L, Fabian O, Benesova W. Weighted multi-level deep learning analysis and framework for processing breast cancer WSIs. arXiv; 2021. doi:10.48550/arXiv.2106.14708
included11Campanella G, Hanna MG, Geneslaw L, Miraflor A, Werneck Krauss Silva V, Busam KJ, et al. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat Med. 2019;25: 1301–1309. doi:10.1038/s41591-019-0508-1
included12Çelik G, Talu MF. Resizing and cleaning of histopathological images using generative adversarial networks. Physica A: Statistical Mechanics and its Applications. 2020;554: 122652. doi:10.1016/j.physa.2019.122652
included14Chaudhury S, Shelke N, Sau K, Prasanalakshmi B, Shabaz M. A Novel Approach to Classifying Breast Cancer Histopathology Biopsy Images Using Bilateral Knowledge Distillation and Label Smoothing Regularization. Computational and Mathematical Methods in Medicine. 2021;2021: e4019358. doi:10.1155/2021/4019358
included15Chen J, Jiao J, He S, Han G, Qin J. Few-Shot Breast Cancer Metastases Classification via Unsupervised Cell Ranking. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2021;18: 1914–1923. doi:10.1109/TCBB.2019.2960019
|
http://arxiv.org/abs/2306.10973v2
|
20230619143436
|
GUT origins of general electroweak multiplets and their oblique parameters
|
[
"Liang Chen",
"Ta-Wei Chan",
"Thomas W. Kephart",
"Wai Yee Keung",
"Tzu-Chiang Yuan"
] |
hep-ph
|
[
"hep-ph"
] | |
http://arxiv.org/abs/2306.06877v1
|
20230612053009
|
Boosting Breast Ultrasound Video Classification by the Guidance of Keyframe Feature Centers
|
[
"AnLan Sun",
"Zhao Zhang",
"Meng Lei",
"Yuting Dai",
"Dong Wang",
"Liwei Wang"
] |
cs.CV
|
[
"cs.CV"
] |
Yizhun Medical AI Co., Ltd
{anlan.sun, yuting.dai}@yizhun-ai.com Center for Data Science, Peking University
{zhangzh, leimeng}@stu.pku.edu.cn National Key Laboratory of General Artificial Intelligence,
School of Intelligence Science and Technology, Peking University
{wangdongcis, wanglw}@pku.edu.cn
Boosting Breast Ultrasound Video Classification by the Guidance of Keyframe Feature Centers
Anlan Sun^1 (equal contribution), Zhao Zhang^⋆2, Meng Lei^2, Yuting Dai^1, Dong Wang^3, Liwei Wang^2,3
July 31, 2023
===============================================================================================
Breast ultrasound videos contain richer information than ultrasound images, so it is more meaningful to develop video models for this diagnosis task. However, collecting ultrasound video datasets is much harder. In this paper, we explore the feasibility of enhancing the performance of ultrasound video classification using a static image dataset. To this end, we propose KGA-Net and a coherence loss. KGA-Net adopts both video clips and static images to train the network. The coherence loss uses the feature centers generated by the static images to guide the frame attention in the video model. Our KGA-Net boosts the performance on the public BUSV dataset by a large margin. The visualization results of frame attention demonstrate the explainability of our method. The code and model weights of our method will be made publicly available.
§ INTRODUCTION
Breast cancer is a life-threatening disease that has surpassed lung cancer as the leading cancer in some countries and regions <cit.>. Breast ultrasound is the primary screening method for diagnosing breast cancer, and accurately distinguishing between malignant and benign breast lesions is crucial. This task is also an essential component of computer-aided diagnosis.
Since each frame in an ultrasound video can only capture a specific view of a lesion, it is essential to aggregate information from the entire video to perform accurate automatic lesion diagnosis. Therefore, in this study, we focus on the classification of breast ultrasound videos for detecting malignant and benign breast lesions.
Despite the fact that ultrasound videos contain more information than static images, most previous studies have focused on static image classification <cit.>. One major difficulty in using ultrasound videos for diagnosis lies in collecting video data with gold-standard pathology results. First, during routine ultrasound examinations, sonographers usually record only keyframe images rather than entire videos. Second, for prospectively collected videos, additional effort must be made to track the corresponding pathological results. As a result, while there are many breast ultrasound image datasets <cit.>, video datasets are scarce. Currently, there is only one breast ultrasound video dataset <cit.> available, which is relatively small, containing only 188 videos.
Given the difficulties in collecting ultrasound video data, we investigate the feasibility of enhancing the performance of ultrasound video classification using a static image dataset. To achieve this, we first analyze the relationship between ultrasound videos and images. The images in the ultrasound dataset are keyframes of a lesion that exhibit the clearest appearance and most typical symptoms, making them more discriminative for diagnosis. Although ultrasound videos provide more information, the abundance of frames may introduce redundancy or vagueness that could disrupt classification.
From the perspective of feature distribution, as shown in Fig. <ref>, the feature points of static images are more concentrated, whereas the features of video frames are sometimes far from the class centers. Frames far from the centers are harder to classify.
Therefore, guiding the video model to pay more attention to important frames close to the class center, with the assistance of static keyframe images, is a promising approach. This approach also aligns with how ultrasound physicians diagnose: it automatically evaluates the importance of each frame and bases its diagnosis on the information in keyframes. Additionally, our method provides interpretability through keyframes.
In this paper, we propose a novel Keyframe Guided Attention Network (KGA-Net) to boost ultrasound video classification. Our approach leverages both image (keyframe) and video datasets to train the network. To classify videos, we use frame attention to predict feature weights for all frames and aggregate them to make the final classification. The feature weights determine the contribution of each frame to the final diagnosis.
During training, we construct category feature centers for malignant and benign examples respectively using center loss <cit.> on static image inputs and use the centers to guide the training of video frame attention.
Specifically, we propose a coherence loss, which encourages frames close to the centers to have high attention weights and decreases the weights of frames far from the centers.
Because the feature centers are generated from the larger-scale image dataset, they are more accurate and discriminative, which guides the video frame attention to focus on important frames and ultimately leads to better video classification.
Our experimental results on the public BUSV dataset <cit.> show that our KGA-Net significantly outperforms other video classification models by using an external ultrasound image dataset.
Additionally, we visualize the attention values guided by the coherence loss. Frames with clear diagnostic characteristics are given higher attention values, which makes our method more explainable and provides a new perspective for selecting keyframes from videos.
In conclusion, our contributions are as follows:
* We analyze the relationship between ultrasound video data and image data, and propose the coherence loss to use image feature centers to guide the training of frame attention.
* We propose KGA-Net, which adopts a static image dataset to boost the performance of ultrasound video classification. KGA-Net significantly outperforms other video baselines on the BUSV dataset.
* The qualitative analysis of the frame attention verifies the explainability of our method and provides a new perspective for selecting keyframes.
§ RELATED WORKS
Breast Ultrasound Classification. Breast ultrasound (BUS) plays an important supporting role in the diagnosis of breast-related diseases. Recent research demonstrated the potential of deep learning for breast lesion classification tasks <cit.>. <cit.> design ensemble methods to integrate the features of multiple models to obtain higher accuracy. <cit.> utilize multi-task learning to improve the model performance. However, all of them are based on image datasets, such as BUSI <cit.>, while few works focus on the video modality. <cit.> designed a pre-training model based on contrastive learning for ultrasound video classification. <cit.> develop a keyframe extraction model for ultrasound videos and utilized the extracted keyframes to perform various classification tasks. However, these methods rely on keyframe supervision, which limits their applicability. Fortunately, the recent publicly available dataset BUSV <cit.> has made the research on the task of BUS video-based classification possible. In this paper, we build our model based on this dataset.
Video recognition based on neural networks. Traditional methods are based on two-stream networks <cit.>. Since I3D <cit.> was proposed, 3D CNNs have dominated video understanding for a long time. <cit.> decompose 3D convolution in different ways to reduce computational complexity without losing performance. <cit.> designed two branches to focus on temporal information and spatial features, respectively. However, 3D CNNs have a limited receptive field and thus struggle to capture long-range dependencies. Vision Transformers <cit.> have become popular due to their excellent capability of aggregating spatio-temporal information. To reduce the computational complexity brought by global attention, MViT <cit.> used a hierarchical structure that reduces spatial resolution, and Video Swin <cit.> introduced 3D shifted window attention. Our proposed KGA-Net is a simple framework that aggregates multi-frame features based on a frame attention module.
§ METHODOLOGY
As shown in Fig. <ref>, our KGA-Net takes video inputs and static image inputs simultaneously to train the network. The coherence loss is proposed to guide the frame attention using the feature centers generated by the images.
We will then elaborate on each component in the following sections.
§.§ Video and Image Classification Network
The video classification network is illustrated in Fig. <ref> (a). The model is composed of a 2D CNN backbone, a frame attention module, and a classification head.
An input video clip V composed of N frames is first processed by the backbone network to obtain the frame feature vectors { F_i }_i=1^N. Then, the frame attention module predicts an attention weight for each frame using an FC layer followed by a sigmoid, and the features are aggregated by these weights to form an integrated feature vector. Formally,
w_i = Sigmoid ( FC ( F_i ) )
where w_i denotes the weight for the i-th frame and FC is the fully-connected layer. The features are then aggregated as F_V = ∑_i=1^N w_i · F_i. Finally, the classification head is applied to F_V to obtain the lesion classification result. To train the model, the cross-entropy loss (CE Loss) is applied to the classification prediction of the video.
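To make this concrete, the following is a minimal PyTorch sketch of the frame attention and weighted aggregation described above; the module name, feature dimension, and toy inputs are illustrative assumptions of ours rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class FrameAttentionHead(nn.Module):
    """Predicts a sigmoid-gated weight per frame, aggregates the weighted
    frame features into one clip-level feature, and classifies the lesion."""

    def __init__(self, feat_dim: int = 2048, num_classes: int = 2):
        super().__init__()
        self.attn_fc = nn.Linear(feat_dim, 1)          # one scalar weight per frame
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, frame_feats: torch.Tensor):
        # frame_feats: (N, feat_dim) backbone features of the N frames of a clip
        w = torch.sigmoid(self.attn_fc(frame_feats))   # (N, 1), each w_i in (0, 1)
        clip_feat = (w * frame_feats).sum(dim=0)       # F_V = sum_i w_i * F_i
        logits = self.classifier(clip_feat)            # lesion classification logits
        return logits, w.squeeze(-1)

# Toy usage with random ResNet-50-style features for a 16-frame clip
feats = torch.randn(16, 2048)
logits, weights = FrameAttentionHead()(feats)
```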
The image classification network is used to assist in training the video model. We use the same 2D CNN backbone as in the video classification network, and the model weights are shared between the two backbones for better generalization. To promote the formation of feature centers, we apply the center loss <cit.> to the image model in addition to the cross-entropy loss. Moreover, a frame-level cross-entropy loss is also applied to the video frames to facilitate training.
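For illustration, the center bookkeeping on the image branch could be sketched as follows; this is a simplified moving-average variant of the center loss, not the authors' implementation, and all names are ours.

```python
import torch

class ClassCenters:
    """Keeps one feature center per class (benign / malignant) and returns a
    center loss that pulls image features toward their class center."""

    def __init__(self, num_classes: int = 2, feat_dim: int = 2048, alpha: float = 0.5):
        self.centers = torch.zeros(num_classes, feat_dim)
        self.alpha = alpha                               # center update rate

    def center_loss(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # feats: (B, feat_dim) image features; labels: (B,) class indices
        diff = feats - self.centers[labels]
        return diff.pow(2).sum(dim=1).mean()

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        # Move each class center toward the batch mean feature of that class
        for c in range(self.centers.size(0)):
            mask = labels == c
            if mask.any():
                self.centers[c] += self.alpha * (feats[mask].mean(dim=0) - self.centers[c])
```

The resulting centers (one for malignant, one for benign) correspond to the 𝒞^mal and 𝒞^benign used in the next subsection.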
§.§ Training with Coherence Loss
In this section, we introduce the coherence loss, which guides the frame attention with the assistance of the category feature centers. We use the same method as center loss <cit.> to obtain the feature centers for malignant and benign lesions, denoted as 𝒞^mal and 𝒞^benign, respectively.
The distances of frame features and the feature centers can measure the quality of the frames. The frame features close to the centers are more discriminative for the classification task. Therefore, we use these distances to guide the generation of frame attention. Specifically, we push the frames close to the centers to have higher attention weights and decrease the weights far from the centers. To do this, for each video frame with feature F_i, we first calculate the feature distance from its corresponding class center. Formally,
d_i = ‖ F_i - 𝒞^Y ‖_2,
where Y ∈{mal, benign} is the label of the video V and d_i is the computed distance of frame i.
Afterward, we apply the coherence loss to the attention weights w=[ w_1, w_2, ..., w_N ]^⊺ to make their distribution similar to that of the feature distances d=[ d_1, d_2, ..., d_N ]^⊺. To supervise this distribution, the coherence loss is defined as the L2 distance between the Gram matrices of these two vectors:
L_Coh = ‖ Gram_w - Gram_d ‖_2,
where
Gram_w = (1 - w) (1 - w)^⊺ / ‖ 1 - w ‖_2^2
is the Gram matrix of the normalized attention weights, and
Gram_d = d d^⊺ / ‖ d ‖_2^2
is the Gram matrix of the normalized feature distances. Note that lower distances correspond to stronger attention; hence we use 1 - w, the complement of the attention weights, to compute Gram_w.
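A minimal PyTorch sketch of this coherence loss, assuming the frame features, their attention weights, and the class center of the video's label are already available, might read as follows; the small epsilon is our addition for numerical stability.

```python
import torch

def coherence_loss(frame_feats: torch.Tensor,
                   attn_weights: torch.Tensor,
                   class_center: torch.Tensor) -> torch.Tensor:
    """L2 distance between the Gram matrix of (1 - w) and the Gram matrix
    of the frame-to-center feature distances."""
    d = torch.norm(frame_feats - class_center, dim=1)     # d_i = ||F_i - C^Y||_2, shape (N,)
    inv_w = 1.0 - attn_weights                            # small distance should mean large attention

    eps = 1e-8
    gram_w = torch.outer(inv_w, inv_w) / (inv_w.pow(2).sum() + eps)
    gram_d = torch.outer(d, d) / (d.pow(2).sum() + eps)
    return torch.norm(gram_w - gram_d)                    # Frobenius norm of the difference

# Toy usage: 16 frames with 2048-dimensional features
loss = coherence_loss(torch.randn(16, 2048), torch.rand(16), torch.randn(2048))
```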
§.§ Total Training Loss
To summarize, the total training loss of our KGA-Net is
L_total = L_CE^V + L_CE^I + L_Center + λ· L_Coh.
L_CE^V and L_CE^I denote the cross-entropy losses for video classification and for image and frame classification, respectively. L_Center is the center loss, and λ is the weight of the coherence loss. Empirically, we set λ=1 in our experiments.
During inference, the video classification network alone is used to classify video data.
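As a rough illustration of how the terms could be combined in a training step, one possible sketch is given below; how the image- and frame-level terms are batched is our assumption, since the text only states that both are covered by L_CE^I.

```python
import torch.nn.functional as F

def total_loss(video_logits, video_labels,      # clip-level predictions and labels
               image_logits, image_labels,      # static-image predictions and labels
               frame_logits, frame_labels,      # frame-level predictions and labels
               l_center, l_coh, lam: float = 1.0):
    """L_total = L_CE^V + L_CE^I + L_Center + lam * L_Coh, with lam = 1 as in the text."""
    l_ce_v = F.cross_entropy(video_logits, video_labels)
    # Image- and frame-level cross entropies are grouped together as L_CE^I here.
    l_ce_i = F.cross_entropy(image_logits, image_labels) + F.cross_entropy(frame_logits, frame_labels)
    return l_ce_v + l_ce_i + l_center + lam * l_coh
```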
§ EXPERIMENTS
§.§ Implementation Details
Datasets. We use the public BUSV dataset <cit.> for video classification and the BUSI dataset <cit.> as the image dataset. BUSV consists of 113 malignant videos and 75 benign videos. BUSI contains 445 images of benign lesions and 210 images of malignant lesions. For the BUSV dataset, we use the official data split in <cit.>. All images of the BUSI dataset are adopted to train our .
Model Details. ResNet-50 <cit.> pretrained on ImageNet <cit.> is used as the backbone. We use the SGD optimizer with an initial learning rate of 0.005, which is reduced by 10× at the 4,000th and 6,000th iterations. The total number of training iterations is 8,000, and learning rate warmup is used in the first 1,000 iterations.
For each batch, video clips and static images are both sampled and sent to the network. We use a total batch size of 16, and video clips and images are sampled with a 1:1 probability. We implement the model in PyTorch and train it on NVIDIA Titan RTX GPUs.
During inference, we use the video classification network alone. To satisfy the fixed video length requirement of MViT <cit.>, we sample up to 128 frames from each video to form a video clip and predict its classification result; the same clips are used for all models in the experiments.
§.§ Comparison with Video Models
In this section, we compare our KGA-Net with other competitive video classification models. Comparing with ultrasound-video-based work is difficult: <cit.> is not accompanied by open-source code and relies on private datasets, making comparison exceedingly challenging, while <cit.> relies on a private dataset with keyframe annotations for supervised training and its released code does not include keyframe detection, which makes direct comparison impossible. Since existing ultrasound video classification methods cannot be compared directly, we compare our method with other strong video baselines developed for natural videos. The CNN-based models include I3D <cit.>, SlowFast <cit.>, R(2+1)D <cit.> and CSN <cit.>. The recently popular transformer-based model MViT <cit.> is also adopted. For a fair comparison, we use both the video and image data to train these models, with the images regarded as static videos.
During evaluation, we report the metrics on the test set of BUSV.
As shown in Table <ref>, by leveraging the guidance of the image dataset, our KGA-Net significantly surpasses all other models on all of the metrics. The video classification model of our KGA-Net is composed of a standard 2D ResNet-50 and a light frame attention module, while the baseline models use network structures carefully designed for video analysis. Therefore, the success of our KGA-Net lies in the correct usage of the image guidance: the feature centers formed from the image dataset, which is larger and has clearer lesion appearances, effectively improve the accuracy of the frame attention and hence boost the video classification performance.
§.§ Ablation Study
In this section, we ablate the contribution of each key design in our KGA-Net by removing these key components from the whole network. The results are shown in Table <ref>: the full KGA-Net is shown in the last row, while the ablated variants are shown in the first three rows. We use the same training schedule for all of the experiments.
Image guidance is the main purpose of our method. To show the effect of using the image dataset, we train KGA-Net using the BUSV dataset alone in the first row of Table <ref>; without the image dataset, the feature centers are generated from the video frames. As a result, the performance drops significantly due to the decrease in dataset scale. This also shows that the feature centers generated from the image dataset are more discriminative than those generated from the video dataset, not only because BUSI contains more lesions than BUSV, but also because the images in BUSI are all keyframes that contain typical lesion characteristics.
Frame attention and coherence loss are two essential modules of our KGA-Net. We train a KGA-Net without the coherence loss in the third row of Table <ref>, and in the second row we further replace the frame attention module with simple averaging of the video frame features. Both of these modules contribute to the overall performance in terms of AUC and ACC. It is worth noting that the two models without the coherence loss obtain very low sensitivity and high specificity, which means their predictions are imbalanced and tend toward benign. This is because clear malignant appearances usually exist in only a limited number of frames of a malignant video; without our coherence loss or frame attention, it is difficult for the model to focus on the typical frames that possess malignant features. This phenomenon demonstrates the effectiveness of our KGA-Net in preventing false negatives in diagnosis.
§.§ Visual Analysis
In Fig. <ref>, we illustrate video frames with their corresponding frame attention weights predicted by KGA-Net. Overall, the frames with high attention weights do have clear image appearances for diagnosis. For example, the first three frames in Fig. <ref>(b) clearly demonstrate edge micro-lobulation and irregular shapes, which lead to a malignant judgment. Furthermore, we plot the relationship between the predicted attention values and the feature distances to the centers. As shown in Fig. <ref>(e), these two variables are linearly related, which indicates that the attention weights are effectively guided by the feature distances.
The qualitative analysis demonstrates the interpretability of our method, which will benefit clinical usage. Moreover, the attention weights reveal the importance of each frame for lesion diagnosis and can therefore provide a new perspective for the keyframe extraction task in ultrasound videos.
§ CONCLUSION
We propose KGA-Net, a novel video classification model for breast ultrasound diagnosis. KGA-Net takes both video data and image data as input to train the network, and we propose the coherence loss to guide the training of the video model using the feature centers of the images. Our method significantly exceeds the performance of other competitive video baselines, and the visualization of the attention weights validates the effectiveness and interpretability of our KGA-Net.
10
busi
Al-Dhabyani, W., Gomaa, M., Khaled, H., Fahmy, A.: Dataset of breast ultrasound
images. Data in brief 28, 104863 (2020)
ori_9
Byra, M.: Breast mass classification with transfer learning based on scaling of
deep representations. Biomedical Signal Processing and Control 69,
102828 (2021)
i3d
Carreira, J., Zisserman, A.: Quo vadis, action recognition? a new model and the
kinetics dataset. In: proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition. pp. 6299–6308 (2017)
deng2009imagenet
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A
large-scale hierarchical image database. In: 2009 IEEE conference on computer
vision and pattern recognition. pp. 248–255. Ieee (2009)
vit
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X.,
Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.:
An image is worth 16x16 words: Transformers for image recognition at scale.
arXiv preprint arXiv:2010.11929 (2020)
ori_5
Eroğlu, Y., Yildirim, M., Çinar, A.: Convolutional neural networks
based classification of breast ultrasonography images by hybrid method with
respect to benign, malignant, and normal using mrmr. Computers in biology and
medicine 133, 104407 (2021)
mvit
Fan, H., Xiong, B., Mangalam, K., Li, Y., Yan, Z., Malik, J., Feichtenhofer,
C.: Multiscale vision transformers. In: Proceedings of the IEEE/CVF
International Conference on Computer Vision. pp. 6824–6835 (2021)
slowfast
Feichtenhofer, C., Fan, H., Malik, J., He, K.: Slowfast networks for video
recognition. In: Proceedings of the IEEE/CVF international conference on
computer vision. pp. 6202–6211 (2019)
st_resnet
Feichtenhofer, C., Pinz, A., Wildes, R.: Spatiotemporal residual networks for
video action recognition. In: Advances in Neural Information Processing
Systems (NIPS). pp. 3468–3476 (2016)
twostream
Feichtenhofer, C., Pinz, A., Zisserman, A.: Convolutional two-stream network
fusion for video action recognition. In: Conference on Computer Vision and
Pattern Recognition (CVPR) (2016)
ori_7
Gheflati, B., Rivaz, H.: Vision transformers for classification of breast
ultrasound images. In: 2022 44th Annual International Conference of the IEEE
Engineering in Medicine & Biology Society (EMBC). pp. 480–483. IEEE (2022)
he2016deep
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image
recognition. In: Proceedings of the IEEE conference on computer vision and
pattern recognition. pp. 770–778 (2016)
bus-video2
Huang, R., Ying, Q., Lin, Z., Zheng, Z., Tan, L., Tang, G., Zhang, Q., Luo, M.,
Yi, X., Liu, P., et al.: Extracting keyframes of breast ultrasound video
using deep reinforcement learning. Medical Image Analysis 80,
102490 (2022)
bus-video1
Lin, Z., Huang, R., Ni, D., Wu, J., Luo, B.: Masked video modeling with
correlation-aware contrastive learning for breast cancer diagnosis in
ultrasound. In: Resource-Efficient Medical Image Analysis: First MICCAI
Workshop, REMIA 2022, Singapore, September 22, 2022, Proceedings. pp.
105–114. Springer (2022)
busv
Lin, Z., Lin, J., Zhu, L., Fu, H., Qin, J., Wang, L.: A new dataset and a
baseline model for breast lesion detection in ultrasound videos. In: Medical
Image Computing and Computer Assisted Intervention–MICCAI 2022: 25th
International Conference, Singapore, September 18–22, 2022, Proceedings,
Part III. pp. 614–623. Springer (2022)
swin
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin
transformer: Hierarchical vision transformer using shifted windows. In:
Proceedings of the IEEE/CVF international conference on computer vision. pp.
10012–10022 (2021)
videoswin
Liu, Z., Ning, J., Cao, Y., Wei, Y., Zhang, Z., Lin, S., Hu, H.: Video swin
transformer. In: Proceedings of the IEEE/CVF conference on computer vision
and pattern recognition. pp. 3202–3211 (2022)
ori_4
Moon, W.K., Lee, Y.W., Ke, H.H., Lee, S.H., Huang, C.S., Chang, R.F.:
Computer-aided diagnosis of breast ultrasound images using ensemble learning
from convolutional neural networks. Computer methods and programs in
biomedicine 190, 105361 (2020)
ori_12
Podda, A.S., Balia, R., Barra, S., Carta, S., Fenu, G., Piano, L.:
Fully-automated deep learning pipeline for segmentation and classification of
breast ultrasound images. Journal of Computational Science 63,
101816 (2022)
statistic
Siegel, R.L., Miller, K.D., Fedewa, S.A., Ahnen, D.J., Meester, R.G., Barzi,
A., Jemal, A.: Colorectal cancer statistics, 2017. CA: a cancer journal for
clinicians 67(3), 177–193 (2017)
csn
Tran, D., Wang, H., Torresani, L., Feiszli, M.: Video classification with
channel-separated convolutional networks. In: Proceedings of the IEEE/CVF
International Conference on Computer Vision. pp. 5552–5561 (2019)
r2p1d
Tran, D., Wang, H., Torresani, L., Ray, J., LeCun, Y., Paluri, M.: A closer
look at spatiotemporal convolutions for action recognition. In: Proceedings
of the IEEE conference on Computer Vision and Pattern Recognition. pp.
6450–6459 (2018)
ori_10
Wang, J., Zheng, Y., Ma, J., Li, X., Wang, C., Gee, J., Wang, H., Huang, W.:
Information bottleneck-based interpretable multitask network for breast
cancer classification and segmentation. Medical Image Analysis 83,
102687 (2023)
tsn
Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., Van Gool, L.:
Temporal segment networks: Towards good practices for deep action
recognition. In: ECCV (2016)
ori_13
Wang, Y., Li, Z., Cui, X., Zhang, L., Luo, X., Yang, M., Chang, S.: Key-frame
guided network for thyroid nodule recognition using ultrasound videos. In:
Medical Image Computing and Computer Assisted Intervention–MICCAI 2022: 25th
International Conference, Singapore, September 18–22, 2022, Proceedings,
Part IV. pp. 238–247. Springer (2022)
ori_14
Wen, Y., Zhang, K., Li, Z., Qiao, Y.: A discriminative feature learning
approach for deep face recognition. In: Computer Vision–ECCV 2016: 14th
European Conference, Amsterdam, The Netherlands, October 11–14, 2016,
Proceedings, Part VII 14. pp. 499–515. Springer (2016)
ori_11
Zhang, G., Zhao, K., Hong, Y., Qiu, X., Zhang, K., Wei, B.: Sha-mtl: soft and
hard attention multi-task learning for automated breast cancer ultrasound
image segmentation and classification. International Journal of Computer
Assisted Radiology and Surgery 16, 1719–1725 (2021)
busis
Zhang, Y., Xian, M., Cheng, H.D., Shareef, B., Ding, J., Xu, F., Huang, K.,
Zhang, B., Ning, C., Wang, Y.: Busis: a benchmark for breast ultrasound image
segmentation. In: Healthcare. vol. 10, p. 729. MDPI (2022)
|
http://arxiv.org/abs/2306.05702v1
|
20230609064607
|
Variable screening using factor analysis for high-dimensional data with multicollinearity
|
[
"Shuntaro Tanaka",
"Hidetoshi Matsui"
] |
stat.ME
|
[
"stat.ME",
"stat.CO",
"62J05, 62J07",
"G.3"
] |
Variable screening using factor analysis for high-dimensional data with multicollinearity
Shuntaro Tanaka^a,b
CONTACT Shuntaro Tanaka. Email: [email protected]
and
Hidetoshi Matsui^c
CONTACT Hidetoshi Matsui. Email: [email protected]
^aThe Japan Research Institute, Ltd., Japan;
^bGraduate School of Data Science, Shiga University, 1-1-1 Banba, Hikone, Shiga 522-8522, Japan;
^cFaculty of Data Science, Shiga University, 1-1-1 Banba, Hikone, Shiga 522-8522, Japan;
July 31, 2023
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Screening methods are useful tools for variable selection in regression analysis when the number of predictors is much larger than the sample size.
Factor analysis is used to eliminate multicollinearity among predictors, which improves the variable selection performance.
We propose a new method, called Truncated Preconditioned Profiled Independence Screening (TPPIS), that better selects the number of factors to eliminate multicollinearity.
The proposed method improves the variable selection performance by truncating unnecessary parts from the information obtained by factor analysis.
We confirmed the superior performance of the proposed method in variable selection through analysis using simulation data and real datasets.
Key Words: Variable selection, Screening, High dimensional data,
Multicollinearity, Factor analysis
§ INTRODUCTION
Recent developments in the field of communication technology have generated data in a variety of fields, including finance, medicine, and agriculture.
Appropriate analysis of such data enables us to reveal the relationships inherent in the complex phenomena.
Regression analysis is one of the most widely used statistical methods to do this.
For example, if we want to understand the regularity of the sales of a product, we set the sales as the response and the product attributes (price, color, size, etc.) as the predictors.
To understand the correct relationship between the predictors and the response, it is necessary to select and analyze important variables from the large amount of data that appear to be strongly related to a given response.
Variable selection is used in several fields.
In finance, variables related to corporate accounting data are selected to construct a statistical model that predicts the risk of corporate bankruptcy <cit.>.
Another example is the selection of variables that relate to data on macroeconomic indicators to estimate volatility, which is used to select which company to invest in and to make decisions about the timing of investments <cit.>.
Variable selection is also used in clinical models that predict possible future diseases <cit.> and in near-infrared spectroscopy analysis to measure food compositions <cit.>.
It is difficult to apply the classical variable selection techniques such as stepwise regression to high-dimensional data.
Methods using L_1-type regularization also fail to select variables for ultra-high dimensional data.
More recently, Sure Independence Screening (SIS) was proposed to greatly reduce the dimension of the predictors and select important variables <cit.>.
SIS selects predictors in the order of the magnitude of their Pearson correlations with the response in linear regression models.
Although this is a simple technique, the probability that the set of variables selected by SIS contains the set of truly important variables converges to 1 as the sample size increases.
Several extensions of SIS have been proposed.
<cit.> extended the idea of SIS to generalized linear models, and
<cit.> extended it to high-dimensional additive models.
In addition, there are screening methods that use non-linear correlations instead of Pearson correlations.
<cit.> proposed a method that is robust to outliers that uses Kendall's rank correlation coefficient.
<cit.> used distance correlation, and
<cit.> used the Hilbert-Schmidt Independence Criterion (HSIC).
With these criteria, we can apply the screening methods without assuming any distribution for the variables.
<cit.> also proposed a method for censored data.
The development of screening methods was summarized in
<cit.>.
However, most of these screening methods have the problem that their performance degrades in the presence of multicollinearity.
To solve this problem, <cit.> proposed a method called High-dimensional Ordinary Least squares Projection (HOLP), which accommodates highly multicollinear predictors by selecting variables in the order of their relations estimated by high-dimensional ordinary least squares.
Factor Profiled Sure Independence Screening (FPSIS) proposed by
<cit.> transforms the data for predictors by applying factor analysis, which reduces multicollinearity.
Then we can select appropriate variables by applying SIS to the transformed data that correspond to unique factors.
Preconditioned Profiled Independence Screening (PPIS) proposed by
<cit.> improved the FPSIS transformation process to better reduce multicollinearity.
PPIS eliminates unnecessary information from the predictors by using all of the common factors obtained from applying factor analysis to the predictors, whereas FPSIS uses only a subset of common factors.
However, PPIS seems to eliminate more information about predictors than necessary, which can degrade variable selection performance.
To overcome this issue, we propose a method to improve the effectiveness of removing multicollinearity by modifying PPIS to select variables more accurately.
We truncate some of the common factors eliminated in the PPIS transformation process to prevent excessive loss of information for variable screening.
We call our proposed method Truncated PPIS (TPPIS).
The reason why TPPIS improves the variable selection performance can be explained by a model based on the distribution of eigenvalues.
The truncation part is determined objectively using the BIC-type criterion proposed by
<cit.>.
SIS is then applied to the data whose multicollinearity has been removed by the transformation process.
Through analysis of simulated and real data, we show that TPPIS can transform data appropriately.
The remainder of this paper is organized as follows.
Section 2 describes existing screening methods, and then the proposed method is described in Section 3.
In Section 4, we confirm the performance of the screening method through a simulated data analysis, and then report the results of real data analysis in Section 5.
Section 6 summarizes the main points.
§ SCREENING METHODS UTILIZING FACTOR ANALYSIS
Suppose we have n sets of observations { (y_i, x_i), i=1,…,n }, where y_i∈ℝ is a response and x_i = (x_i1, …, x_ip)^T∈ℝ^p is a vector of predictors.
In particular, we assume that n<p and x_i is standardized and y_i is centered.
The relationship between y_i and x_i is assumed to be represented by the following linear model.
y_i = x_i^Tβ + ε_i,
where β = (β_1, …, β_p)^T∈ℝ^p are regression coefficients and ε_i∈ℝ is independent and identically distributed (i.i.d.) random noise following N(0, σ^2).
Let y = (y_1, …, y_n)^T∈ℝ^n, X = (x_1, …, x_n)^T∈ℝ^n × p, and ε = (ε_1, …, ε_n)^T∈ℝ^n.
Then the above linear model can be expressed as
y = X β + ε.
Let ω = ( ω_1, …, ω_p )^T = X^Ty∈ℝ^p and define the importance of the j-th variable as |ω_j| (1 ≤ j ≤ p).
SIS screens out predictors deemed unnecessary by retaining the variables with the largest values of |ω_j|.
However, SIS does not work well in the presence of strong multicollinearity.
For example, |ω_j| can be small even for important variables, or large even for unimportant variables.
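For concreteness, the SIS ranking step can be written in a few lines of NumPy. The following is a minimal sketch under the assumptions of this section (standardized columns of X and a centered response y); the function name and the returned top-k index set are chosen purely for illustration.

```python
import numpy as np

def sis_rank(X, y, k):
    """Rank predictors by |omega_j| = |x_j^T y| and keep the k largest."""
    omega = X.T @ y                      # omega = X^T y
    order = np.argsort(-np.abs(omega))   # indices in decreasing order of |omega_j|
    return order[:k], omega
```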
In FPSIS <cit.>, SIS is applied after a transformation process to remove multicollinearity by applying factor analysis.
Let Z ∈ℝ^n × d be a matrix of vectors of d (<n) common factors of X, B ∈ℝ^p × d be factor loadings, and X̌∈ℝ^n × p be a matrix composed of unique factors.
Then we can express their relationship as X = Z B^T + X̌, where the columns of X̌ are independent of each other.
Although Z is not uniquely determined due to the rotation invariance, a solution for Z can be obtained by singular value decomposition.
Let μ_1, …, μ_n be n singular values of X, where μ_1≥…≥μ_n > 0, since we assume n<p here.
The singular value decomposition of X gives
X = U D V ^T,
where U = (u_1, …, u_n) ∈ℝ^n × n,
u_l = (u_1l, …, u_nl)^T∈ℝ^n,
D = diag(μ_1, …, μ_n) ∈ℝ^n × n,
V = (v_1, …, v_n) ∈ℝ^p × n,
v_l = (v_1l, …, v_pl)^T∈ℝ^p (l=1,…,n),
and
U^T U = V^T V = I_n.
Let U_1 = (u_1, … , u_d) ∈ℝ^n × d denote the first d columns of the matrix U in (<ref>).
Then U_1 can be regarded as one of the solutions of Z.
<cit.> decided the value of d by the following equation using the ratio of the singular values of X:
d = 1 ≤ l ≤ n-1argmaxμ_l^2/μ_l+1^2.
The projection matrix onto the orthogonal complement of the linear subspace spanned by the column vectors of the matrix U_1 is given by
Q_F = I_n - U_1 ( U_1^T U_1 )^-1 U_1^T.
Left-multiplying both sides of (<ref>) by Q_F gives
Q_Fy = Q_F X β + Q_Fε.
Let ŷ = (ŷ_1, …, ŷ_n)^T= Q_Fy and X̂ = (x̂_1, … , x̂_n)=Q_FX.
X̂ is an approximation of the unique factors X̌.
The use of X̂ instead of X enables us to eliminate multicollinearity and to select appropriate variables.
FPSIS calculates ω = (ω_1, …, ω_p)^T
= X̂^Tŷ∈ℝ^p, and then selects variables where |ω_j| is large in order.
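A minimal NumPy sketch of the FPSIS transformation described above is given below, assuming n < p so that the thin SVD has n singular values; the helper name `fpsis_rank` is illustrative, and the ranking step simply reuses the SIS idea on the profiled data.

```python
import numpy as np

def fpsis_rank(X, y, k):
    """Profile out d common factors chosen by the singular-value ratio, then rank."""
    n = X.shape[0]
    U, mu, Vt = np.linalg.svd(X, full_matrices=False)    # X = U diag(mu) V^T
    d = int(np.argmax(mu[:-1] ** 2 / mu[1:] ** 2)) + 1   # maximize mu_l^2 / mu_{l+1}^2
    U1 = U[:, :d]
    Q_F = np.eye(n) - U1 @ U1.T          # U1 has orthonormal columns, so (U1^T U1)^{-1} = I
    X_hat, y_hat = Q_F @ X, Q_F @ y      # approximate unique factors and profiled response
    omega = X_hat.T @ y_hat
    return np.argsort(-np.abs(omega))[:k], d
```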
PPIS <cit.> improved the FPSIS transformation process.
First, after applying SVD to X as in (<ref>), they divided each of the matrices U,D,V into two parts at the d-th column:
U_1 = (u_1, … , u_d) ∈ℝ^n × d,
U_2 = (u_d+1, …u_n) ∈ℝ^n × (n-d),
D_1 = diag(μ_1, … , μ_d) ∈ℝ^d × d,
D_2 = diag(μ_d+1, … , μ_n) ∈ℝ^(n-d) × (n-d),
V_1 = (v_1, … , v_d) ∈ℝ^p × d,
V_2 = (v_d+1, …v_n) ∈ℝ^p × (n-d).
Let
Q_P = U_2 D_2^-1 U_2^T{ I_n - U_1 ( U_1^T U_1 )^-1 U_1^T}
and replace Q_F with Q_P in (<ref>).
This is based on the Puffer transformation <cit.>.
PPIS calculates ω = X̂^Tŷ as in FPSIS, where ŷ = Q_Py and X̂ = Q_PX, and then selects variables in decreasing order of |ω_j|.
The number of dimensions d of U_1 is determined by (<ref>) using the ratio of the singular values of X.
We explain the reasonableness of PPIS in Section <ref> using a model based on the distribution of eigenvalues.
However, if the magnitudes of the singular values after the d-th are not sufficiently small compared to those before the d-th, X̂ is still multicollinear when we simply remove from X the effects that are related to the first d common factors of X.
Therefore, by removing the influence of the n common factors of X including the information after the d-th factor that is not used in FPSIS, X̂ becomes closer to the unique factors X̌, which leads to the elimination of more multicollinearity.
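The PPIS preconditioning can be sketched as follows for a given d chosen as in (<ref>); this is a schematic reading of Q_P with the Puffer-style inverse rescaling D_2^{-1}, and the function name is illustrative rather than part of the original proposal.

```python
import numpy as np

def ppis_transform(X, d):
    """Build the PPIS preconditioner Q_P and return the transformed predictors."""
    n = X.shape[0]
    U, mu, Vt = np.linalg.svd(X, full_matrices=False)
    U1, U2 = U[:, :d], U[:, d:]
    D2_inv = np.diag(1.0 / mu[d:])       # Puffer-style inverse rescaling of the tail
    P_perp = np.eye(n) - U1 @ U1.T       # remove the first d common factors
    Q_P = U2 @ D2_inv @ U2.T @ P_perp
    return Q_P @ X, Q_P
```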
§ PROPOSED METHOD
§.§ TPPIS
We propose selecting the number of factors to eliminate more multicollinearity by modifying the transformation process in PPIS.
Let α be a tuning parameter that satisfies α∈ (0,1] and d<[n α].
After applying SVD to X, as in (<ref>), we divide U,D,V into three parts at the d-th column and the [nα]-th column:
U = (U_1, U_2a, U_2b),
D = diag (μ_1, …, μ_n),
V = (V_1, V_2a, V_2b),
U_1 = (u_1, …, u_d),
U_2a = (u_d+1, …, u_[nα]),
U_2b = (u_[nα]+1, …, u_n),
D_1 = diag(μ_1, …, μ_d),
D_2a = diag(μ_d+1, …, μ_[nα]),
D_2b = diag(μ_[nα]+1, …, μ_n),
V_1 = (v_1, …, v_d),
V_2a = (v_d+1, …, v_[nα]), and
V_2b = (v_[nα]+1, …, v_n).
Then we define the following projection matrix
Q_T = U_2a D_2a^-1 U_2a^T{ I_n - U_1 ( U_1^T U_1 )^-1 U_1^T}.
Using X̂ = Q_T X rather than Q_P X, we can eliminate multicollinearity more accurately since Q_T leaves the information that corresponds to the unique factors by truncating U_2b and D_2b from U_2 and D_2, respectively.
TPPIS calculates ŷ, X̂, and ω using the equation that replaces Q_F with Q_T in (<ref>), and then selects variables where |ω_j| is large in order.
Denote a set of k selected variables as
M_k = { 1 ≤ j ≤ p : |ω_j| is among the first k largest of all }
and denote by X(M_k) ∈ℝ^n × k the submatrix of X whose columns correspond to the indices in M_k.
We predict the response using y = X(M_k) β̂(M_k), where β̂(M_k) is the least squares estimator of the regression coefficient of X̂(M_k), that is,
β̂(M_k)
= {X̂(M_k)^TX̂(M_k) } ^-1X̂(M_k)^Tŷ.
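The TPPIS screening and fitting steps might be implemented as in the following NumPy sketch, assuming d < [nα]; the names are illustrative, and the least-squares step corresponds to β̂(M_k) above.

```python
import numpy as np

def tppis_rank(X, y, d, alpha, k):
    """TPPIS: truncated preconditioning, SIS-style ranking, and least-squares fit."""
    n = X.shape[0]
    m = int(n * alpha)                    # truncation point [n * alpha]; requires d < m
    U, mu, Vt = np.linalg.svd(X, full_matrices=False)
    U1, U2a = U[:, :d], U[:, d:m]
    D2a_inv = np.diag(1.0 / mu[d:m])
    P_perp = np.eye(n) - U1 @ U1.T
    Q_T = U2a @ D2a_inv @ U2a.T @ P_perp  # truncates U_2b and D_2b relative to PPIS
    X_hat, y_hat = Q_T @ X, Q_T @ y
    omega = X_hat.T @ y_hat
    order = np.argsort(-np.abs(omega))[:k]          # selected index set M_k
    beta_hat, *_ = np.linalg.lstsq(X_hat[:, order], y_hat, rcond=None)
    return order, beta_hat
```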
§.§ Reasons why TPPIS improves the effectiveness of removing multicollinearity
We discuss the reason why TPPIS improves the effectiveness of removing multicollinearity and the variable selection performance.
<cit.> indicates that the transformation process using Q_P of (<ref>) works well for data that follow a highly multicollinear spike model.
The spike model has the property that some eigenvalues of the variance-covariance matrix are larger than others.
Suppose that the eigenvalues of a variance-covariance matrix X, denoted by Σ_p, can be divided into three size categories: large, medium, and small.
Among p eigenvalues, let d be the number of large eigenvalues, m be the number of medium eigenvalues, and p-d-m be the number of small eigenvalues.
Then the spike model assumes that
Σ_p is represented as
Σ_p
=∑_r=1^d (λ_r+σ_0^2) u^*_ru^*_r^T
+∑_s=1^m (ω_s+σ_0^2) u^*_d+su^*^T_d+s
+∑_t=1^p-d-mσ_0^2u^*_d+m+tu^*^T_d+m+t
,
where λ_1≥…≥λ_d > ω_1≥…≥ω_m > 0, σ_0^2 is a positive constant, and {u^*_1, …,u^*_p} constitute an orthonormal basis of ℝ^p.
In this case, X can be expressed as
X = ∑^d_r=1√(λ_r)z_ru^*_r^T
+ ∑^m_s=1√(ω_s)z_d+su^*_d+s^T
+ σ_0^2Λ
,
where z_w∈ℝ^n (w=1,…,d+m) are i.i.d. N( 0, I_n) vectors and Λ∈ℝ^n × p has i.i.d. N(0, 1) elements.
The vectors z_r and u^*_r respectively represent a common factor and a factor loading of X, and σ_0^2Λ represents a unique factor of X.
Let X_1, X_2, X_3 be the first, second, and third terms of (<ref>), respectively; that is, we can express (<ref>) as X = X_1 + X_2 + X_3.
Since Q_F in (<ref>) is the projection matrix onto the orthogonal complement of the linear subspace spanned by the column vector U_1∈ℝ^n × d, Q_F can remove the effect of d common factors.
That is,
Q_F X = Q_F (X_1 + X_2 + X_3)
≈ X_2 + X_3 .
The PPIS transformation process using Q_P in (<ref>) can remove the effect of X_1 and X_2.
However, since U_2 and D_2 in Q_P use all column vectors after the d-th column, some extra information seems to have been removed from the unique factor X_3 that should have been left behind.
Q_T in (<ref>), which truncates U_2b from U_2 and D_2b from D_2, can improve variable selection performance by leaving the unique factors more accurately.
§.§ Selection of tuning parameter
The performance of the proposed method strongly depends on the dimension d of U_1, the tuning parameter α, and the number k of selected variables.
We have to decide appropriate values for them.
To do this, we use the BIC-type criterion adapted to high-dimensional data proposed by <cit.>.
Using β̂(M_k) in (<ref>), the BIC-type criterion is given by
BIC(M_k) = log{|| y - X(M_k) β̂(M_k) || ^2} + (n^-1log p) |M_k| log n.
We use a grid search to find the optimal d, α, and k,
selecting the values that minimize the BIC as the optimal parameters.
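As an illustration, the criterion and the grid search could be coded as below; this sketch reuses the hypothetical `tppis_rank` helper from the earlier sketch and evaluates the residual sum of squares with the original y and X(M_k), as in (<ref>). The grids are user-supplied.

```python
import numpy as np

def bic_hd(y, X_sel, beta_hat, p):
    """BIC-type criterion: log RSS + (log p / n) |M_k| log n."""
    n = len(y)
    rss = np.sum((y - X_sel @ beta_hat) ** 2)
    return np.log(rss) + (np.log(p) / n) * X_sel.shape[1] * np.log(n)

def grid_search_tppis(X, y, d_grid, alpha_grid, k_grid):
    """Pick (d, alpha, k) minimizing the BIC-type criterion."""
    n, p = X.shape
    best_crit, best_par = np.inf, None
    for d in d_grid:
        for alpha in alpha_grid:
            if d >= int(n * alpha):       # require d < [n * alpha]
                continue
            for k in k_grid:
                order, beta_hat = tppis_rank(X, y, d, alpha, k)
                crit = bic_hd(y, X[:, order], beta_hat, p)
                if crit < best_crit:
                    best_crit, best_par = crit, (d, alpha, k)
    return best_par, best_crit
```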
§ SIMULATION EXAMPLES
To investigate the effectiveness of the proposed TPPIS method, we compare TPPIS with the existing methods.
After calculating the importance of each predictor on the response for each method, the number of variables is determined using the BIC-type criterion (<ref>), and then the variable selection performance is verified.
§.§ Settings for simulated data
We conducted four examples.
For all examples, the sample size is set to n = 100 or 300 and the number of predictors to p = 1000.
For the TPPIS parameter d, we examined six patterns: 0.2n, 0.4n, 0.6n, 0.8n, 1.0n, and the value given by (<ref>).
In addition, we examined five values ranging from 0.2 to 1.0 in increments of 0.2 for α.
For the number of variables, k, we examined p values ranging from 1 to p.
We then select the d, α, and k giving the smallest BIC as the optimal parameters.
* Example 1
For each i in 1 ≤ i ≤ n,
y_i = 5 x_i1 + 5 x_i2 + 5 x_i3 - 15 x_i4 + ε_i,
where ε_i are i.i.d. errors following N(0,1), x_i=(x_i1, … , x_ip)^T are i.i.d. predictors following N( 0, Σ) and the variance-covariance matrix Σ=(Σ_jk)^p_j,k=1 satisfies
Σ_jj = 1,
Σ_jk = φ (j ≠ k, j ≠ 4, k ≠ 4),
Σ_4,k = Σ_j,4= √(φ) (j, k ≠ 4)
.
We investigated three values for the parameter φ: 0.5, 0.7, and 0.9.
* Example 2
For each i in 1 ≤ i ≤ n,
y_i = 5 x_i1 + 5 x_i2 + 5 x_i3 - 15 x_i4 + 5 x_i5 + ε_i .
The setting is similar to that in Example 1, but the fifth variable is added.
In addition, the variance-covariance matrix Σ of the predictor satisfies Σ_5,j=Σ_j,5= 0 (j≠ 5).
* Example 3
For each i in 1 ≤ i ≤ n,
y_i = 5 x_i1 + 5 x_i2 + 5 x_i3 - 15 x_i4 + 5 x_i5 + ε_i .
The regression model is the same as in Example 2, except that the sixth variable, which is not included in the regression model, satisfies x_i6 = 0.8 x_i5 + δ_i, where δ_i follows i.i.d. N(0, 0.01).
Compared to Example 2, the data for the predictors are more multicollinear.
* Example 4
We consider the case where X follows a spike model (<ref>), given by
X = ∑^d_r=1z_rb_r^T
+ ∑^m_s=1 n^-(s+9)/m+10z_d+sb_d+s^T
+ X̌
,
where z_k∈ℝ^n (k=1, …, d+m) are i.i.d. vectors following N( 0, I_n), b_k∈ℝ^p is a vector of i.i.d. N(0, 1) elements, and X̌ = (x̌_1, …, x̌_n)^T∈ℝ^n × p with x̌_i = (x̌_i1, …, x̌_ip)^T∈ℝ^p, E(x̌_ij)=0, and cov(x̌_ij_1, x̌_ij_2)=I_p.
This case corresponds to equation (<ref>) with √(λ_r) = 1 (1 ≤ r ≤ d), √(ω_s) = n^-(s+9)/m+10 (1 ≤ s ≤ m), and σ_0^2 = 1.
This model is the same as that used in the simulation by <cit.>.
In this example, d is set to 3 and m is set according to 4 patterns: 0.2n, 0.4n, 0.6n, and 0.8n.
The regression model is given by
y_i = 5 x_i1 + 4 x_i2 + 3 x_i3 + 2 x_i4 + ε_i,
where
ε_i are i.i.d. errors following N(0, σ^2) with σ^2 = var(X β) / 5 and β=(5,4,3,2,0,…,0)^T∈ℝ^p.
In each example, we generate datasets 100 times for each combination of parameters.
For each dataset, the number of selected predictors and the least squares estimator (<ref>) are calculated.
The number of variables is determined using the BIC in (<ref>).
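As an example of the data-generating mechanism, Example 1 could be simulated as follows; the covariance construction follows the structure stated above, and the function name and random-generator interface are illustrative.

```python
import numpy as np

def gen_example1(n, p, phi, rng):
    """One simulated dataset for Example 1."""
    Sigma = np.full((p, p), phi)                 # equicorrelation phi off-diagonal
    Sigma[3, :] = Sigma[:, 3] = np.sqrt(phi)     # fourth variable: correlation sqrt(phi)
    np.fill_diagonal(Sigma, 1.0)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = np.zeros(p)
    beta[:4] = [5.0, 5.0, 5.0, -15.0]
    y = X @ beta + rng.standard_normal(n)
    return X, y

# e.g. X, y = gen_example1(100, 1000, 0.5, np.random.default_rng(0))
```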
§.§ Comparison methods
The proposed TPPIS method is compared with the existing SIS, FPSIS, and PPIS methods.
In addition to the original FPSIS, which selects the value of d using the ratio of singular values (<ref>), we also compared a modified FPSIS where d is selected by the BIC in (<ref>) rather than by (<ref>).
We denote this method as FPSIS_BIC.
We test the values of d in FPSIS_BIC with six patterns, as in the case of TPPIS.
§.§ Score metric for screening
We evaluate the variable selection performance of the screening methods using the score based on the number of correctly and incorrectly selected variables.
We refer to necessary predictors as Positive (P) and unnecessary variables as Negative (N) in the regression model.
Since the true regression coefficients of the simulated data are known, we can calculate True Positive (TP), False Positive (FP), True Negative (TN), False Negative (FN), Recall (TP/(TP+FN)), and Precision (TP/(TP+FP)).
The weighted F-score is weighted on the Recall side by the importance θ as follows:
Fθ-score
= (1 + θ^2) / ( 1/Precision + θ^2/Recall )
= (1 + θ^2)(Precision × Recall) / ( Recall + θ^2 × Precision ),
where Precision = TP/(TP+FP) and Recall = TP/(TP+FN).
Since the screening needs to select as many important variables with non-zero regression coefficients as possible, we use the F2-score, which treats Recall as important.
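For reference, a direct implementation of this score (assuming TP+FP and TP+FN are nonzero) is:

```python
def f_theta_score(tp, fp, fn, theta=2.0):
    """Weighted F-score; theta = 2 gives the F2-score, which emphasizes Recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (1 + theta ** 2) * precision * recall / (recall + theta ** 2 * precision)
```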
§.§ Simulation results
The results of the variable selection for Example 1 are shown in Table <ref>.
The numbers in the x_(j) column represent the total number of times that the j-th predictor variable is selected.
For all settings, SIS never selected x_(4).
This is because y=(y_1, … , y_n)^T and (x_14, … , x_n4)^T are uncorrelated due to the generation mechanism of the data, which gives a smaller |ω_4|.
For other methods than SIS, the value of |ω_4| is larger than that for SIS due to the transformation process by factor analysis.
In particular, the proposed TPPIS obtains the largest x_(4).
F2-scores for TPPIS are the highest under all settings.
Although the best α of TPPIS is 1 for the case φ=0.5, the F2-scores for TPPIS are better than those for PPIS because TPPIS selects d by BIC.
We confirmed that the performance of TPPIS in variable selection is improved compared to the existing methods.
Figure <ref> shows values of BIC and F2-scores for fixed d and different α in TPPIS.
This figure demonstrates that α is selected appropriately by BIC.
The results for Example 2 are shown in Table <ref>.
The table shows that in many cases the numbers in x_(5) are close to 100 because the fifth variable is uncorrelated with the other predictors.
F2-scores for TPPIS are the highest in all cases.
Table <ref> summarizes the result for Example 3.
This shows that the numbers in x_(5) and the value of F2-score are smaller than those of Example 2 due to the addition of the sixth variable, which is highly correlated with the fifth variable.
For the cases with φ=0.9, FPSIS_BIC and TPPIS, which determine d by BIC, give lower x_(6) values.
It seems to be useful to use BIC to select d for data with multicollinearity.
TPPIS gives the highest F2-score among all methods.
Table <ref> shows the results for Example 4.
In this example, the variables with large regression coefficients tend to be more important, resulting in x_(1)≥ x_(2)≥ x_(3)≥ x_(4) under many settings.
F2-scores for PPIS and TPPIS are high because these methods are effective for the spike model.
In particular, TPPIS gives the highest F2-scores for all settings.
§ REAL DATA ANALYSIS
We apply the proposed screening methods to the analysis of two real data sets.
For both datasets, we investigated TPPIS parameters d and α, as in Section <ref>, and then the d and α values giving the lowest BIC are selected as the optimal parameters.
§.§ Condition monitoring of hydraulic systems
We applied the screening methods to data on condition monitoring of a hydraulic system <cit.>.
This dataset was obtained experimentally using a hydraulic test rig to measure values such as pressure, volumetric flow, and temperature while varying the settings of four different hydraulic components (coolers, valves, pumps, and accumulators).
We use data with the sample size 1449, taken under stable system settings.
The response is a value that expresses the degree of accumulator failure as a continuous value.
A higher value is closer to normal condition with 130 being the optimal pressure, 115 being a slightly reduced pressure, 100 being a severely reduced pressure, and 90 being close to total failure.
The predictors are the values measured by 17 sensors, giving a total of 43,680 predictors.
We apply the five screening methods to analyze this dataset as in the section on examples of simulated data.
The number of variables is determined using BIC.
Table <ref> shows the results of the analysis of this dataset.
From this result we find that TPPIS selects variables from the largest number of sensors.
TPPIS selects variables `volume flow sensors (FS)' and `efficiency factor (SE)', which are not selected by the other methods.
In addition, TPPIS gives the best BIC score among all methods.
These results indicate that these sensors may relate to the condition of accumulators.
§.§ S&P500
The S&P 500, one of the U.S. stock market indices, is obtained by weighting the market capitalization of 500 companies selected as representative of publicly traded companies.
This analysis uses the data for the year 2020.
The sample size is 253, which is the number of trading days.
The response is the value of the S&P500, and the predictors are the stock price of each of the 500 companies that make up the S&P500.
Note that the number of predictor columns may be greater than 500 because some companies have multiple share classes, differentiated by whether they carry voting rights.
Since the S&P500 is weighted by market capitalization, it is assumed that the stock price of the company with the highest market capitalization is selected as an important variable.
The values of the S&P500 are taken from FRED <cit.>, and the stock prices of the 500 companies that make up the S&P500 are taken from
<cit.>.
We applied five screening methods to this dataset and compared BIC and selected variables.
The results for the S&P500 are shown in Table <ref>.
TPPIS gives the best BIC score among all the methods.
The 12 variables selected by TPPIS include companies with particularly large market capitalizations such as `AAPL' (Apple), `MSFT' (Microsoft) and `AMZN' (Amazon).
§ CONCLUSION
We have proposed TPPIS, a variable screening method for high-dimensional data with strong multicollinearity.
TPPIS improves the variable selection performance by using a BIC-type criterion to determine the number of common factors that have a role in removing multicollinearity.
In the analysis of simulated data, TPPIS outperformed existing methods using factor analysis for variable selection.
This suggests that TPPIS may be able to correctly select variables that are not considered important by existing methods.
The transformation process of TPPIS to remove multicollinearity from the data uses only information from the data corresponding to the predictors and we do not consider the relation to the response.
Developing a transformation processing method that incorporates information from both types of data could further improve the variable selection performance.
Although numerical examples confirmed that the performance of TPPIS is better than that of existing methods, no mathematical proof is provided.
In the process of devising a proof, we may be able to identify the characteristics of the data for which TPPIS is most effective.
§ ACKNOWLEDGMENT
This work was supported by JSPS KAKENHI Grant Numbers 19K11858 and 23K11005.
tfnlm
10
tian2015variable
Tian S, Yu Y, Guo H. Variable selection and corporate bankruptcy forecasts. J
Bank Financ. 2015;52:89–100.
fang2020predicting
Fang T, Lee TH, Su Z. Predicting the long-term stock market volatility: A
garch-midas model with variable selection. J Empir Finance.
2020;58:36–49.
chowdhury2020variable
Chowdhury MZI, Turin TC. Variable selection strategies and its importance in
clinical prediction modelling. Fam Med Community Health.
2020;8(1).
yun2019overview
Yun YH, Li HD, Deng BC, et al. An overview of variable selection methods in
multivariate analysis of near-infrared spectra. Trends Analyt Chem.
2019;113:102–115.
fan2008sure
Fan J, Lv J. Sure independence screening for ultrahigh dimensional feature
space. J R Stat Soc B. 2008;70(5):849–911.
fan2010sure
Fan J, Song R. Sure independence screening in generalized linear models with
np-dimensionality. 2010;.
fan2011nonparametric
Fan J, Feng Y, Song R. Nonparametric independence screening in sparse
ultra-high-dimensional additive models. J Am Stat Assoc.
2011;106(494):544–557.
li2012robust
Li G, Peng H, Zhang J, et al. Robust rank correlation based screening.
2012;.
li2012feature
Li R, Zhong W, Zhu L. Feature screening via distance correlation learning. J Am
Stat Assoc. 2012;107(499):1129–1139.
balasubramanian2013ultrahigh
Balasubramanian K, Sriperumbudur B, Lebanon G. Ultrahigh dimensional feature
screening via rkhs embeddings. In: Proceedings of the Sixteenth International
Conference on Artificial Intelligence and Statistics; 2013 29 April- 1 May;
Scottsdale, Arizona. PMLR; 2013. p. 126–134.
zhang2017correlation
Zhang J, Liu Y, Wu Y. Correlation rank screening for ultrahigh-dimensional
survival data. Comput Statist Data Anal. 2017;108:121–132.
fan2018sure
Fan J, Lv J. Sure independence screening. Wiley StatsRef: Stat Ref Online.
2018;.
wang2016high
Wang X, Leng C. High dimensional ordinary least squares projection for
screening variables. J R Stat Soc B. 2016;:589–611.
wang2012factor
Wang H. Factor profiled sure independence screening. Biometrika.
2012;99(1):15–28.
zhao2020high
Zhao N, Xu Q, Tang ML, et al. High-dimensional variable screening under
multicollinearity. Stat. 2020;9(1):e272.
jia2012preconditioning
Jia J, Rohe K. Preconditioning to comply with the irrepresentable condition.
arXiv preprint arXiv:12085584. 2012;.
helwig2015condition
Helwig N, Pignanelli E, Schütze A. Condition monitoring of a complex
hydraulic system using multivariate statistics. In: 2015 IEEE International
Instrumentation and Measurement Technology Conference (I2MTC) Proceedings;
2015 11-14 May; Pisa, Italy. IEEE; 2015. p. 210–215.
fred
FRED [Internet]. St. Louis: Federal Reserve Bank of St. Louis ; 2023 Jan
25 [cited 2023 Jan 25]. Available from:
https://fred.stlouisfed.org/series/SP500.
500stocks
Hanseo P. Data from: S&P 500 stocks price with financial statement
[dataset] ; 2022 Apr 18 [cited 2023 Jan 25]. In: Kaggle Datasets [Internet].
Available from:
https://www.kaggle.com/hanseopark/sp-500-stocks-value-with-financial-statement.
| http://arxiv.org/abs/2306.07587v1 | 20230613072810 | Efficient Algorithm for Solving Hyperbolic Programs | ["Yichuan Deng", "Zhao Song", "Lichen Zhang", "Ruizhe Zhang"] | math.OC | ["math.OC"] |
| http://arxiv.org/abs/2306.03478v1 | 20230606075354 | Interplay between multi-spin and chiral spin interactions on triangular lattice | ["Li-Wei He", "Jian-Xin Li"] | cond-mat.str-el | ["cond-mat.str-el"] |
National Laboratory of Solid State Microstructures and School of Physics, Nanjing University, Nanjing 210093, China
[email protected]
National Laboratory of Solid State Microstructures and School of Physics, Nanjing University, Nanjing 210093, China
Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China
We investigate the spin-1/2 nearest-neighber Heisenberg model with the four-site ring-exchange J_4 and chiral interaction J_χ on the triangular lattice by using the variational Monte Carlo method. The J_4 term induces the quadratic band touching (QBT) quantum spin liquid (QSL) with only a d+id spinon pairing (without hopping term), the nodal d-wave QSL and U(1) QSL with a finite spinon Fermi surface progressively. The effect of the chiral interaction J_χ can enrich the phase diagram with two interesting chiral QSLs (topological orders) with the same quantized Chern number C = 1/2 and ground-state degeneracy GSD = 2, namely the U(1) chiral spin liquid (CSL) and Z_2 d+id-wave QSL. The nodal d-wave QSL is fragile and will turn to the Z_2 d+id QSL with any finite J_χ within our numerical calculation. However, in the process from QBT to the Z_2 d+id QSL with the increase of J_χ, an exotic crossover region is found. In this region, the previous QBT state acquires a small hopping term so that it opens a small gap at the otherwise band touching points, and leads to an energy minimum which is energetically more favorable compared to another competitive local minimum from the Z_2 d+id QSL. We dub this state as the proximate QBT QSL and it gives way to the Z_2 d+id QSL eventually. Therefore, the cooperation of the J_4 and J_χ terms favors mostly the Z_2 d+id-wave QSL, so that this phase occupies the largest region in the phase diagram.
Interplay between multi-spin and chiral spin interactions on triangular lattice
Jian-Xin Li
July 31, 2023
===============================================================================
§ INTRODUCTION
As an exotic and attractive phase in condensed matter physics, the quantum spin liquid (QSL)<cit.> has been studied extensively in recent years. One of the remarkable characteristics of QSL is that it does not possess any magnetic order even at zero temperature. It is in fact not conventional phase triggered by any symmetry breaking at low temperatures obeying the paradigm of Landau's theory, but a new quantum phase with fractional excitations and classified by projective symmetry group (PSG)<cit.>. Intrinsically, this exotic ground state has non-trivial quantum many-body entanglement so that different types of QSLs correspond to different patterns of entanglement. Besides, there is a straight and coarse classification to distinguish two classes of spin liquids based on whether or not there is an energy gap between the excitation spectrum and ground state. Gapped spin liquids have topological order characterized by the global topological structure and ground-state degeneracy (GSD)<cit.>. On the other hand, in gapless systems, the quasiparticle description, such as gapless fermionic (Dirac) spinon, breaks down<cit.>. And they may be characterized by a higher dimensional topological order, or the categorical symmetry<cit.>.
It is expected that a spin system may fall into a QSL instead of a long-range magnetically ordered phase when quantum fluctuations are strong enough. Usually, spin frustrations, both geometrical and exchange, can effectively enhance the quantum fluctuations. As a celebrated example, the Kitaev model on the honeycomb lattice<cit.> has an exact QSL ground state, where the exchange frustrations arise from the bond-dependent anisotropic spin couplings, even though there is no geometrical frustration in this lattice structure. However, the Kitaev model
is difficult to realize in pure spin systems due to its highly anisotropic Kitaev interactions. Recently, much progress has been made toward realizing the Kitaev interactions in a class of Mott insulating magnets with strong spin-orbit coupling<cit.>. On the other hand, the triangular and kagome lattices host strong geometrical frustration, so the search for QSLs in materials with these lattice structures is ongoing<cit.>.
The QSL was first proposed in the triangular antiferromagnetic (AFM) Heisenberg model<cit.>. In recent years, it has become generally agreed that the AFM Heisenberg model with only the nearest-neighbour (NN) J_1 spin interaction on the triangular lattice exhibits 120^∘ magnetic order at low temperatures. Therefore, several possible competing interactions beyond the J_1 term have been considered. For example, it is found that when the second-NN exchange interaction lies in the range 0.08 ≲ J_2/J_1 ≲ 0.16, the 120^∘ order is melted <cit.> and a QSL arises <cit.>. Nevertheless, there is still debate about the class of the QSL in this J_1-J_2 AFM model, though a Dirac (gapless) spin liquid has been proposed by extensive numerical calculations<cit.>. This ambiguity is due to the complexity of the possible gapless QSLs, as mentioned above<cit.>. Besides the second-NN J_2 exchange interaction, the four-site ring-exchange interaction (J_4 term)<cit.> was proposed to favor a U(1) QSL with a large Fermi surface on the triangular lattice. It has been shown that the 120^∘ order is robust against a small J_4/J_1 ratio and gives way to the U(1) QSL at a large one<cit.>. In between, several different phases have been claimed, including the chiral d + id QSL, the staggered valence bond solid and the nodal d-wave QSL, depending on the effective spin model and numerical technique<cit.>. In particular, a new spin liquid with d + id spinon pairing and vanishing hopping has been found when J_2≈ 0, whose dispersion exhibits a quadratic band touching at k⃗ = 0, so it is dubbed the QBT spin liquid<cit.>. This state has been used to explain many of the intriguing experimental properties of the low-temperature phase in the organic spin liquid candidate materials EtMe_3Sb[Pd(dmit)_2]_2<cit.> and κ-(BEDT-TTF)_2Cu_2(CN)_3<cit.>.
We also note that a recent experimental observation of a QSL candidate in the inorganic triangular-lattice material NaRuO_2 also suggests the importance of longer-range exchange interactions<cit.>, such as the J_4 term. On the other hand, the scalar chiral interaction J_χ, which can be derived from the t/U expansion of the Hubbard model at half filling with a Φ flux threading the elementary triangles <cit.>, has been introduced. The J_χ term can naturally stabilize topological CSLs<cit.> in a spin-1/2 Heisenberg model on the triangular lattice<cit.>. Consequently, the Berry curvature of CSLs can lead to a nontrivial thermal Hall effect<cit.>. Generally, it is expected that this J_χ interaction could stabilize the d + id chiral spin liquid induced by the J_4 interaction. However, there is a lack of a systematic study of the combined effects of the scalar chiral interaction J_χ and the four-site ring-exchange J_4 in the triangular AFM model.
In this work, we investigate the interplay between the scalar chiral interaction J_χ and four-site ring-exchang J_4 in the triangular AFM model with the NN spin interaction J_1 using the variational Monte Carlo (VMC) method, focusing on the different chiral spin liquids and their intrinsic topology. We find that a finite J_χ interaction can stabilize two distinguishable chiral states with time-reversal symmetry breaking, i.e., U(1) CSL with nontrivial fluxes through elementary triangles and chiral Z_2 d + id-wave QSL. Both of them have the same quantized Chern number C = 1/2 but actually belong to distinct phases. Especially, there are two different states of the d + id-wave phase, which correspond to two energetic minima induced by J_χ. They not only compete with each other, but also happen to be energetic degeneracy within the numerical error under specific conditions. The 120^∘ magnetic ordered phase is robust against both a weak four-site ring-exchang and scalar chiral interaction, and is proximate to an algebraic U(1) Dirac QSL by PSG classifications<cit.>. At J_χ=0 and with increase of J_4, the system goes through progressive transitions from the 120^∘ magnetic order state to a symmetric Z_2 QBT QSL, nodal d-wave QSL and U(1) QSL (or called uniform RVB state in literature) with large spinon Fermi surface (U(1) SFS), which is qualitatively consistent with the results reported in Ref. <cit.>. We find that both the QBT and nodal d-wave QSLs are unstable against a very small chiral interaction J_χ. The achiral nodal d-wave state will immediately give way to the chiral Z_2 d + id state with relatively weak spinon pairings. While, the QBT state falls into another chiral Z_2 d + id state with a relatively insignificant hopping terms, which is one of the two different d + id states mentioned above. The dispersion of its quasi-particles is almost the same as that of the QBT state, except a very small energy gap in the former. Thereby, we call it the proximate quadratic band touching (PQBT) state. With further increase of J_χ, the PQBT will enter into the chiral Z_2 d + id state.
The paper is organized as follows. In Sec.<ref>, we introduce the model, the variational Monte Carlo method and the calculation of Chern numbers by the optimized variational wave functions, mainly focus on the construction of the trial variational wave functions. In Sec.<ref>, we present our results on the phases induced by the J_4 and J_χ terms independently and the effects of their interplay. Section <ref> presents a summary. In Appendix A, we introduce the calculation method for the ground-state degeneracy.
§ MODEL AND METHOD
The model we considered is written by,
H = J_1∑_⟨ i,j ⟩ 2 S⃗_i·S⃗_j
+ J_χ∑_i,j,k ∈▵/▿S⃗_i· (S⃗_j×S⃗_k)
+ J_4∑_i,j,k,l ∈◊(P_ijkl + H.c.),
where the J_1 term is the nearest-neighbour (NN) AFM Heisenberg exchange, and the J_χ term is the scalar chiral interaction with the same magnitude in every elementary triangle (either up triangle ▵ or down triangle ▿; the three sites in each triangle are taken in the clockwise direction, see Fig. <ref>). The last term J_4 is the four-spin coupling and is given in detail by
∑_i,j,k,l ∈◊(P_ijkl + H.c.) = 5∑_⟨ i,j ⟩S⃗_i·S⃗_j
+ ∑_⟨⟨ i,j ⟩⟩S⃗_i·S⃗_j
+ 4∑_i,j,k,l ∈◊[(S⃗_i·S⃗_j)(S⃗_k·S⃗_l)
+ (S⃗_i·S⃗_l)(S⃗_j·S⃗_k)
- (S⃗_i·S⃗_k)(S⃗_j·S⃗_l)] + 1/4,
where ⟨⟨ i,j ⟩⟩ denotes the next-nearest neighbor (NNN) bonds and
i,j,k,l ∈◊ means summing all of elementary four-site rhombi defined by unique NNN pairs
⟨⟨ i,k ⟩⟩(see Fig. <ref>). Hereinafter, we set J_1 = 1 as the unit of energy.
In this work, we study the phase diagram of the model (<ref>) with the variational Monte Carlo method. We start from the parton construction (or Abrikosov-fermion spinon representation) of spin 1/2 operators: S⃗ = 1/2∑_α, β = ↑,↓f_α^†σ⃗_αβf_β. With this representation, we halve one spin to two fermionic spinons. Hence, the original physical spin-1/2 Hilbert space must be recovered by imposing the on-site constraint ∑_αf_α^†f_α = 1. Furthermore, this fermionic fractionalization will lead to the emergence of a SU(2) gauge structure<cit.>, which can become explicit if we introduce two doublets,
ψ_1 = ( f_↑, f_↓^† )^T,
ψ_2 = ( f_↓, -f_↑^† )^T,
and put them into a matrix Ψ = (ψ_1, ψ_2), then we can rewrite the spin-1/2 operator as follows:
S⃗ = 1/4 Tr( Ψ^†Ψσ⃗^T ),
where σ⃗ is the Pauli matrix.
Besides this formalistic demonstration, intrinsically, the emergent SU(2) gauge structure can also be revealed by the combination of the U(1) gauge structure: f_σ→ f_σ e^iα and the particle-hole redundancy: f_σ→ f_σcos(β) + σ f_σ̅^†sin(β), both α and β are any angles. In addition, it is the unique property for the fermionic representation but not for the bosonic one with only U(1) gauge structure.
Then, we decouple the Hamiltonian Eq. <ref> into a general quadratic fermionic Hamiltonian of the form with a unconsidered constant number,
H_mf = ∑_i,j( t_ijf_iσ^† f_jσ + Δ_ijf_i↑^† f_j↓^† + H.c.)
+ 2∑_iM⃗_i ·S⃗_i + const,
where an additional background field M⃗_i is introduced to induce a static magnetic long-range order as done before <cit.>.
Because the model we consider is complicated, we would obtain a very large number of mean-field variational parameters if all channels were included. As is well known, numerical simulations with too many variational parameters are not very reliable and tend to obscure the physics within a limited computational budget. Combining previous works Ref. <cit.> with the physics we focus on, here we only consider the NN-bond (⟨ i,j ⟩) hoppings t_⟨ ij ⟩ and pairings Δ_⟨ ij ⟩, together with the background field M⃗_i.
After constructing the ground state |G⟩_mf of the mean-field Hamiltonian Eq.<ref>, we utilize the Gutzwiller projective operator P_G = ∏_i(1 - n_i↑n_i↓) to |G⟩_mf to enforce the local particle number constrain: n_i↑ + n_i↓ = 1. Finally, we obtain a general trial variational wave function |Ψ(𝒫)⟩ = P_G|G⟩_mf, where 𝒫 denotes variational parameters. For those states without magnetic order, we can set ∑_iS_i^z=0 (or N_↑ = N_↓ = N/2, N is number of the lattice) without loss of generality. We fix the spinon chemical potential μ = t_ii such that |G⟩_mf is at half filling before projection<cit.>. Actually, for those ansatzes without spinon pairings, we just need the N occupied states of H_mf to construct the trial wave functions, so the chemical potential makes no difference for this procedure so that we can abandon it as a variational parameter in practical numerical calculations. As a result, the rest parameters 𝒫=(t_⟨ ij ⟩, Δ_⟨ ij ⟩, M⃗_i) in the mean-field Hamiltonian are used as variational parameters. And we consider different types of ansatzes, including various Z_2, U(1) QSLs, and 120^∘ AFM ordered states to construct the initial trial wave functions, where the parameters 𝒫 are optimized by minimizing the trial energy E(𝒫) = ⟨Ψ|H|Ψ⟩/⟨Ψ|Ψ⟩. We adopt a triangle lattice with torus geometry: L_1 = L_2 = 12 (L_1,2 are the lengths along the two reciprocal basic vectors (a⃗_1,2) of primitive cell, see Fig. <ref>).
To extract the topological properties for these gapped CSLs, we calculate Chern numbers by use of the optimized variational wave functions |Ψ⟩_opt with twist boundary condition<cit.>, as following
f_i+L_k,↑ = f_i,↑e^iθ_k; f_i+L_k,↓ = f_i,↓e^-iθ_k(k = 1,2),
BP(p) = Im( ln∏_i=1^4⟨Ψ^p_i+1|Ψ^p_i⟩),
C_total= 1/2π∑_p BP(p),
where Eq. <ref> expresses the twist boundary condition, BP(p) is the Berry phase in the plaquette p, the label i=1,2,3,4 denotes the four corners of the pth plaquette, and the overlaps are calculated by the Monte Carlo method. Eq. <ref> is used to calculate the Chern numbers numerically. In our calculations, we have checked the results with the numbers of mesh plaquettes N_p = 36, 64, 100, 144, and find that N_p = 100 is large enough that the Chern numbers do not change upon further increasing the mesh size.
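As a schematic illustration of the discretized Berry-phase sum above, the plaquette phases can be accumulated over the twist-angle mesh as in the sketch below; here `state(theta1, theta2)` is a hypothetical placeholder returning the normalized (projected) wave function as a vector, and the overlaps are plain inner products, whereas in the actual calculation they are estimated by the Monte Carlo method.

```python
import numpy as np

def total_chern_number(state, n_mesh=10):
    """Sum the plaquette Berry phases over an n_mesh x n_mesh grid of twist angles."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_mesh, endpoint=False)
    psi = [[state(t1, t2) for t2 in thetas] for t1 in thetas]
    c_total = 0.0
    for a in range(n_mesh):
        for b in range(n_mesh):
            p1 = psi[a][b]
            p2 = psi[(a + 1) % n_mesh][b]
            p3 = psi[(a + 1) % n_mesh][(b + 1) % n_mesh]
            p4 = psi[a][(b + 1) % n_mesh]
            # BP(p) = Im ln <p2|p1><p3|p2><p4|p3><p1|p4>
            prod = (np.vdot(p2, p1) * np.vdot(p3, p2)
                    * np.vdot(p4, p3) * np.vdot(p1, p4))
            c_total += np.angle(prod)
    return c_total / (2.0 * np.pi)
```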
§ RESULTS
Let us first list all six phases we find, which are summarized in the phase diagram shown in Fig. <ref>. It consists of one long-range magnetic 120^∘ order and five disordered phases; the latter include two chiral spin liquids with nontrivial Chern number and three achiral ones. We have also considered the potential tetrahedral order<cit.>, especially for a large chiral interaction J_χ/J_1, at which the spins may behave as classical objects. However, within our variational numerical simulations we find that this ordered phase is not energetically favored in the parameter range considered, compared with the phases appearing in the phase diagram. In the following, we discuss these phases in detail.
§.§ Effects of J_4
In this subsection, we fix J_χ = 0 to investigate the effects of the four-site ring-exchange interaction J_4. As well known, the 120^∘ long-range magnetically ordered phase exists when J_4=0. In fact, as has been shown before <cit.> , a background magnetic field M⃗_i = |M⃗|(cos(Q⃗·r⃗_i),sin(Q⃗·r⃗_i),0) is needed to induce this order in the framework of the variational Monte Carlo method, where |M⃗| denotes the field amplitude and Q⃗ = (1/3, 1/3) (here we adopt the single-Q⃗ approximation and the reciprocal bases of the primitive cells as denoted in Fig. <ref>). Otherwise, this magnetic phase will disappear and fall into a U(1) Dirac QSL yielded by the NN-bond hopping terms containing alternate 0 and π flux in the elementary triangles. So, we label it as `120^∘ order+π flux' in the phase diagram Fig. <ref>.
With the introduction of the J_4 term, we find that the 120^∘ order can survive in an extended region up to J_4≈ 0.153. Then, it enters into the QBT state with only a d + id spinon pairing, but without the hopping term as has been found before<cit.>. The phases on different bonds of the d + id spinon pairing state are illustrated in the inset of Fig. <ref>. This QBT state exhibits the quadratic band touching at k⃗ = 0 in its dispersion, as shown in Fig. <ref>(a). With the further increase of J_4 term, the Z_2 nodal d-wave state will be competitive and become the ground state [Fig. <ref>]. This state is characterized by a singlet spinon pairing with the pairing function Δ_⟨ ij ⟩ = Δ_⟨ ji ⟩ and its magnitude Δ_ nd = Re(Δ_d+id), and a bond independent hopping term. The identification of this state is qualitatively consistent with that in Ref.<cit.>, where it also exists in a significant area in the phase diagram. While, it has been argued that the nodal d-wave QSL is not energetically favored with the increase of system size and is not the ground state in the thermodynamic limit<cit.>. We suggest two possible reasons for this difference. One is the detail form of the J_4 term, and we adopt the same form as Ref. <cit.>. Another one is that the stability of this gapless state is sensitive to the lattice geometry. If we use the torus geometry, such as L_1,2 = 12, we always suffer dilemmas about the construction of trial many-body wave function because there are plenty of normal states for quasi-particles in the nodal d-wave state in the thermodynamic limit. In fact, it is better to use the lattice size with L_1 ≠ L_2 (such as L_1=10, L_2=11) instead of the torus geometry in practical VMC procedures, as has been noticed before<cit.>. The pairing amplitude |Δ_ nd| will fade away as the four-spin term J_4 increases. Finally, a U(1) SFS state will emerge and extends to the largest J_4 we considered, as shown in Fig. <ref>.
§.§ Effects of J_χ
As the chiral interaction term J_χ breaks the time-reversal symmetry, it is expected that it can induce or stabilize some chiral phases. Starting from the 120^∘ long-range magnetically ordered phase, we find that it is robust against a chiral interaction J_χ<0.34. In fact, it is found that the strength |M⃗| of the 120^∘ order decreases a little with the increase of J_χ, as we have checked with the lattice size L_1,2=6, 12, 18 and 24. In this region, the chiral order parameter |χ| = |<S⃗_⃗i⃗·S⃗_⃗j⃗×S⃗_⃗k⃗>| defined as the expectation averaged over all triangles (i, j and k are the three vertices of each elementary triangle) is also found to be zero. When J_χ 0.35, the CSL phase will be energetic favorite and become dominant, as shown in Fig. <ref> where the boundary between 120^∘ order and CSL is determined by comparing the energy of the two phases. Associated with this transition, the chiral order parameter |χ| shows a step-like rise from zero to a finite value as shown in Fig. <ref> (J_4=0). Compared to the result obtained with only the J_4 term in the last subsection, we suggest that the 120^∘ magnetic order is more stable against the chiral interaction.
This CSL state is not only time-reversal symmetry breaking but also lattice reflection symmetry breaking. However, the combination of the two symmetries is preserved<cit.>. In this state, there exist alternating ψ and π-ψ fluxs through the down triangles and up triangles without spinon pairings (see Fig. <ref>). We find that the flux ψ increases as J_χ increases in the whole area of the CSL state in the phase diagram, consequently it leads to an increase of the energy gap of quasiparticles. According to our calculation, the gap reaches its maximum when ψ = π/2. It corresponds to the case ψ=π-ψ=π/2 so that the fluxes distribute uniformly along all triangles. Hence, the chiral interaction tends to make the alternating fluxes be homogeneous and increases the gap. As denoted in Fig. <ref>, the 120^∘ order+π flux phase has alternating 0 and π fluxs through the down triangles and up triangles. So, the CSL state inherits this flux structure and acquires a finite ψ to yield gap compared with gapless Dirac spin liquid(ψ = 0). We note that, when we get an optimized flux ψ_opt by VMC, simultaneously, another CSL with ψ^'=π-ψ_opt is degenerate with it in the thermodynamic limit, suggesting the existence of a gauge degree of freedom. Though these nonzero fluxes ψs are not equal within different areas in the CSL phase, the corresponding states exactly belong to the same type of QSLs according to the PSG classification. In other words, any two different states of this CSL phase with different fluxes can be interconverted by a local unitary operation without gap closing<cit.>. More intrinsically, they possess the same topological structure protected by the sharing PSG. So, the CSLs with different ψs have the same total Chern number C_total=2 with high accuracy as shown in Fig. <ref>. Finally, we must emphasize the total Chern number in Eq. <ref> includes two kinds (spin-up and spin-down) of spinons and two periods for the spin operators, which results in a fractional quantized Chern number C = 1/2<cit.>.
§.§ Interplay between J_χ and J_4
Let us first look at the effects of the four-site ring-exchange interaction on the CSL induced by the chiral spin interaction. The U(1) CSL without spinon pairings is stability against a small J_4 interaction, but will transit into the chiral Z_2 d+id-wave QSL state with the increase of J_4. This chiral state is not only TRSB but also lattice reflection symmetry breaking, but the combination of the two symmetries is preserved<cit.> as the same as U(1) CSL. The d+id QSL has the characteristic spinon pairing function whose phases show different distributions on different bonds as illustrated in the inset of Fig. <ref>, but its hopping term is uniform along different bonds. As discussed above, the ψ in the alternating ψ and π-ψ fluxes in the CSL state increases with J_χ, and tends to a homogeneous distribution of fluxes. This relative homogeneous distribution is more compatible with the uniform distribution of the hopping term in the d+id QSL, which might be beneficial to the transition into the d+id QSL. Hence, the region of the CSL state in the phase diagram becomes narrow with J_χ, as can be clearly seen in Fig. <ref>.
Then, we turn to discuss the instability with respect to the J_χ term of the four phases found with J_4≠ 0 in the Sec. <ref>. We find that the two QSLs with gap nodes in the spinon pairing function are fragile to the chiral spin interaction, while the U(1) SFS state is more stable. Noticeably, all the three QSLs will eventually give way to the same phase, namely the d+id QSL. For the 120^∘ magnetically ordered phase, it also transits into the d+id QSL when J_4 exceeds the triple point value in the phase diagram. Due to the existence of a large spinon Fermi surface in the U(1) SFS, a notable J_χ is needed to destabilize it, for example, J_χ≈ 0.6 is required for J_4=0.3. Hence, the U(1) SFS is favored and stabilized by large J_4 terms. In the meantime, the region of the U(1) SFS is also narrowed by J_χ. Therefore, we find that the d + id QSL occupies the largest region in the phase diagram with the cooperation of the J_χ and J_4 interactions [see Fig. <ref>]. As mentioned in Ref. <cit.>, the third-neighbor AFM Heisenberg interaction is also suggested to be able to expand the insignificant region of the chiral d + id QSL state. Our results show that the chiral interaction is an alternative to accomplish that.
Now, let us further discuss the detail of the instabilities of the two QSLs with gap nodes (nodal d-wave and QBT) with respect to the chiral interaction. The nodal d-wave QSL is found to be unstable into the d+id QSL with any finite J_χ with our numerical calculations, so its region in the phase diagram in Fig. <ref> is denoted as a blue line. When the chiral interaction J_χ is turned on, the QBT state with gapless quadratic band touching also opens gaps at the quadratic touching point k⃗ = 0 and another two Dirac points (K = (1/3, 1/3), K^' = (2/3, 2/3), here we adopt the reciprocal bases of the primitive cells as denoted in Fig. <ref>) by acquiring a nonzero hopping term. However, Fig. <ref> shows that its quasiparticle dispersion is quite different from that of the chiral d+id QSL as mentioned above. On the other hand, though its dispersion looks to be quite similar to that of the QBT [see Fig. <ref>], it has small gaps at those k⃗ points, so its low temperature properties will show differences with the QBT. Therefore, we dub it the proximate QBT, though it shares the same PSG with the chiral Z_2 d+id QSL. In fact, we find a strong competition between the PQBT state and the Z_2 d+id state in this region, and it exhibits as the existence of two local minima in the energy curve corresponding to these two states. To characterize the two states, we define a parameter of α =arctan (| Δ|/t), where | Δ| represent the amplitude of spinon pairing term and t is the hopping one. A typical energy curve as a function of α for J_χ=0.05 and J_4=0.17 is presented in Fig. <ref>. The minimum at α=0.495π comes from the PQBT state, which deviates slightly from the value 0.5π of the QBT state after acquiring a small nonzero t. And another one at α=0.05π corresponds to d+id QSL state. For J_χ=0.05 and J_4=0.17, the local minimum at α=0.495π has a lower energy, so the PQBT state is energetically favorable. Thus, we show that the chiral interaction induces two stable states when starting from the QBT, one is the PQBT and the other is the d+id QSL. Firstly, the PQBT state has a lower energy, but the energy difference between them is reduced with J_χ. So, the two state will have the same energy at the critical value. We collect these critical values and plot them as an orange dashed line in the phase diagram Fig. <ref>. Above this line, the system enters into the d+id QSL. We note that there is another difference between these two states, i.e., the chiral order parameter |χ| approaches to the saturate value after crossing this line, but increases smoothly in the crossover PQBT region, as can be seen in Fig. <ref>. While, it has a step-like rise when starting from the nodal d-wave state as shown with J_4=0.2, or from the U(1) SFS with J_4=0.24.
When turning on the spin chiral interaction, one will expect it induces nonzero Chern numbers. To calculate the Chern number, we compact the triangular lattice on the torus with θ_1 and θ_2 the fluxes through the two holes [see Fig. <ref>]. We find nonzero Berry phases as a function of θ_1 and θ_2, which are presented in Fig. <ref>. For the three chiral states including the CSL, d+id QSL and PQBT, their Berry phase in the θ_1 and θ_2 plane exhibits different distribution. This difference is expected to be reflected as different temperature dependences of the thermal Hall effect, as it depends strongly on the momentum dependence of the Berry curvature<cit.>. Though they have different distribtion of the Berry phase, we find that they have the same total Chern number 2 within our numerical errors, see Fig. <ref>.
To interpret the global topological structure, we calculate the GSDs of the two gapped chiral QSLs, namely the CSL and d+id QSL. The detail calculations can been found in the Appendix A. We obtain GSD = 2 for a U(1) CSL with ψ=π/2 and the d + id QSL. So, both of two chiral states support semionic topological excitations<cit.>. Combining with the total Chern number, we infer that both of the two chiral spin liquids are Kalmeyer-Laughlin state with the same filling factor ν=1/2<cit.>. While, they are two different types of chiral spin liquids protected by different PSGs<cit.>.
§ CONCLUSIONS
In summary, we have investigated the interplay between the chiral interaction J_χ and four-site ring-exchange J_4 in the triangular J_1 Heisenberg model by use of the variational Monte Carlo techniques. We map a detail J_χ-J_4 phase diagram, in which the long-range magnetic 120^∘ order and five quantum disordered phases are identified, the latters include two chiral spin liquids with nontrivial Chern numbers and three achiral ones. The J_4 term alone induces QBT, nodal d-wave and U(1) SFS QSLs with its progressive increase. Among them, the nodal d-wave spin liquid is destabilized into the Z_2 d + id spin liquid with any finite J_χ within our numerical calculation. And the U(1) spin liquid is more robust and turns to the chiral Z_2 d + id spin liquid above critical J_χ values. In particular, we find a crossover region between the QBT spin liquid for J_χ=0 and the Z_2 d + id spin liquid once introducing the J_χ term. This proximate QBT state defined in this crossover region differs from the QBT spin liquid in that it acquires a nonzero hopping term and opens gaps at the quadratic touching point and two Dirac points. In this region, both the proximate QBT and d + id spin liquid are stable solutions and compete with each other, while the former is energetically favored. With the further increase of J_χ, the system will give up its preference of the proximate QBT state and enter into the Z_2 d + id spin liquid with a more favorable energy. For the small J_χ and J_4, the 120^∘ magnetically ordered state is dominant. This phase is fully gapped but topological trivial (ground-state degeneracy GSD = 1). The J_χ term will eventually destabilize the 120^∘ order and prefers the U(1) CSL, and this U(1) CSL will also transit into the Z_2 d + id spin liquid with the increase of the J_4 term. These results show that the Z_2 d + id spin liquid occupies the largest region in the J_χ-J_4 phase diagram due to the interplay between the chiral interaction J_χ and four-spin term J_4. Finally, we also show that U(1) CSL, proximate QBT state and the Z_2 d + id QSL are topological nontrivial states with Chern number C = 2 and ground-state degeneracy GSD = 2.
We would like to thank Q.-H. Wang and Z.-X. Liu for many helpful and valuable discussions. This work was supported by National Key Projects for Research and Development of China (Grant No. 2021YFA1400400) and the National Natural Science Foundation of China (No. 92165205).
§ CALCULATION OF THE GROUND-STATE DEGENERACY
Firstly, we compact the triangular lattice on a torus, see Fig. <ref>. In the thermodynamic limit, it does not cost any energy to insert a global π flux into any of the two holes in the torus. In practice, this process is equivalent to changing periodic boundary condition of the mean-field Hamiltonian to anti-periodic one. From this, we can construct four mean-field ground states |G_±,±^mf⟩, where ± denotes the boundary conditions for the directions of a⃗_1,2, in detail, + means periodic boundary condition and the - means the anti one. Then, we apply a Gutzwiller projection to these mean-field ground states to obtain physical wave functions |Ψ_±,±⟩ as the same as that in Sec. <ref>. After these previous preparations, we can obtain the overlap (or density) matrix with the element given by
𝒪_ij = ⟨Ψ_i|Ψ_j⟩/√(⟨Ψ_i|Ψ_i⟩⟨Ψ_j|Ψ_j⟩),
where i,j ∈{++, +-, -+, --}, and we emphasize that the normalization is necessary. In general, 𝒪 is a 4×4 matrix, but in some cases not all four states are constructed under the Gutzwiller projection; see the supplemental material of Ref. <cit.>. In practice, a larger gap of the chosen mean-field state is preferable in this procedure, as it weakens the finite-size effect. After the construction, we diagonalize the overlap matrix to obtain four eigenvalues (or apply a singular value decomposition); the number of significant eigenvalues equals the number of linearly independent states, which is exactly our final target, the ground-state degeneracy.
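As a concrete illustration of this last step, the following minimal Python sketch (not the authors' code; the overlap entries are placeholders standing in for the Monte Carlo estimates described above) diagonalizes a 4×4 overlap matrix and counts the significant eigenvalues to obtain the GSD.

import numpy as np

def ground_state_degeneracy(overlap, tol=0.1):
    # Count eigenvalues of the normalized overlap matrix that are significant
    # relative to the largest one; this equals the number of linearly
    # independent projected states, i.e. the GSD.
    overlap = 0.5 * (overlap + overlap.conj().T)   # symmetrize against sampling noise
    eigvals = np.linalg.eigvalsh(overlap)
    return int(np.sum(eigvals > tol * eigvals.max()))

# Example: the four trial states pair up into two nearly parallel doublets.
O = np.array([[1.00, 0.95, 0.00, 0.00],
              [0.95, 1.00, 0.00, 0.00],
              [0.00, 0.00, 1.00, 0.95],
              [0.00, 0.00, 0.95, 1.00]])
print(ground_state_degeneracy(O))   # -> 2, i.e. GSD = 2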
|
http://arxiv.org/abs/2306.02267v1
|
20230604054732
|
Proteus: Simulating the Performance of Distributed DNN Training
|
[
"Jiangfei Duan",
"Xiuhong Li",
"Ping Xu",
"Xingcheng Zhang",
"Shengen Yan",
"Yun Liang",
"Dahua Lin"
] |
cs.DC
|
[
"cs.DC"
] |
Proteus: Simulating the Performance of Distributed DNN Training
Jiangfei Duan^† Xiuhong Li^♯ Ping Xu^ Xingcheng Zhang^♯
Shengen Yan^ Yun Liang^ Dahua Lin^†♯
[.3em]
The Chinese University of Hong Kong^† Shanghai AI Laboratory^♯
Peking University^ SenseTime Research^
===========================================================================================================================================================================================================================
DNN models are becoming increasingly larger to achieve unprecedented accuracy, and the accompanying increased computation and memory requirements necessitate the employment of massive clusters and elaborate parallelization strategies to accelerate DNN training. In order to better optimize the performance and analyze the cost, it is indispensable to model the training throughput of distributed DNN training. However, complex parallelization strategies and the resulting complex runtime behaviors make it challenging to construct an accurate performance model. In this paper, we present Proteus, the first standalone simulator to model the performance of complex parallelization strategies through simulation execution. Proteus first models complex parallelization strategies with a unified representation named Strategy Tree. Then, it compiles the strategy tree into a distributed execution graph and simulates the complex runtime behaviors, comp-comm overlap and bandwidth sharing, with a Hierarchical Topo-Aware Executor (HTAE).
We finally evaluate Proteus across a wide variety of DNNs on three hardware configurations.
Experimental results show that Proteus achieves 3.0% average prediction error and preserves order for training throughput of various parallelization strategies. Compared to state-of-the-art approaches, Proteus reduces prediction error by up to 133.8%.
Deep neural networks (DNNs), distributed training, parallelism, performance modeling, simulation.
§ INTRODUCTION
In recent years, progressively larger DNN models have continued to break predictive accuracy records <cit.>.
As these models grow, they become increasingly expensive to train in both computation and memory. To train DNN models efficiently, large GPU clusters and sophisticated parallelization strategies are employed to accelerate the training process <cit.>.
For example, NVIDIA trained an 8.3 billion parameters language model on 512 GPUs with expert-designed hybrid data and model parallelism <cit.>.
Since the training performance (throughput) of a DNN highly depends on its parallelization strategy, a natural question to ask is whether we can accurately model the performance of any parallelization strategy on a specified cluster.
Modeling the performance of a parallelization strategy is crucial for performance optimization and
analysis.
1) Knowing the performance of a parallelization strategy can guide our optimization. A performance model can
be leveraged to locate the bottleneck of a parallelization strategy during manual optimization
and to compare different parallelization strategies in automated parallelization systems <cit.>.
2) Because implementing a parallelization strategy on current deep learning frameworks <cit.> is error-prone, labor-intensive, and resource-intensive, an accurate performance model can save considerable effort and resources in evaluating it.
3) Predicting the performance of a parallelization strategy in advance can help analyze cloud service budgets, such as how many machine hours or nodes to buy, without requiring GPU resources, thereby saving computing resources.
Plenty of performance modeling approaches have been proposed to predict the performance of DNN models, but none of them scale beyond hybrid data and model parallelism.
Most recent efforts to model the performance of DNN models are constrained to the single-GPU scenario. For example, various analytical models <cit.> built from hardware metrics and learning-based models <cit.> that learn from runtime statistics have been presented to study the performance of GPU kernels.
In multi-GPU scenario, prior works <cit.> build analytical or profiling-based performance models for different DNN layers and predict training performance by summing up the computation and communication time of each layer. These approaches focus on a small subset of parallelization strategies and are not applicable to emerging parallelization strategies.
Some automated parallelization approaches <cit.> also build performance models for distributed DNN training. For example, FlexFlow <cit.> customizes a simulator to evaluate parallelization strategies in the SOAP space. However, these works aim at searching for the optimal parallelization strategy of a DNN model rather than at accurate performance modeling of general parallelization strategies. Their usability and scalability are greatly limited by the non-programmability of parallelization strategies and the small strategy space.
We find two main challenges that hinder us from constructing accurate performance models for distributed DNN training. One challenge is how to model complex parallelization strategies.
Since different parallelization strategies have distinct computation and memory consumption characteristics, handcrafted strategies composed of various parallelization strategies at different levels are designed to accelerate DNN training, especially for large DNN models <cit.>. For example, Megatron-LM combines recomputation <cit.> with hybrid pipeline, data, and model parallelism to train large transformer models <cit.>.
The other challenge is how to model complex runtime behaviors.
The underlying assumption of prior works <cit.> is that the cost of a single operator only depends on its input and output tensor shapes; this assumption does not hold for complex parallelization strategies. During runtime, communication operators can be overlapped with computation operators to hide the cost of gradient synchronization, and communication operators in different communication groups may share bandwidth resources. Such optimization or sharing is not free: it increases the cost of these operators and thus decreases the throughput of the entire DNN model. Therefore, an accurate performance model should explicitly capture both the optimizations of complex parallelization strategies and the overhead incurred by complex runtime behaviors.
To address these challenges, we present Proteus, a standalone simulation framework that aims at accurately modeling the training throughput of distributed DNN training. <Ref> highlights the advantages of Proteus over existing approaches.
First, we introduce a hierarchical tree structure, Strategy Tree, to model complex parallelization strategies.
We find parallelization strategies can be classified into operator- and subgraph-level strategies as a DNN graph is often divided into disjoint subgraphs, each of which is assigned to a group of devices.
The strategies at operator-level specify how the operators and tensors are split and mapped to devices, while the strategies at subgraph-level indicate how to schedule subgraphs (details refer to <Ref>).
The hierarchical structure of the strategy tree provides a unified representation for parallelization strategies at different levels and enables Proteus to model the huge and complex strategy space.
Second, we propose HTAE (Hierarchical Topo-Aware Executor) to simulate complex runtime behaviors, which are ignored in prior works. We observe that runtime behaviors that have a significant impact on performance can be categorized into two types, comp-comm overlap and bandwidth sharing. HTAE simulates the schedule of subgraphs and operators to detect runtime behaviors during execution and adapts operator cost according to detailed cluster topology, thus capturing complex runtime behaviors of different operators.
Given a DNN model and a strategy tree, Proteus automatically compiles them into a distributed execution graph by splitting operators and tensors and inserting inferred collective communication operators. Afterwards, Proteus predicts the training throughput and OOM (Out-Of-Memory) errors by mimicking the schedule and execution of the execution graph while considering the cluster topology.
In summary, we propose Proteus, to the best of our knowledge the first standalone simulator that enables simulating complex parallelization strategies through fine-grained scheduling and simulated execution. We make the following contributions in building Proteus:
* We classify parallelization strategies into operator- and subgraph-level and formulate a unified parallelization space with Strategy Tree to model complex parallelization strategies.
* We identify two types of runtime behaviors that affect performance: comp-comm overlap and bandwidth sharing, and introduce Hierarchical Topo-Aware Executor to dynamically detect and model such behaviors.
* We evaluate Proteus across a wide variety of DNNs on 3 hardware configurations.
Experiments show that Proteus achieves 3.0% average prediction error and preserves order for training throughput of various parallelization strategies. Compared to state-of-the-art approaches, Proteus reduces prediction error by up to 133.8%.
§ BACKGROUND: DISTRIBUTED DNN TRAINING
DNNs are commonly represented as computation graphs in modern DL frameworks <cit.>, with nodes as operators and edges as tensors.
Parallelizing a DNN involves parallelizing elements in the computation graph which can be categorized into two levels of parallelization strategies.
Operator-Level Strategy
Operators and tensors can be partitioned for parallel execution on multiple devices. This partitioning is operator-level, which can be further categorized into computation parallelization and memory optimization based on corresponding computation and memory aspects.
Computation Parallelization is achieved by partitioning the parallelizable dimensions of operators. Typically, we consider every unique dimension occurring in the input or output tensors as a parallelizable dimension. We describe the different parallelization strategies taking the linear operator in Figure [fig:psdemo]<ref>a as an example:
output(b, s, o) = ∑_hinput(b, s, h) × weight(o, h).
There are 4 unique dimensions: b (batch), s (sequence), o (output_channel), h (hidden/reduction).
Data parallelism is the most widely used parallelization strategy, which splits batch dimension (b) and replicates weight on all devices.
Model parallelism divides the operator in o or h dimension thus partitioning weight into different parts and each part is trained on a dedicated device.
Hybrid parallelism combines both data and model parallelism to partition operators.
Op shard is a general parallelization strategy that exploits the power of partitioning arbitrary dimensions of (b, s, o, h).
Figure [fig:psdemo]<ref>a shows an example configuration that shards an operator along the b and h dimensions. The partition describes how to parallelize different dimensions, and map specifies how to place each partition. The operator is split into 4 (|partition|) parts, each assigned to a GPU. As the reduction dimension (h) is partitioned, the operator produces 4 partial output tensors, which must be aggregated to produce the final output tensor.
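To make the consequence of partitioning the reduction dimension concrete, the following NumPy sketch (illustrative only; the tensor sizes are arbitrary and this is not Proteus code) splits the linear operator above into 2×2 shards along b and h, and checks that summing the partial outputs of the h-shards reproduces the unpartitioned result, which a real system would realize with an all-reduce.

import numpy as np

b, s, h, o = 8, 16, 32, 64
x = np.random.randn(b, s, h)            # input(b, s, h)
w = np.random.randn(o, h)               # weight(o, h)
full = np.einsum('bsh,oh->bso', x, w)   # reference output(b, s, o)

x_b = np.split(x, 2, axis=0)                         # partition b into 2
x_shards = [np.split(xb, 2, axis=2) for xb in x_b]   # partition h into 2
w_shards = np.split(w, 2, axis=1)                    # weight follows the h split

out_parts = []
for bi in range(2):                      # each (b-shard, h-shard) pair is one "GPU"
    partials = [np.einsum('bsh,oh->bso', x_shards[bi][hi], w_shards[hi])
                for hi in range(2)]
    out_parts.append(sum(partials))      # aggregate partial outputs over h
assert np.allclose(np.concatenate(out_parts, axis=0), full)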
In this paper, Proteus targets modeling the performance of general op shard, unlike prior works that focus on data and model parallelism <cit.>. SOAP <cit.> partitions operators along the b, s, o dimensions and is a sub-space of op shard.
Memory Optimization. All dimensions of a tensor are parallelizable. Partitioning a tensor is achieved by splitting along its dimensions similar to partitioning operators.
ZeRO <cit.> and activation partitioning <cit.> partition tensors along the first dimension and map each part to a device to reduce redundancy. They can be combined with parallelization along other dimensions. Figure [fig:psdemo]<ref>b shows an example that partitions the o (ZeRO) and h dimensions.
Proteus explicitly defines a parallelization strategy for each tensor in a DNN model. Figure [fig:psdemo]<ref>a shows that splitting an operator also creates an implicit parallelization strategy for its input and output tensors. An inconsistency between the implicit and explicit strategies incurs additional communication (e.g., weight needs to be transformed from the strategy of Figure [fig:psdemo]<ref>b to the implicit strategy of Figure [fig:psdemo]<ref>a).
Subgraph-Level Strategy
A subgraph is composed of operators and tensors with dependencies. Parallelization strategies that describe the schedule of subgraphs are called subgraph-level strategies, including pipeline parallelism and recomputation, which balance training throughput and memory footprint by parallelizing subgraph computations.
Pipeline parallelism divides a computation graph into disjoint parts and assigns each part to a device group. It splits a batch input data into multiple micro-batches to exploit parallelism <cit.>.
Figure [fig:psdemo]<ref>c shows a pipeline example with n_micro_batch micro-batches for each subgraph. To reduce memory consumption, forward and backward micro-batches are interleaved <cit.>, and max_ongoing_micro_batch limits the number of forward micro-batches in flight.
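As a rough guide to how these schedule parameters trade off, the usual back-of-the-envelope estimate for a synchronous pipeline (a textbook approximation, not Proteus's cost model) gives the idle "bubble" fraction from the number of stages and micro-batches:

def pipeline_bubble_fraction(n_stages, n_micro_batch):
    # Idle fraction of an ideal synchronous pipeline with equal-cost stages.
    return (n_stages - 1) / (n_micro_batch + n_stages - 1)

print(pipeline_bubble_fraction(4, 8))    # ~0.27: over a quarter of the time is idle
print(pipeline_bubble_fraction(4, 32))   # ~0.09: more micro-batches shrink the bubble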
Recomputation (Activation Checkpointing) <cit.> is a schedule that trades computation for memory. It frees forward subgraph activations after execution and recomputes when intermediate activations are required in backward pass.
Parallelization strategies at the operator and subgraph levels can be combined, forming a complex parallelization space. Proteus leverages this hierarchical property to model complex parallelization strategies.
§ OVERVIEW
Figure <ref> shows an overview of Proteus, a simulation framework for accurate performance modeling of distributed DNN training. Since the performance of distributed DNN training highly depends on the parallelization strategy, explicitly modeling a parallelization strategy is the first step for a performance model. Proteus uses a unified representation, the strategy tree, to model complex parallelization strategies (<Ref>).
Proteus's execution graph compiler (<Ref>) bridges the gap between high-level parallelization strategies and low-level execution. It takes the DNN model and strategy tree as inputs and compiles DNN layers into tensors and operators. The compiler automatically inserts communication operators between tensors and generates a distributed execution graph.
In <Ref>, we first discuss the characterization and impact of runtime behaviors, and then introduce Proteus's hierarchical topo-aware executor, which simulates the schedule of the execution graph and predicts the training throughput. During simulation, it adapts operator costs, which are first obtained with the op estimator (<Ref>), according to the cluster configuration and dynamic runtime behaviors.
§ STRATEGY TREE
This section introduces strategy tree, a unified representation to model complex parallelization strategies, including both operator- and subgraph-level strategies.
<Ref> shows a DNN model and its computation graph and strategy tree. Computation graph is commonly used to represent data dependencies between operators and tensors. However, the non-hierarchical structure makes it hard to distinguish different subgraphs, thus failing to model subgraph-level strategies (e.g. the subgraph-level strategy for a and b can only be assigned when we manually split the graph or assign strategies for all tensors and operators).
To solve this problem, Proteus models parallelization strategies with a hierarchical tree structure. Tensors, operators, and subgraphs are the basic elements of a parallelization strategy, regardless of data dependencies. The tree structure provides a good abstraction for modeling strategies at different levels and for capturing the nested structure between various elements. Proteus models tensors and operators in leaf nodes and subgraphs in non-leaf nodes. The leaf and non-leaf nodes make it easier to determine operator- and subgraph-level strategies. A complete parallelization strategy consists of parallel configurations on all tree nodes. We further discuss and compare the strategy tree with prior works in <ref>.
§.§ Tree Representation
A leaf node models the forward and backward computation graphs of a DNN layer. As illustrated in Figure [fig:stree]<ref>c, a leaf node captures all the forward and backward operators and the tensors they produce and consume. Proteus models tensors by their shape, and operators by a set of unique parallelizable dimensions extracted from their input and output tensors (<Ref>).
A non-leaf node models a subgraph, which represents the forward and backward computation graphs of several DNN layers. It is possible to group different layers to create various subgraphs, which is a natural hierarchical structure. For example in Figure [fig:stree]<ref>c, the layers d, e and d, e, f constitute two non-leaf nodes respectively at different levels, and the root node models the whole DNN model.
§.§ Parallel Configuration
The parallel configuration defines how different components are parallelized.
Operator-level strategies are specified with computation and memory configs in leaf nodes, while subgraph-level strategies are assigned to non-leaf nodes with schedule configs.
Computation/Memory Config.
Computation (memory) configs are assigned to operators (tensors) in leaf nodes. A config contains two aspects: partition and map. The partition (𝒫) defines the degree of parallelism in each dimension and splits the operator (tensor) into |𝒫| disjoint parts. Each part is mapped to one or more devices as defined by map, i.e., it is sharded on one device or replicated on a device group. In Figure [fig:stree]<ref>c, the computation config partitions the B and K dimensions of the forward operator into 2 and 4 parts, respectively, and shards each part on one GPU device.
The memory config defines the real placement of a tensor. With this separate memory config, Proteus is able to express the space of memory optimizations.
Schedule Config.
Schedule config specifies the subgraph-level strategy of a subgraph; only one config is needed for each non-leaf node due to the dual structure of the forward and backward subgraphs. The config has three aspects (Figure [fig:stree]<ref>c): n_micro_batch denotes the number of micro-batches consumed by the subgraph; since executing forward micro-batches increases memory consumption, max_ongoing_micro_batch limits the maximum number of forward micro-batches executed before each corresponding backward micro-batch at any time; and recomputation indicates whether to use activation checkpointing.
Non-leaf nodes on the tree have a schedule config that is propagated from the parent node unless explicitly defined by the user. In particular, the schedule config on a non-leaf node is independent of the configs on leaf nodes. Strategy propagation will be discussed in <Ref>.
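For illustration, the three kinds of configs and the tree nodes that carry them could be written in Python roughly as follows; the field names mirror the paper's terminology, but the concrete classes and API are assumptions rather than Proteus's actual interface.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CompMemConfig:
    partition: Dict[str, int]     # dimension name -> degree of parallelism
    map: List[List[int]]          # map[i] = devices holding the i-th part

@dataclass
class ScheduleConfig:
    n_micro_batch: int = 1
    max_ongoing_micro_batch: int = 1
    recomputation: bool = False

@dataclass
class TreeNode:
    name: str
    children: List["TreeNode"] = field(default_factory=list)
    schedule: ScheduleConfig = field(default_factory=ScheduleConfig)    # non-leaf nodes
    op_configs: Dict[str, CompMemConfig] = field(default_factory=dict)  # leaf nodes

# Example leaf: shard the forward operator over B (x2) and K (x4) onto 8 GPUs.
leaf = TreeNode("linear", op_configs={
    "fwd": CompMemConfig(partition={"B": 2, "K": 4}, map=[[g] for g in range(8)])})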
§.§ Discussion and Comparison
Existing frameworks <cit.> rely on the computation graph to unify the parallelization space, including computation, memory, and schedule. However, Proteus focuses on accurate performance modeling for a given parallelization strategy, whereas these frameworks are designed for automated parallelization and search over all combinations of subgraphs. Explicitly modeling parallelization strategies is necessary for performance modeling, but it is quite hard to specify a complex parallelization strategy for a DNN model in prior automated parallelization works.
GSPMD <cit.> develops powerful programming APIs to specify parallelization strategies for different DNN layers, but subgraph-level strategies are only supported for identical DNN blocks with the tailored vectorized_map API. Furthermore, changing the parallelization strategy takes great effort to rewrite the model. Our strategy tree unifies parallelization strategies at different levels and decouples the parallelization strategy from the model expression. By adjusting the strategy tree instead of the DNN model, Proteus can change the parallelization strategy of a DNN.
§ EXECUTION GRAPH COMPILER
This section describes Proteus's execution graph compiler, which connects high-level parallelization strategies with low-level execution. Given a strategy tree, the compiler creates a distributed execution graph by splitting tensors and computation operators and inserting communication operators and control dependencies.
§.§ Graph Compilation
<Ref> illustrates the workflow of execution graph compiler.
Proteus first divides the DNN model into disjoint subgraphs based on its DevGroup, which defines a set of devices, in order to parallelize the computations of different micro-batches. The DevGroup of a tree node is composed of all of its children's DevGroups. Proteus splits all divisible nodes in breadth-first order from the root node; a node can only be divided if its children nodes share no devices. Figure [fig:compiler]<ref>b shows the DevGroups of three nodes. The DevGroup of node S2 is "gpu 0-7" because layers d and e are partitioned and mapped to these devices. The root node R is divided into 2 subgraphs since nodes S1 and S2 share no devices and are themselves not divisible.
Proteus then compiles each subgraph into a forward and a backward execution subgraph, as shown in Figure [fig:compiler]<ref>c.
Tensors and operators are split into small partitions such that each partition resides on and is executed by one device.
Communication operators, data and control dependencies are added to ensure the computational equivalence.
Data dependency.
Each tensor and operator has a parallel configuration that defines the partition and mapping, as discussed in <Ref>. Due to the data dependencies between tensors and operators, Proteus can infer a parallel configuration for each input and output tensor of an operator. If the two parallel configurations of a tensor are inconsistent, Proteus automatically inserts communication operators via strategy transformation to adjust the parallel configuration; otherwise it reuses the original tensor partitions. Figure [fig:compiler]<ref> shows the strategy transformation of tensor y_1. Layer e partitions y_1 into 2 parts, each replicated on 4 GPUs, but layer d partitions y_1 into 8 partial tensors, each on 1 GPU. Proteus adds communication operators for y_1 in the execution subgraph of S2 to handle this inconsistency.
For a subgraph with recomputation enabled, Proteus compiles it into two forward and one backward execution subgraphs and adjusts the data dependencies accordingly. The backward subgraph depends on one forward subgraph (i.e., the recomputation subgraph), while the other forward subgraph can be released immediately after execution (e.g., S1 in Figure <ref>c).
Control dependency.
Control dependencies are inserted between execution subgraphs to follow the training schedule defined by the schedule configs in non-leaf nodes. First, the forward subgraphs are control dependent on their corresponding backward subgraphs to limit peak memory consumption. Second, Proteus also adds control dependencies for recomputation subgraphs such that they are executed immediately before the corresponding backward subgraphs. In Figure [fig:compiler]<ref>c, node S1 has two forward subgraphs and one of them is control dependent on the backward subgraph of node S2.
§.§ Strategy Transformation
Strategy transformation converts tensors to the desired parallel configurations with appropriate communication primitives. Proteus automatically infers collective communication primitives (e.g., All-Reduce <cit.>), failing over to point-to-point communication if necessary. Since different primitives feature distinct communication patterns, Proteus uses pattern matching to infer collective communication operators and the corresponding communication groups.
Proteus currently supports the communication primitives commonly used in modern DL frameworks <cit.>. Prior works propose new communication primitives, such as hierarchical reduce <cit.> and CollectivePermute <cit.>, to accelerate communication. Proteus can be extended by including more candidate patterns.
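As a rough illustration of such pattern matching, the sketch below encodes a few common rules for choosing a collective from the mismatch between a tensor's produced and required configurations; the rules and the function itself are simplifying assumptions, not Proteus's actual matcher.

def infer_collective(produced_partial, produced_split_dim, required_split_dim):
    # produced_partial: the producer split a reduction dimension, so the shards
    # are partial results that must still be summed.
    if produced_partial and required_split_dim is None:
        return "all_reduce"        # sum partials, full tensor on every device
    if produced_partial:
        return "reduce_scatter"    # sum partials, keep only the local shard
    if produced_split_dim is not None and required_split_dim is None:
        return "all_gather"        # concatenate shards on every device
    if produced_split_dim != required_split_dim:
        return "all_to_all"        # re-shard along a different dimension
    return None                    # configurations already match, no communication

print(infer_collective(True, None, None))   # -> all_reduce
print(infer_collective(False, 0, None))     # -> all_gather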
§ HIERARCHICAL TOPO-AWARE EXECUTOR
This section describes Proteus's Hierarchical Topo-Aware Executor (HTAE), which simulates the schedule and runtime behaviors of a distributed execution graph and predicts the training throughput.
§.§ Performance Characterization
Before introducing the design of HTAE, we first characterize the performance of distributed DNN training using the example of <Ref>, which illustrates the execution timeline and runtime behaviors of <Ref>. The forward and backward execution subgraphs are interleaved, and the execution of S1 and S2 is parallelized on different GPU groups. Operators in Figure [fig:compiler]<ref>c consist of three types that can be executed simultaneously: computation, feature communication, and gradient communication operators; they are scheduled into three streams following data dependencies. Modeling the training performance amounts to modeling the execution timeline, including the schedule, computation, and communication.
Runtime Behavior.
Prior work <cit.> assumes that the operator cost is fixed and focuses on modeling the performance of a single operator; the training speed of a DNN is then the summation of all operator costs. However, runtime behavior, which is ignored in prior work, has emerged as a critical aspect determining training performance under today's sophisticated parallelization strategies and optimizations. It is crucial to model runtime behaviors for an accurate performance predictor since they can affect the execution cost of operators. <Ref> shows that ignoring runtime behaviors results in large prediction errors on a cluster with 32 GPUs.
We find that major runtime behaviors can be categorized into two types. First, bandwidth sharing describes scenarios in which different communication operators compete for bandwidth (<Ref>182,184). Second, comp-comm overlap refers to the overlap of computation and communication operators (<Ref>183). In addition, different computation operators can be overlapped on a single GPU <cit.>; Proteus does not model this scenario since it is rarely used in distributed DNN training. <Ref>184 shows an example of bandwidth sharing by mapping gradient communication operators to a single-node machine. The gradient communication includes 4 groups indicated by the GPU colors: {{0, 4}, {1, 5}, {2, 6}, {3, 7}}, and their costs rise due to the competition for the available bandwidth of scarce physical links.
Proteus is the first system to study and model runtime behaviors for distributed DNN training. Unlike prior analytical frameworks <cit.>, Proteus predicts training performance via simulation since runtime behaviors only occur during execution.
§.§ Simulator Design
<Ref> shows the design of Proteus's two-level simulator, HTAE. The first level is the scheduler, which consists of several second-level executors. Different schedulers can time-share executors. To predict the performance, HTAE first obtains single-operator costs with the op estimator (<Ref>) and then simulates the schedule of subgraphs and operators to discover runtime behaviors. During simulation, operator costs are adapted to model runtime behaviors considering the cluster topology.
Cluster Configuration.
The cluster configuration describes the topology of the training cluster. There are two types of configurable parameters in the device topology. For the intra-node topology, we can set the device type, device memory, the number of devices per node, and the intra-node connection, which describes the physical connections among devices (e.g., GPUs and CPUs). For the inter-node topology, we can specify the number of nodes and the inter-node connection bandwidth.
Scheduler.
Each scheduler is assigned several forward and backward execution subgraphs, and it interleaves their execution based on data and control dependencies to balance micro-batch parallelism and peak memory consumption. The scheduler first selects the current execution state (forward or backward), then chooses one subgraph from the available dependency-free execution subgraphs. It alternates between different backward subgraphs and prefers forward subgraphs that enable backward execution. After determining the subgraph to be executed, the scheduler dispatches initial tasks to the executors and begins executing.
Executor.
The executor schedules the execution of operators for a subgraph and records the peak memory consumption. Each executor contains a computation queue, a feature communication queue, and a gradient communication queue (<Ref>). Operators in different queues can be executed concurrently, thus achieving comp-comm overlap. By separating the feature and gradient communication queues, Proteus makes it possible to overlap feature and gradient communication and to avoid feature communication being blocked by gradient communication.
The executor executes computation and communication alternately. It pops a computation operator from the queue for computation execution, and pops one feature and one gradient communication operator at the same time for communication execution. These operators are first sent to the runtime behavior detector to check for runtime behaviors and are executed afterwards. During execution, the operator costs are accumulated to track the time cost of each queue separately. The execution of operators decreases the dependency counts of their consumers, and dependency-free operators are put into the corresponding queues.
Memory Consumption.
Proteus predicts whether a parallelization strategy will run out of memory (OOM) by monitoring the memory consumption of the executors. During execution, each operator reads and writes some tensors. HTAE monitors the executor memory footprint by recording these tensor activities. When a new tensor is written, HTAE tracks its memory consumption and reference counter. The memory is released when the reference counter decreases to zero.
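A minimal sketch of this reference-counting scheme (an assumed illustration, not actual Proteus internals): memory is charged when a tensor is first written and released once its last consumer has read it, while the peak footprint is tracked along the way.

class MemoryTracker:
    def __init__(self):
        self.live = {}      # tensor id -> [bytes, remaining consumers]
        self.current = 0
        self.peak = 0

    def write(self, tid, nbytes, n_consumers):
        self.live[tid] = [nbytes, n_consumers]
        self.current += nbytes
        self.peak = max(self.peak, self.current)

    def read(self, tid):
        entry = self.live[tid]
        entry[1] -= 1
        if entry[1] == 0:                 # last consumer: free the tensor
            self.current -= entry[0]
            del self.live[tid]

tracker = MemoryTracker()
tracker.write("act0", 4 << 20, n_consumers=2)   # a 4 MiB activation read twice
tracker.read("act0"); tracker.read("act0")
print(tracker.peak, tracker.current)            # peak 4 MiB, current back to 0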
§.§ Modeling Runtime Behaviors
As previously discussed, the operator cost may change during execution due to complex runtime behaviors. The runtime behavior detector checks runtime behaviors for all operators and adapts operator costs accordingly. To enable efficient detection, it keeps execution history records of the different execution streams.
Bandwidth Sharing.
There are two types of bandwidth sharing. One is inside a group of gradient or feature communication operators (<Ref>184), and the other is between a group of gradient and feature communication operators (<Ref>182). These operators transfer data within different device groups and compete for the bandwidth of shared physical links. To model this behavior, Proteus assumes that concurrent operators fairly share the bandwidth of a physical link and detects how many communication groups share a link during execution. We find this assumption generally holds in practice.
Proteus first checks bandwidth sharing for feature and gradient communication operators separately by mapping communication groups to the cluster topology. <Ref> shows the hierarchy of physical links in a cluster, and Proteus detects bandwidth sharing following this hierarchy, starting from NIC bandwidth sharing. Each communication group is split into sub-groups such that each sub-group is composed of devices in the same node. The groups that span more than one node (i.e., that consist of at least two sub-groups) fairly share the bandwidth of the NIC. Proteus checks all physical links from top to bottom.
Proteus finally detects the intersection effects between feature and gradient communication groups. Since the communication volumes and operations of these groups may differ, Proteus only adapts operator costs for the overlapped parts. The detection algorithm is the same as in the first step, except that the communication groups include both feature and gradient communication.
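Under the fair-sharing assumption, the cost adaptation amounts to dividing the link bandwidth by the number of contending groups on the bottleneck link, as in the following sketch (the volume, latency, and bandwidth numbers are placeholders).

def shared_comm_cost(volume_bytes, link_bandwidth_Bps, n_sharing_groups, latency=0.0):
    # Each of the n groups detected on the link sees 1/n of its bandwidth.
    effective_bw = link_bandwidth_Bps / max(1, n_sharing_groups)
    return latency + volume_bytes / effective_bw

# Example: four gradient groups contending for one 12 GB/s inter-socket link.
print(shared_comm_cost(256e6, 12e9, 4))   # ~0.085 s, versus ~0.021 s uncontended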
Comp-Comm Overlap.
In distributed DNN training, computation and gradient communication operators may overlap, because gradient communication operators can be launched asynchronously, whereas feature communication operators usually block the computation stream. To detect comp-comm overlap, Proteus keeps the start and end times of operators. When executing a computation (communication) operator, Proteus considers it overlapped if it finds a gradient communication (computation) operator in execution.
Proteus introduces an overlap factor γ to model the effect of comp-comm overlap. When an operator is found to be overlapped, its cost is increased by γ. This is motivated by the observation that operator costs increase by about the same percentage on average during overlap.
To obtain γ, we profile the speed of the backward pass with and without overlap in data-parallel training, and γ is set to the increase ratio. As γ is fixed for a given machine type and DNN model, we can obtain γ in advance at little cost. Prior works <cit.> also try to model the effect of comp-comm overlap. For example, Pollux <cit.> introduces a learnable parameter η to model the data-parallel training speed by combining the computation time T_grad and the gradient communication time T_sync as (T_grad^η + T_sync^η)^1/η. These works target data-parallel training, and we find that our simple formulation works well even with complex parallelization strategies (discussed later in <Ref>).
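The two formulations can be contrasted with a small sketch; the way the γ variant combines the two streams (taking the slower of the two inflated streams) and the numeric values of γ and η are illustrative assumptions, not values from the paper.

def overlapped_time_gamma(t_comp, t_comm, gamma=0.15):
    # Every overlapped operator's cost grows by gamma; the overlapped phase then
    # lasts as long as the slower of the two inflated streams.
    return max(t_comp * (1 + gamma), t_comm * (1 + gamma))

def overlapped_time_pollux(t_grad, t_sync, eta=1.6):
    return (t_grad**eta + t_sync**eta) ** (1.0 / eta)

print(overlapped_time_gamma(0.12, 0.08))    # ~0.14 s
print(overlapped_time_pollux(0.12, 0.08))   # ~0.16 s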
§ IMPLEMENTATION
Proteus is implemented as a standard Python library (∼9K LoC). Proteus follows the PyTorch <cit.> API to build models.
Construction of Strategy Tree.
DNN models consist of modules that perform operations on data. A module is roughly equivalent to a DNN layer, and different modules can be nested together to construct a complex module. Proteus exploits the module structure to create a strategy tree from top to bottom in a depth-first manner. Non-leaf nodes correspond to complex modules and leaf nodes correspond to layers. The root node represents the entire DNN model. Proteus tracks the construction of modules/layers and creates the corresponding non-leaf/leaf nodes on the tree. The construction of the strategy tree preserves the structure of the DNN model and makes it easier to specify parallelization strategies for nodes.
Strategy Propagation.
Proteus develops a strategy propagation algorithm to ease the programming difficulty of parallelization strategies. For a complete parallelization strategy, programmers are required to specify parallel configurations for critical leaf and non-leaf nodes. Proteus then propagates the parallel configurations to the other nodes.
Proteus first propagates parallel configurations from top to bottom following the tree structure. The schedule config of a non-leaf node is inherited from its parent node unless explicitly defined. Proteus then propagates parallel configurations among leaf nodes following data dependencies. The propagation proceeds in topological order and includes two steps: forward graph propagation and backward graph propagation. Proteus infers the memory config of a tensor according to its producer's computation config, and infers the computation config of an operator according to its inputs' memory configs.
Op Estimator.
The op estimator predicts the cost of all operators in the distributed execution graph. It contains a profiler and an analyzer. Since Proteus focuses on modeling runtime behaviors, the profiler obtains the time cost of computation operators by profiling them on the target hardware, which is inexpensive. There are many performance models for estimating single-operator cost <cit.>; Proteus can be extended to adopt such models.
The analyzer estimates communication cost with the α-β model <cit.>. It estimates the bandwidth of a communication group according to the detailed cluster topology. When estimating the time cost of a collective operation, a correction factor is applied to revise the bandwidth to reflect the characteristics of different collective operations.
To simplify the implementation, we utilize the NCCL topology detection algorithm <cit.> to find all communication channels of a communication group, and its bandwidth is the sum over these channels.
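For concreteness, an α-β style estimate of a ring all-reduce, including the usual 2(n-1)/n traffic correction, could look as follows; the latency and bandwidth values are placeholders, and Proteus's actual correction factors may differ.

def allreduce_time(message_bytes, n_ranks, alpha=10e-6, bandwidth_Bps=25e9):
    # Ring all-reduce: 2(n-1) steps, each rank moving about message/n bytes per step.
    if n_ranks <= 1:
        return 0.0
    traffic = 2.0 * (n_ranks - 1) / n_ranks * message_bytes
    return alpha * 2 * (n_ranks - 1) + traffic / bandwidth_Bps

print(allreduce_time(100e6, 8))   # a 100 MB gradient bucket across 8 ranks, ~7 ms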
§ EVALUATION
§.§ Methodology
All the experiments are conducted with PyTorch 1.8 (CUDA 10.1, cuDNN 7.6.5 and NCCL 2.7.8).
Benchmarks. Table <ref> summarizes the six representative DNN models that we use as benchmarks; they are widely used in prior works <cit.>. We evaluate throughput with a synthetic dataset, which ignores data loading latency.
Modeling real-world datasets is orthogonal to Proteus.
Hardware Configurations. Proteus is evaluated across three different hardware configurations. <Ref> summarizes the cluster type and size, and the intra- and inter-node connections.
§.§ Simulation Accuracy
To evaluate Proteus across a wide variety of parallelization strategies, we evaluate each model with two popular parallelization strategies. One is the most commonly used parallelization strategy (S1); the other is an optimal expert-designed parallelization strategy (S2). Since Proteus is aimed at accurate performance modeling rather than discovering new parallelization strategies, and implementing entirely new parallelization strategies is difficult, we do not evaluate less commonly used parallelization strategies. However, the parallelization strategies we tested already cover both operator- and subgraph-level strategies.
Since the most commonly used parallelization strategy is data parallelism, we use data parallelism or its variants as S1 for the six DNNs. To enable data-parallel training of a large model, S1 combines memory optimization (ZeRO <cit.>) and recomputation to evaluate GPT-1.5B. The expert-designed parallelization strategies (S2) exhibit more diverse patterns. ResNet50 and Inception_V3 partition data and output channels, while VGG19 and GPT-2 partition data, output channels, and reduction dimensions for computation parallelization. The S2 of GPT-1.5B combines op shard, pipeline, and recomputation. DLRM partitions its huge embedding table in S2 to optimize the memory footprint.
<Ref> shows the simulation results of various DNN models on two hardware configurations (HC1 and HC2), and <Ref> displays the overall results on all three hardware configurations. Proteus delivers an accurate performance model and achieves a 3.0% average prediction error for training throughput. Out of 180 simulation results, Proteus's estimation of OOM is incorrect in only 2 cases (blue box in <Ref>).
Proteus is the first standalone simulator that targets simulating complex parallelization strategies. The most related and representative cost model and simulator are Paleo <cit.> and FlexFlow <cit.>, respectively. Paleo <cit.> is an analytical cost model. It delivers high prediction errors on a single GPU (ResNet50: 59.8%, Inception_V3: 40%) and does not support the GPT and DLRM models or complex parallelization strategies. Therefore, we do not examine Paleo further or show its results. FlexFlow <cit.> is an automated parallelization framework over the SOAP space. To compare generated parallelization strategies, it internally tailors a simulator to simulate the training throughput.
To compare Proteus and FlexFlow, we re-implement its simulator as FlexFlow-Sim. To support realistic simulation, FlexFlow-Sim inserts collective communication operators for strategy transformation instead of the point-to-point operators described in the FlexFlow paper. The comparison results are shown in <Ref> and <Ref>. The average prediction error of FlexFlow is 12.4%, which is 9.4% higher than that of Proteus. Among all test cases, the maximum error is 14.7% and 137.9% for Proteus and FlexFlow, respectively. Of the 180 training tasks in total, FlexFlow fails to estimate the performance of one third of them. <Ref> also shows that the prediction error of FlexFlow-Sim grows as the number of GPUs increases.
We find that Proteus outperforms FlexFlow mainly in three aspects. 1) Proteus can be applied to a much larger parallelization strategy space thanks to the strategy tree abstraction. 2) FlexFlow ignores complex runtime behaviors and thus cannot accurately model the training throughput. 3) FlexFlow's communication bandwidth estimation ignores the fine-grained cluster topology. For example, FlexFlow delivers a high prediction error for the DLRM model, where communication dominates.
§.§ Parallelization Strategy Comparison
Comparing the training throughput of various parallelization strategies is an important problem in designing and understanding high-performance parallelization strategies. In this section, we use GPT-2 as the benchmark because GPT is the most popular and widely used model for studying all kinds of parallelization strategies, and these strategies generalize to other models. In these experiments, we select 4 parallelizable dimensions across operator- and subgraph-level strategies and represent a parallelization strategy as DP × MP × PP (n_micro_batch), where DP, MP, and PP are the degrees of data, model, and pipeline parallelism. The global batch size is 8 and 64 for HC1 and HC2, respectively.
Table <ref> shows the simulation results of GPT-2 with various parallelization strategies. For these parallelization strategies, Proteus can accurately model the performance and achieves a 3.2% average prediction error. Order preservation is an important property for strategy comparison, and Proteus maintains the rank of diverse parallelization strategies. <Ref> demonstrates that HC2 prefers data-parallel training, since hybrid model parallelism shares the bandwidth of the IB network and pipeline parallelism introduces bubbles during training, both of which decrease the training throughput. The simulation results also confirm that pipeline efficiency can be improved by injecting more micro-batches. HC1 consists of a single NUMA node, and 2-way model parallelism can fully utilize the QPI links between the two CPU sockets, thus achieving the highest throughput.
§.§ Runtime Behavior Ablation Study
To study the effectiveness of the runtime behavior detector, we test the throughput of VGG19 and GPT-2 on HC1 and HC2 with different parallelization strategies. For VGG19, we use a batch size of 32 per GPU with data-parallel training. For GPT-2, the global batch size is 8 and 64 on HC1 and HC2 with hybrid op shard and pipeline parallelism. <Ref> shows that the runtime behavior detector greatly improves the simulation accuracy of throughput (average error: Plain 14.4% vs. Proteus 2.4%). VGG19 is very sensitive to comp-comm overlap, hence introducing the overlap factor significantly improves the prediction accuracy. As there is no bandwidth sharing in data-parallel training, the prediction error of VGG19 remains unchanged after adding bandwidth sharing. In contrast, GPT-2 is more sensitive to bandwidth sharing, which is especially common in complex parallelization strategies; therefore, its throughput prediction error decreases remarkably after modeling bandwidth sharing.
§.§ Simulation Cost
To evaluate the simulation cost of Proteus, we measure the time it takes to evaluate VGG19 and GPT-2 on HC2 with data parallelism. Since the cost of computation operators can be profiled in advance, we only evaluate the time cost of the execution graph compiler and HTAE. <Ref> demonstrates that Proteus takes seconds to simulate the performance of DNNs on a large number of GPUs. We believe this cost is acceptable for evaluating a specified parallelization strategy, since Proteus provides a fine-grained simulation without requiring GPU resources. In contrast, profiling a general parallelization strategy takes a lot of effort and GPU resources.
§ RELATED WORK
Handcrafted Parallelization Strategies are designed to optimize distributed DNN training.
One weird trick <cit.> introduces model parallelism for linear layers to accelerate AlexNet. Megatron-LM <cit.> presents an expert-designed strategy to expedite transformer models by combining data, model, and pipeline parallelism. DeepSpeed <cit.> introduces ZeRO to reduce the memory footprint by partitioning model states across data-parallel processes. Recomputation <cit.> utilizes tensor rematerialization to decrease memory consumption. Proteus can model the performance of these manually designed strategies, thus assisting their analysis and optimization.
Automatic Parallelization.
FlexFlow <cit.> and Tofu <cit.> propose the SOAP and partition-n-reduce spaces, respectively, to parallelize operators. GSPMD <cit.> introduces a more general parallelization space by partitioning all parallelizable dimensions of tensors.
DAPPLE <cit.> and PipeDream <cit.> optimize parallelization strategies in data and pipeline parallelization space. Alpa <cit.>
combines data, model, and pipeline parallelism and proposes an inter-operator and intra-operator
parallelization space. Existing automatic approaches focus on exploring the space of computation parallelization, while our work introduces a unified parallelization strategy space considering computation parallelization and memory optimization at the operator level and scheduling at the subgraph level.
Performance Model.
Previous works propose analytical performance models for DNN training on a single GPU <cit.>
or on multiple GPUs with data parallelism or hybrid data and model parallelism <cit.>. These approaches are not applicable to increasingly complex training workloads and strategies. FlexFlow <cit.> introduces a simulation model to estimate the cost of a SOAP strategy, but it is not designed to capture the cost of general strategies and runtime behaviors. Proteus aims to provide a general simulation-based performance model for various parallelization strategies.
§ CONCLUSION
In this work, we present Proteus to simulate the performance of distributed DNN training strategies on diverse clusters. Proteus features a strategy tree to model a unified parallelization strategy space and a hierarchical topo-aware executor to accurately model the runtime behaviors of computation and communication operators. Proteus can be leveraged to analyze and optimize the performance of general parallelization strategies.
|
http://arxiv.org/abs/2306.04468v1
|
20230607144005
|
Refined parameters of the HD 22946 planetary system and the true orbital period of planet d
|
[
"Z. Garai",
"H. P. Osborn",
"D. Gandolfi",
"A. Brandeker",
"S. G. Sousa",
"M. Lendl",
"A. Bekkelien",
"C. Broeg",
"A. Collier Cameron",
"J. A. Egger",
"M. J. Hooton",
"Y. Alibert",
"L. Delrez",
"L. Fossati",
"S. Salmon",
"T. G. Wilson",
"A. Bonfanti",
"A. Tuson",
"S. Ulmer-Moll",
"L. M. Serrano",
"L. Borsato",
"R. Alonso",
"G. Anglada",
"J. Asquier",
"D. Barrado y Navascues",
"S. C. C. Barros",
"T. Bárczy",
"W. Baumjohann",
"M. Beck",
"T. Beck",
"W. Benz",
"N. Billot",
"F. Biondi",
"X. Bonfils",
"M. Buder",
"J. Cabrera",
"V. Cessa",
"S. Charnoz",
"Sz. Csizmadia",
"P. E. Cubillos",
"M. B. Davies",
"M. Deleuil",
"O. D. S. Demangeon",
"B. -O. Demory",
"D. Ehrenreich",
"A. Erikson",
"V. Van Eylen",
"A. Fortier",
"M. Fridlund",
"M. Gillon",
"V. Van Grootel",
"M. Güdel",
"M. N. Günther",
"S. Hoyer",
"K. G. Isaak",
"L. L. Kiss",
"M. H. Kristiansen",
"J. Laskar",
"A. Lecavelier des Etangs",
"C. Lovis",
"A. Luntzer",
"D. Magrin",
"P. F. L. Maxted",
"C. Mordasini",
"V. Nascimbeni",
"G. Olofsson",
"R. Ottensamer",
"I. Pagano",
"E. Pallé",
"G. Peter",
"G. Piotto",
"D. Pollacco",
"D. Queloz",
"R. Ragazzoni",
"N. Rando",
"H. Rauer",
"I. Ribas",
"N. C. Santos",
"G. Scandariato",
"D. Ségransan",
"A. E. Simon",
"A. M. S. Smith",
"M. Steller",
"Gy. M. Szabó",
"N. Thomas",
"S. Udry",
"J. Venturini",
"N. Walton"
] |
astro-ph.EP
|
[
"astro-ph.EP"
] |
MTA-ELTE Exoplanet Research Group, 9700 Szombathely, Szent Imre h. u. 112, Hungary, [email protected] ELTE Gothard Astrophysical Observatory, 9700 Szombathely, Szent Imre h. u. 112, Hungary Astronomical Institute, Slovak Academy of Sciences, 05960 Tatranská Lomnica, Slovakia Physikalisches Institut, University of Bern, Gesellsschaftstrasse 6, 3012 Bern, Switzerland Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA Dipartimento di Fisica, Universita degli Studi di Torino, via Pietro Giuria 1, I-10125, Torino, Italy Department of Astronomy, Stockholm University, AlbaNova University Center, 10691 Stockholm, Sweden Instituto de Astrofisica e Ciencias do Espaco, Universidade do Porto, CAUP, Rua das Estrelas, 4150-762 Porto, Portugal Astrophysics Group, Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE, UK Astrobiology Research Unit, Université de Liège, Allée du 6 Août 19C, B-4000 Liège, Belgium Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, A-8042 Graz, Austria Observatoire Astronomique de l'Université de Genève, Chemin Pegasi 51, Versoix, Switzerland Centre for Exoplanet Science, SUPA School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews KY16 9SS, UK Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, 8042 Graz, Austria INAF, Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, 35122 Padova, Italy Department of Astrophysics, University of Vienna, Tuerkenschanzstrasse 17, 1180 Vienna, Austria Institute of Planetary Research, German Aerospace Center (DLR), Rutherfordstrasse 2, 12489 Berlin, Germany European Space Agency (ESA), European Space Research and Technology Centre (ESTEC), Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands Instituto de Astrofisica de Canarias, 38200 La Laguna, Tenerife, Spain Institut d'Estudis Espacials de Catalunya (IEEC), 08034 Barcelona, Spain Depto. de Astrofisica, Centro de Astrobiologia (CSIC-INTA), ESAC campus, 28692 Villanueva de la Cañada (Madrid), Spain Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France Université de Paris, Institut de physique du globe de Paris, CNRS, F-75005 Paris, France Departamento de Fisica e Astronomia, Faculdade de Ciencias, Universidade do Porto, Rua do Campo Alegre, 4169-007 Porto, Portugal Center for Space and Habitability, University of Bern, Gesellschaftsstrasse 6, 3012, Bern, Switzerland Leiden Observatory, University of Leiden, PO Box 9513, 2300 RA Leiden, The Netherlands Department of Space, Earth and Environment, Chalmers University of Technology, Onsala Space Observatory, 43992 Onsala, Sweden Space sciences, Technologies and Astrophysics Research (STAR) Institute, Université de Liège, 19C Allée du 6 Août, B-4000 Liège, Belgium Departamento de Astrofisica, Universidad de La Laguna, 38206 La Laguna, Tenerife, Spain Institut de Ciencies de l'Espai (ICE, CSIC), Campus UAB, Can Magrans s/n, 08193 Bellaterra, Spain Aix Marseille Univ, CNRS, CNES, LAM, 38 rue Frédéric Joliot-Curie, 13388 Marseille, France IMCCE, UMR8028 CNRS, Observatoire de Paris, PSL Univ., Sorbonne Univ., 77 av. 
Denfert-Rochereau, 75014 Paris, France Max Planck Institute for Extraterrestrial Physics, Giessenbachstrasse 1, 85748 Garching bei München, Germany Dipartimento di Fisica e Astronomia "Galileo Galilei", Universita degli Studi di Padova, Vicolo dell'Osservatorio 3, 35122 Padova, Italy INAF, Osservatorio Astrofisico di Catania, Via S. Sofia 78, 95123 Catania, Italy Astrophysics Group, Keele University, Staffordshire, ST5 5BG, United Kingdom Admatis, 5. Kandó Kálmán Street, 3534 Miskolc, Hungary Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, 1121 Budapest, Konkoly Thege Miklós út 15-17, Hungary Institut d'astrophysique de Paris, UMR7095 CNRS, Université Pierre & Marie Curie, 98bis blvd. Arago, 75014 Paris, France Centre for Mathematical Sciences, Lund University, Box 118, 22100 Lund, Sweden Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, United Kingdom Center for Astronomy and Astrophysics, Technical University Berlin, Hardenberstrasse 36, 10623 Berlin, Germany Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, United Kingdom INAF, Osservatorio Astrofisico di Torino, Via Osservatorio, 20, 10025 Pino Torinese TO, Italy Department of Space and Climate Physics, Mullard Space Science Laboratory, Holmbury St Mary RH5 6NT, UK Brorfelde Observatory, Observator Gyldenkernes Vej 7, DK-4340 Tølløse, Denmark
Multi-planet systems are important sources of information regarding the evolution of planets. However, the long-period planets in these systems often escape detection. These objects in particular may retain more of their primordial characteristics compared to close-in counterparts because of their increased distance from the host star. HD 22946 is a bright (G=8.13 mag) late F-type star around which three transiting planets were identified via Transiting Exoplanet Survey Satellite (TESS) photometry, but the true orbital period of the outermost planet d was unknown until now.
We aim to use the Characterising Exoplanet Satellite (CHEOPS) space telescope to uncover the true orbital period of HD 22946d and to refine the orbital and planetary properties of the system, especially the radii of the planets.
We used the available TESS photometry of HD 22946 and observed several transits of the planets b, c, and d using CHEOPS. We identified two transits of planet d in the TESS photometry, calculated the most probable period aliases based on these data, and then scheduled CHEOPS observations. The photometric data were supplemented with ESPRESSO (Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations) radial velocity data. Finally, a combined model was fitted to the entire dataset in order to obtain final planetary and system parameters.
Based on the combined TESS and CHEOPS observations, we successfully determined the true orbital period of the planet d to be 47.42489 ± 0.00011 d, and derived precise radii of the planets in the system, namely 1.362 ± 0.040 R_⊕, 2.328 ± 0.039 R_⊕, and 2.607 ± 0.060 R_⊕ for planets b, c, and d, respectively. Due to the low number of radial velocities, we were only able to determine 3σ upper limits for these respective planet masses, which are 13.71 M_⊕, 9.72 M_⊕, and 26.57 M_⊕. We estimated that another 48 ESPRESSO radial velocities are needed to measure the predicted masses of all planets in HD 22946. We also derived stellar parameters for the host star.
Planet c around HD 22946 appears to be a promising target for future atmospheric characterisation via transmission spectroscopy. We can also conclude that planet d, as a warm sub-Neptune, is very interesting because there are only a few similar confirmed exoplanets to date. Such objects are worth investigating in the near future, for example in terms of their composition and internal structure.
Refined parameters of the HD 22946 system
Z. Garai et al.
Refined parameters of the HD 22946 planetary system
and the true orbital period of planet dThis article uses data from CHEOPS programmes CH_PR110048 and CH_PR100031. Photometry and radial velocity data of HD 22946 are available at the CDS via anonymous ftp to <cdsarc.u-strasbg.fr> (130.79.128.5) or via <http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/>.
Z. Garai<ref>,<ref>,<ref>,
H. P. Osborn<ref>,<ref>,
D. Gandolfi<ref>,
A. Brandeker<ref>,
S. G. Sousa<ref>,
M. Lendl<ref>,
A. Bekkelien<ref>,
C. Broeg<ref>,<ref>,
A. Collier Cameron<ref>,
J. A. Egger<ref>,
M. J. Hooton<ref>,<ref>,
Y. Alibert<ref>,
L. Delrez<ref>,<ref>,
L. Fossati<ref>,
S. Salmon<ref>,
T. G. Wilson<ref>,
A. Bonfanti<ref>,
A. Tuson<ref>,
S. Ulmer-Moll<ref>,<ref>,
L. M. Serrano<ref>,
L. Borsato<ref>,
R. Alonso<ref>,<ref>,
G. Anglada<ref>,<ref>,
J. Asquier<ref>,
D. Barrado y Navascues<ref>,
S. C. C. Barros<ref>,<ref>,
T. Bárczy<ref>,
W. Baumjohann<ref>,
M. Beck<ref>,
T. Beck<ref>,
W. Benz<ref>,<ref>,
N. Billot<ref>,
F. Biondi<ref>,<ref>,
X. Bonfils<ref>,
M. Buder<ref>,
J. Cabrera<ref>,
V. Cessa<ref>,
S. Charnoz<ref>,
Sz. Csizmadia<ref>,
P. E. Cubillos<ref>,<ref>,
M. B. Davies<ref>,
M. Deleuil<ref>,
O. D. S. Demangeon<ref>,<ref>,
B.-O. Demory<ref>,
D. Ehrenreich<ref>,
A. Erikson<ref>,
V. Van Eylen<ref>,
A. Fortier<ref>,<ref>,
M. Fridlund<ref>,<ref>,
M. Gillon<ref>,
V. Van Grootel<ref>,
M. Güdel<ref>,
M. N. Günther<ref>,
S. Hoyer<ref>,
K. G. Isaak<ref>,
L. L. Kiss<ref>,
M. H. Kristiansen<ref>,
J. Laskar<ref>,
A. Lecavelier des Etangs<ref>,
C. Lovis<ref>,
A. Luntzer<ref>,
D. Magrin<ref>,
P. F. L. Maxted<ref>,
C. Mordasini<ref>,
V. Nascimbeni<ref>,
G. Olofsson<ref>,
R. Ottensamer<ref>,
I. Pagano<ref>,
E. Pallé<ref>,<ref>,
G. Peter<ref>,
G. Piotto<ref>,<ref>,
D. Pollacco<ref>,
D. Queloz<ref>,<ref>,
R. Ragazzoni<ref>,<ref>,
N. Rando<ref>,
H. Rauer<ref>,<ref>,
I. Ribas<ref>,<ref>,
N. C. Santos<ref>,<ref>,
G. Scandariato<ref>,
D. Ségransan<ref>,
A. E. Simon<ref>,
A. M. S. Smith<ref>,
M. Steller<ref>,
Gy. M. Szabó<ref>,<ref>,
N. Thomas<ref>,
S. Udry<ref>,
J. Venturini<ref>,
N. Walton<ref>
Received September 15, 1996; accepted March 16, 1997
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Multi-planet systems are important from many viewpoints. Not only are they susceptible of relatively straightforward confirmation as bona fide planets <cit.>, they also allow intra-planetary comparisons to be made for planets which formed under the same conditions; see for example <cit.>. The majority of the known multi-planet systems were found by space-based exoplanet transit surveys. This is because, while giant hot-Jupiters are relatively easy to observe with ground-based photometry, the detection of smaller planets, for example, Earths, super-Earths, and sub-Neptunes, which are typically found in multi-planet systems, requires the precise photometry of space-based observatories such as TESS <cit.>.
Mutual gravitational interactions in some multi-planet systems can provide constraints on the planet masses through transit time variations (TTVs); see for example <cit.>. Alternatively, radial velocity (RV) observations are needed to put constraints on the masses of planets <cit.>. Even where masses cannot be determined, mass upper limits can provide proof that the studied objects are of planetary origin; see for example <cit.>, <cit.>, or <cit.>. Mass determination can then help constrain the internal structure of the planet bodies, and break degeneracies in atmospheric characterisation follow-up studies. If precise planet radii are also determined from transit photometry, this allows the planet internal density to be calculated and the planetary composition to be estimated; see for example <cit.> and <cit.>. Precise planetary parameters also allow the planets to be put in the context of population trends, such as the radius <cit.> and density <cit.> valleys.
Long-period planets in multiple-planet systems often escape detection, especially when their orbital periods are longer than the typical observing duration of photometric surveys (e.g. ∼ 27 d for TESS). However, detecting such planets is also important. For example, the increased distance from their host stars means that, when compared with close-in planets, they may retain more of their primordial characteristics, such as unevaporated atmospheres <cit.> or circumplanetary material <cit.>. Due to the limited observing duration of the TESS primary mission, which observed the majority of the near-ecliptic sectors for only 27 days, planets on long periods produce only single transits. However, thanks to its extended mission, TESS re-observed the same fields two years later, and in many cases was able to re-detect a second transit; see for example <cit.>. These `duotransit' cases require follow-up in order to uncover the true orbital period due to the gap, which causes a set of aliases, P ∈ (t_tr,2 - t_tr,1)/(1,2,3,…,N_max), where t_tr,1 and t_tr,2 are the first and the second observed mid-transit times, respectively. The longest possible period is the temporal distance between the two mid-transit times, P_max = (t_tr,2 - t_tr,1), and the shortest possible period is bounded by the non-detection of subsequent transits.
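The alias relation above can be made concrete with a short Python sketch (our own illustration, not the MonoTools implementation; the two mid-transit times are those reported for HD 22946d later in this paper, and the N = 15 identification follows from the period eventually confirmed):

```python
# Minimal sketch (not the MonoTools code): enumerate the period aliases of a
# 'duotransit', P = (t_tr2 - t_tr1) / N for N = 1 ... N_max, keeping only
# periods not already excluded by the non-detection of further transits (p_min).
import numpy as np

def duotransit_aliases(t_tr1, t_tr2, n_max=20, p_min=0.0):
    baseline = t_tr2 - t_tr1                    # time between the two observed transits
    periods = baseline / np.arange(1, n_max + 1)
    return periods[periods > p_min]

# With the HD 22946d mid-transit times quoted in the TESS data section (BJD_TDB),
# the alias list contains the eventually confirmed period of ~47.42 d (N = 15):
print(duotransit_aliases(2458425.1657, 2459136.5357, n_max=20, p_min=20.0))
```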
In addition to ground-based telescopes, the CHEOPS space observatory <cit.> can be used to follow-up duotransit targets and to determine their true orbital periods and other characteristics. For example, the periods of two young sub-Neptunes orbiting BD+40 2790 (TOI-2076, TIC-27491137) were found using a combination of CHEOPS and ground-based photometric follow-up observations <cit.>. Furthermore, these combined observations uncovered the TTVs of two planets, and also improved the radius precision of all planets in the system. CHEOPS observations also recovered orbital periods of duotransits in HIP 9618 <cit.>, TOI-5678 <cit.>, and HD 15906 <cit.> systems. In the present study, we investigated the HD 22946 system with a similar aim. HD 22946 (TOI-411, TIC-100990000) is a bright (G = 8.13 mag) late F-type star with three transiting planets. The planetary system was discovered and validated only recently by <cit.>; hereafter C22. The authors presented several parameters of the system, including the radii and mass limits of the planets. They found that planet b is a super-Earth with a radius of 1.72 ± 0.10 R_⊕, while planets c and d are sub-Neptunes with radii of 2.74 ± 0.14 R_⊕ and 3.23 ± 0.19 R_⊕, respectively. The 3σ upper mass limits of planets b, c, and d were determined —based on ESPRESSO spectroscopic observations (see Sect. <ref>)— to be 11 M_⊕, 14.5 M_⊕, and 24.5 M_⊕, respectively. As TESS recorded several transits during observations in sector numbers 3, 4, 30, and 31, the discoverers easily derived the orbital periods of the two inner planets, b and c, which are about 4.040 d and 9.573 d, respectively. The orbital period of planet d was not found by C22. The authors determined its presence through a single transit found in sector number 4 and obtained its parameters from this single transit event. Its depth and the host brightness make planet d easily detectable with CHEOPS, and therefore HD 22946 was observed several times with this instrument within the Guaranteed Time Observations (GTO) programmes CH_PR110048 and CH_PR100031, with the main scientific goals being to uncover the true orbital period of planet d and to refine the parameters of the HD 22946 system based on CHEOPS and TESS observations via joint analysis of the photometric data, supplemented with ESPRESSO spectroscopic observations of HD 22946.
The present paper is organised as follows. In Sect. <ref>, we provide a brief description of observations and data reduction. In Sect. <ref>, we present the details of our data analysis and our first results, including stellar parameters, period aliases of HD 22946d from the TESS data, and a search for TTVs. Our final results based on the combined TESS, CHEOPS, and RV model are described and discussed in Sect. <ref>. We summarise our findings in Sect. <ref>.
§ OBSERVATIONS AND DATA REDUCTION
§.§ TESS data
HD 22946 was observed during four TESS sectors: numbers 3, 4, 30, and 31 (see Table <ref>). The time gap between the two observing seasons is almost two years. These TESS data were downloaded from the Mikulski Archive for Space Telescopes[See <https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html>.] in the form of Presearch Data Conditioning Simple Aperture Photometry (PDCSAP) flux. These data, containing 61 987 data points, were obtained from two-minute integrations and were initially smoothed by the PDCSAP pipeline. This light curve is subjected to more treatment than the simple aperture photometry (SAP) light curve, and is specifically intended for detecting planets. The pipeline attempts to remove systematic artifacts while keeping planetary transits intact. The average uncertainty of the PDCSAP data points is 310 ppm.
During these TESS observing runs, 23 transits of planet b were recorded, and the transit of planet c was observed eight times in total (see more details in Table <ref>). As in C22, we also initially recognised a transit-like feature in the sector number 4 data at t_tr,1 = 2 458 425.1657 BJD_TDB through visual inspection of the light curve. Given that 65%–80% of single transits from the TESS primary mission will re-transit in the extended mission sectors <cit.>, we subsequently visually inspected the light curve once the TESS year 3 data were available and found a second dip at t_tr,2 = 2 459 136.5357 BJD_TDB in the sector number 30 data with near-identical depth and duration. Given the high prior probability of finding a second transit, the close match in transit shape between events, and the high quality of the data (i.e. minimal systematic noise elsewhere in the light curve), we concluded that this signal is a bona fide transit event and that the transits in sector numbers 4 and 30 are very likely caused by the same object, that is, by planet d.
Outliers were cleaned using a 3σ clipping, where σ is the standard deviation of the light curve. With this clipping procedure, we discarded 300 data points out of 61 987, which is ∼ 0.5% of the TESS data. Subsequently, we visually inspected the dataset in order to check the effect of the outlier removal, which we found to be reasonable. As TESS uses Barycentric TESS Julian Date (i.e. BJD_TDB - 2 457 000.0) as time stamps, in the next step we converted all TESS time stamps to BJD_TDB.
§.§ CHEOPS data
HD 22946 was observed five times with the CHEOPS space telescope. This is the first European space mission dedicated primarily to the study of known exoplanets. It consists of a telescope with a mirror of 32 cm in diameter based on a Ritchey-Chrétien design. The photometric detector is a single-CCD camera covering the wavelength range from 330 to 1100 nm with a field of view of 0.32 deg^2. The payload design and operation have been optimised to achieve ultra-high photometric stability, achieving a photometric precision of 20 ppm on observations of a G5-type star in 6 hours, and 85 ppm observations of a K5-type star in 3 hours <cit.>. The CHEOPS observations were scheduled based on the existing TESS observations of planets b and c, and mainly based on the observed transit times of planet d (see Sect. <ref>). The marginal probability for each period alias of planet d was calculated using the package (see Sect. <ref>). We were not able to observe all the highest-probability aliases, because some were not visible during the two-week period of visibility. Within the program number CH_PR110048, we therefore planned to observe the three highest-probability aliases of planet d with CHEOPS, but due to observability constraints and conflicts with other observations, only two visits[A visit is a sequence of successive CHEOPS orbits devoted to observing a given target.] of planet d aliases were scheduled. Its true orbital period was confirmed during the second observation. The remaining three visits were scheduled in the framework of the program number CH_PR100031. Based on these CHEOPS observations, three transits of planet b were recorded during visits 1, 3, and 5, the transit of planet c was observed twice during visits 2 and 4, and a single transit of planet d (in multiple transit feature with planet c) was detected during the CHEOPS visit 4. Further details about these observations can be found in Table <ref>.
From the CHEOPS detector, which has 1024 × 1024 pixels, a 200 × 200 pixel subarray is extracted around the target point-spread function (PSF), which is used to compute the photometry. This type of photometry product was processed by the CHEOPS Data Reduction Pipeline (DRP) version 13.1.0 <cit.>. It performs several image corrections, including bias-, dark-, and flat-corrections, contamination estimation, and background-star correction. The pipeline produces four different light-curve types for each visit, but we initially analysed only the decontaminated light-curve type for which the aperture radius is automatically set based on the signal-to-noise ratio (S/N). In addition to the subarrays, there are imagettes available for each exposure. The imagettes are frames of 30 pixels in radius centred on the target, which do not need to be co-added before download owing to their smaller size. We used a tool specifically developed for photometric extraction of imagettes using point-spread function photometry, called PIPE[See <https://github.com/alphapsa/PIPE>.]; see for example <cit.>. The PIPE photometry has a S/N comparable to that of the DRP photometry, but has the advantage of shorter cadence, and therefore we decided to use this CHEOPS product in this work. The average uncertainty of the PIPE data points is 160 ppm.
The CHEOPS observations were processed using the dedicated data decorrelation and transit analysis software called pycheops[See <https://github.com/pmaxted/pycheops>.] <cit.>. This package provides tools for downloading, visualising, and decorrelating CHEOPS data, fitting transits and eclipses of exoplanets, and calculating light-curve noise. We first cleaned the light curves of outlier data points using the pycheops built-in function clip_outliers, which removes outliers from a dataset by calculating the mean absolute deviation (MAD) from the light curve following median smoothing, and rejects data greater than the smoothed dataset plus the MAD multiplied by a clipping factor. A clipping factor of five was reasonable in our case, as we verified visually. With this clipping procedure, we discarded 30 data points out of 3195, which is ∼ 0.9% of the CHEOPS data. The next step was the extraction of the detrending parameters. During this procedure, the software gives a list of the parameters necessary for the detrending. The most important decorrelation is
subtraction of the roll-angle effect. In order to keep the cold plate radiators facing away from the Earth, the spacecraft rolls during its orbit. This means that the field of view rotates around the pointing direction. The target star remains stationary within typically 1 pixel, but the rotation of the field of view produces a variation of the flux contamination from nearby sources in phase with the roll angle of the spacecraft <cit.>. The extracted detrending parameters were co-fitted with the transit model (see Sect. <ref>).
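The MAD-based clipping performed by clip_outliers, as described above, can be sketched as follows (an approximation of the described procedure, not the actual pycheops implementation; the smoothing window length is an arbitrary choice):

```python
# Sketch of the clipping described above: reject points lying more than
# clip_factor * MAD above a median-smoothed version of the light curve.
import numpy as np
from scipy.ndimage import median_filter

def clip_outliers_sketch(flux, window=11, clip_factor=5.0):
    smoothed = median_filter(flux, size=window, mode='nearest')
    mad = np.mean(np.abs(flux - smoothed))          # mean absolute deviation
    keep = flux <= smoothed + clip_factor * mad      # one-sided, as described in the text
    return keep                                      # boolean mask of retained points
```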
§.§ ESPRESSO/VLT data
We acquired 14 high-resolution spectra of the host star HD 22946 using the ESPRESSO spectrograph <cit.> mounted at the 8.2 m Very Large Telescope (VLT) at Paranal Observatory (Chile). The observations were carried out between 10 February 2019 and 17 March 2019 under the observing program number 0102.C-0456 (PI: V. Van Eylen) and within the KESPRINT[See <https://kesprint.science/>.] project. We used the high-resolution (HR) mode of the spectrograph, which provides a resolving power of R ≈ 134 000. We set the exposure time to 600 s, leading to a S/N per pixel at 650 nm ranging between 120 and 243. Daytime ThAr spectra and simultaneous Fabry-Perot exposures were taken to determine the wavelength solution and correct for possible nightly instrumental drifts, respectively. We reduced the ESPRESSO spectra using the dedicated data-reduction software and extracted the RVs by cross-correlating the échelle spectra with a G2 numerical mask. We list the ESPRESSO RV measurements in Table <ref>. The average uncertainty of the RV data points is ∼ 0.00015 km s^-1.
We co-added the individual ESPRESSO spectra prior to carrying out the spectroscopic analysis presented in Sect. <ref>. To this aim, we Doppler-shifted the data to a common reference wavelength by cross-correlating the ESPRESSO spectra with the spectrum with the highest S/N. We finally performed a S/N-weighted co-addition of the Doppler-shifted spectra, while applying a sigma-clipping algorithm to remove possible cosmic-ray hits and outliers. The co-added spectrum has a S/N of ∼ 900 per pixel at 650 nm.
§ DATA ANALYSIS AND FIRST RESULTS
§.§ Stellar parameters
The spectroscopic stellar parameters (the effective temperature T_eff, the surface gravity log g, the microturbulent velocity v_mic, and the metallicity [Fe/H]; see Table <ref>) were derived using the ARES code together with a radiative transfer code, following the same methodology as described in <cit.>, <cit.>, and <cit.>. We used the latest version of the ARES code[The last version, v2, can be downloaded at <https://github.com/sousasag/ARES>.] <cit.> to measure the equivalent widths of iron lines on the combined ESPRESSO spectrum. We used a minimisation procedure to find ionisation and excitation equilibrium and converge to the best set of spectroscopic parameters. This procedure makes use of a grid of Kurucz model atmospheres <cit.> and a radiative transfer code <cit.>.
To derive the radius of the host star HD 22946, we used a Markov-Chain Monte Carlo (MCMC) modified infrared flux method. This enables us to calculate the bolometric flux using stellar atmospheric models defined by our spectral analysis to build spectral energy distributions (SEDs) that are compared with broadband fluxes and uncertainties from the most recent data releases for the following bandpasses: Gaia G, G_BP, and G_RP, 2MASS J, H, and K, and WISE W1 and W2 <cit.>. From the bolometric flux, we then determine stellar effective temperature and angular diameter; this latter is converted to a radius using the offset-corrected Gaia parallax <cit.>. We used Bayesian modeling averaging of the atlas <cit.> and phoenix <cit.> catalogues to produce a weighted averaged posterior distribution of the stellar radius in order to account for uncertainties in stellar atmospheric modelling. We find a value of R_s=1.117±0.009 R_⊙, which is in 3σ agreement with the value of 1.157 ± 0.025 R_⊙ presented by the discoverers.
We finally determined the stellar mass M_s and stellar age t_s using two different sets of stellar evolutionary models, namely PARSEC[PAdova and TRieste Stellar Evolutionary Code: <http://stev.oapd.inaf.it/cgi-bin/cmd>] v1.2S <cit.> and CLES (Code Liègeois d'Évolution Stellaire), see <cit.>. More specifically, we employed the isochrone-placement algorithm developed by <cit.> to interpolate the input parameters (T_eff, [Fe/H], R_s) within pre-computed grids of PARSEC v1.2S isochrones and tracks to derive a first pair of mass and age. A second pair of mass and age values, instead, was retrieved by inputting T_eff, [Fe/H], and R_s directly in the CLES code, which generates the best-fit stellar evolutionary track following the Levenberg-Marquadt minimisation scheme, as described in <cit.>. After carefully checking the mutual consistency of the two respective pairs of outcomes through the χ^2-based methodology presented in <cit.>, we finally merged (i.e. summed) the two M_s and t_s results and obtained M_s=1.098 ± 0.040 M_⊙ and t_s=2.5 ± 1.0 Gyr. The mass parameter value of the host star agrees within the uncertainty with the value provided in the discovery paper, which is 1.104 ± 0.012 M_⊙. However, the planet host seems to be younger than previously presented by C22. The discoverers obtained a value of 5.0 ± 1.0 Gyr. More parameter values, including from this work, are compared with the discovery-paper parameter values in Table <ref>.
§.§ Period aliases of HD 22946d from the TESS data
In order to determine each possible period alias and to schedule CHEOPS observations of planet d, we first performed a period analysis of the available TESS data. For this purpose, we used the package[See <https://github.com/hposborn/MonoTools>.] <cit.>, which is able to model transit light curves in case of multiple transits, duotransits, and monotransits, as well as multiple systems with combinations of such candidates, with both radial velocities and transit photometry. The package calculates a marginalised probability distribution across all allowed aliases for a given transit model by combining priors for each alias. The probabilities are estimated based on two major assumptions, namely that short-period orbits are highly favoured over long-period ones due to a combination of geometric probability and window function, and that planets in multi-planet systems have low eccentricities <cit.>. More details about this software can be found in <cit.>.
The TESS data described in Sect. <ref> were used during the fitting procedure using . In the case of planet b, we set as input parameters the reference mid-transit time of T_c = 2 458 385.7318 BJD_TDB, the orbital period of P_orb = 4.040330 ± 0.000010 d, the transit duration (transit width) of W = 3.4 hr, and the transit depth of D = 134 ppm. In the case of planet c, the inputs were T_c = 2 458 386.1878 BJD_TDB, P_orb = 9.573117 ± 0.000020 d, W = 3.8 hr, and D = 389 ppm. For planet d, we set as input parameters the two mid-transit times detected by TESS, namely t_tr,1 = 2 458 425.1657 BJD_TDB and t_tr,2 = 2 459 136.5357 BJD_TDB, the transit duration of W = 6.5 hr and the transit depth of D = 478 ppm. These parameters were calculated from the TESS data alone.
The orbital period aliases of planet d with a probability of p > 1% are listed in Table <ref>. The software forecasted that a transit of planet d with the orbital period alias number 2 would take place on 25 October 2021, with a mid-transit time of T_c = 2 459 513.1441 BJD_TDB. This forecasted event was observed during the third CHEOPS visit (see Table <ref>), but the expected transit of planet d did not happen; only the transit of planet b was recorded that time. After this observation, we were able to exclude the period alias of P = 41.8454 d from the list of possible aliases. The next forecast predicted a transit of planet d on 28 October 2021, with a mid-transit time of T_c = 2 459 515.9338 BJD_TDB, which means that, in this case, the alias number 4 (see Table <ref>) was preferred as its true orbital period. This forecasted event was observed with CHEOPS during its fourth visit. This time, the transit of planet d was successfully detected together with a transit of planet c, confirming that the period alias of P = 47.4248 d is the true orbital period of planet d. This result also confirms that the second transit-like feature of planet d, observed by TESS in sector number 30, was a real transit event and not an instrumental artifact as considered by C22. Alternatively, the dip observed at 2 459 136.5357 BJD_TDB was a mixture of instrumental effects and the transit of planet d. With this gathered knowledge about the true orbital period of planet d, we were able to combine CHEOPS and TESS photometric observations and RV measurements in order to improve the orbital and planetary parameters of the HD 22946 system, which were previously obtained only from the TESS and RV data by the discoverers.
§.§ CHEOPS, TESS, and RV combined model
In order to produce accurate planetary parameters for all three planets, we built a combined model using all available data, that is, TESS photometry (described in Sect. <ref>), CHEOPS photometry (described in Sect. <ref>), and ESPRESSO RVs (described in Sect. <ref>). The combined model was built using the PyMC3 package[See <https://pypi.org/project/pymc3/>.] <cit.>, which performs Hamiltonian Monte Carlo (HMC) sampling, with Keplerian orbits modelled with the exoplanet package[See <https://pypi.org/project/exoplanet/>.] <cit.>. We used Gaussian processes (GPs) to model the stellar variability present in the TESS light curve, opting for a simple harmonic oscillator (SHO) kernel <cit.> with a quality factor Q=1/√(2), as is common for quasi-periodic stellar variability. In order to speed up sampling, we binned the TESS data to 30 minute bins far from transits, keeping 2 minute data near transit. As we have reasonable prior knowledge from theoretical analyses of the expected stellar limb-darkening (LD) parameters for HD 22946, we used these as priors in the analysis. We used the quadratic LD law and interpolated tables of coefficients calculated for the TESS <cit.> and CHEOPS <cit.> passbands using the derived stellar parameters of T_eff = 6169 K and log g = 4.47 (cgs). In order to guard against systematic errors, we inflated the σ of each parameter prior to 0.1.
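As an illustration of the limb-darkening treatment described above, a minimal PyMC3 sketch could look as follows (the numerical values of u1_th and u2_th are placeholders, not the interpolated coefficients actually used in the analysis):

```python
# Hedged sketch of the LD priors: theoretical quadratic coefficients as prior
# means, with the prior width inflated to 0.1 to guard against systematic errors.
import pymc3 as pm

u1_th, u2_th = 0.32, 0.22  # placeholder values; the real ones come from the
                           # interpolated TESS/CHEOPS tables for Teff = 6169 K

with pm.Model() as model:
    u1 = pm.Normal("u1", mu=u1_th, sigma=0.1)
    u2 = pm.Normal("u2", mu=u2_th, sigma=0.1)
    # ... transit, GP, RV, and jitter components are added to the same model ...
```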
Even though the PIPE light curves for HD 22946 have fewer systematic features than the DRP light curves, they can still include flux variations due to the influence of various external factors. Therefore, we can improve the light curve by decorrelating the flux data against metadata generated for the instrument and target. To decipher which decorrelation vectors provide improvement, we ran an initial model for each CHEOPS visit using all available ancillary data – sin and cos of roll angle, background flux, x and y centroid positions, onboard temperature and time (which also fits short-timescale stellar variability). These parameters are normalised to have μ=0.0 and σ=1.0, and decorrelation parameters are given normal priors with μ=0.0 and σ set by the root-mean-square (RMS) noise for each CHEOPS visit. For each visit model, we also included parameters for any planetary transits present in order to ensure the transits would not bias the model. After HMC sampling, we assessed each decorrelation parameter using the average and standard deviations, keeping only those parameters with a Bayes Factor of BF > 1. Despite this detrending, shorter-timescale variation can also be present as a function of roll angle (φ). Pure detrending against sin and cos of roll angle removes the largest amplitude systematic trends at low frequencies. These are those closest in timescale to the transit feature, and so a simpler detrending technique for such timescales guards against over-fitting of the transit. However, the CHEOPS light curve typically also contains systematic noise correlated with roll angle that is at a lower amplitude and higher frequency. This is not therefore adequately removed by simple sin and cos decorrelation. It is this noise that a more flexible GP is better able to model. We therefore also included a GP to model the variation of flux with roll-angle effects. To do this, we first found any potential large jumps in φ and made sure the time series was continuous between these jumps (i.e. by moving the zero point and `wrapping around'). We then transformed the input data such that it is continuous in x, by sorting by φ rather than time. Once again, we used a SHO kernel with the quality factor Q set at 1/√(2). As we expected the morphology of the variations to be preserved for all CHEOPS visits, we used a single shared kernel. We found that the linear decorrelation is the most important, decreasing the log likelihood by a factor of 1400, but the GP is responsible for a reduction of a further 450, which means that use of a GP to model roll-angle flux behaviour is well justified.
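The re-ordering step used for the roll-angle GP can be sketched as follows (a schematic of the idea described above, with hypothetical variable names; the actual implementation differs):

```python
# Sketch: make the flux a smooth 1D function of roll angle phi so that a GP
# (e.g. an SHO kernel with Q = 1/sqrt(2)) can model roll-angle systematics.
import numpy as np

def sort_by_roll_angle(phi_deg, flux, phi_zero=0.0):
    phi = np.mod(phi_deg - phi_zero, 360.0)   # move the zero point to avoid a large jump
    order = np.argsort(phi)                   # sort by roll angle instead of time
    return phi[order], flux[order], order     # 'order' allows mapping back to time order
```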
As multi-planet systems typically have low eccentricities e <cit.>, and we lack the high number of RVs capable of resolving any differences in e, we chose to fit only circular orbits. In order to guard against unphysical negative values, we used broad log-normal priors for the key transit and RV amplitude parameters, that is, for R_p/R_s (planet-to-star radius ratio) and K (RV semi-amplitude). The quantities derived in Sect. <ref> are used as priors on the stellar parameters in the model. For all datasets (CHEOPS, TESS, and ESPRESSO), we included a jitter term using a wide log-normal prior. We then sampled the combined model using a sampler specifically written for astrophysical applications, which allows us to group independent dataset parameters (e.g. the CHEOPS visit-specific decorrelation parameters) together, thereby speeding up sampling greatly. We used ten chains, tuning each for 1300 steps before sampling for a further 1800, resulting in 18 000 unique samples. The samples have effective sample sizes in the thousands, and the Gelman-Rubin statistics are below 1.01 for all parameters, suggesting they are sufficiently uncorrelated and unbiased. The full list of fitted GP hyperparameters and detrending parameters with the corresponding best-fitting values can be found in Appendix <ref>. The best-fitting and derived parameters of the system are described and discussed in Sect. <ref>.
§.§ Search for transit-timing variations
In order to look for potential TTVs, we also ran a combined model with unconstrained timing for each planetary transit, as well as an independent analysis using the allesfitter software[See <https://www.allesfitter.com/home>.] <cit.>, applying a nested sampling fit. Although C22 already performed such an analysis and found no obvious sign of TTVs in the system, we repeated this procedure, but in this case using the CHEOPS data as well. This means mainly that we included three transits of planet d in the analysis and used a longer time baseline. We used the same dataset as in Sect. <ref>, which was co-fitted with a GP in both cases. All planetary and system parameters were fixed as derived previously; only the GP hyperparameters, the detrending parameters, and the observed-minus-calculated (O-C) values of the individual mid-transit times were fitted. Both solutions are consistent with a linear ephemeris, which means we did not find any indication of a quadratic trend in the data, in agreement with the conclusion made by the discoverers. As an illustration, the obtained O-C diagram of the mid-transit times for planets b, c, and d is depicted in Fig. <ref>. We can see that the O-C values are scattered around O-C = 0.0 d, which means that no significant TTVs are present in the system.
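The O-C values shown in the figure are simply residuals of the observed mid-transit times with respect to a linear ephemeris; schematically (our own helper, not taken from either fitting package):

```python
# Sketch: O-C residuals of mid-transit times with respect to a linear ephemeris
# t_calc = T0 + n * P; a systematic trend in these residuals would indicate TTVs.
import numpy as np

def o_minus_c(t_obs, t0, period):
    n = np.round((t_obs - t0) / period)   # integer epoch of each observed transit
    return t_obs - (t0 + n * period)
```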
§ FINAL RESULTS AND DISCUSSION
The best-fitting and derived parameters from the combined model are listed in Table <ref>, and the model posteriors of the host star are summarised in Appendix <ref>. The fitted TESS light curves from sector numbers 3, 4, 30, and 31 are depicted in the panels of Figs. <ref> and <ref>. The CHEOPS individual observations overplotted with the best-fitting models are shown in the panels of Fig. <ref>. The RV observations fitted with a spectroscopic orbit are depicted in Fig. <ref>.
Here, we present new ephemerides of the planetary orbits, which we calculated based on the combined model. Thanks to the combined TESS and CHEOPS observations, we were able to improve the reference mid-transit times and the orbital periods of the planets compared to the discovery values. C22 derived the orbital period parameter values of P_orb,b = 4.040301^+0.000023_-0.000042 d and P_orb,c = 9.573096^+0.000026_-0.000023 d, and expected an orbital period of P_orb = 46 ± 4 d
for planet d, which was estimated based on the transit duration and depth along with stellar mass and radius through Kepler's third law, assuming circular orbits. We confirmed this prediction, finding an orbital period for planet d of P_orb = 47.42489 ± 0.00011 d. The improved ratios of the orbital periods are P_orb,c/P_orb,b = 2.37 and P_orb,d/P_orb,c = 4.95. Based on the Kepler database, the adjacent planet pairs in multiple systems show a broad overall peak between period ratios of 1.5 and 2, followed by a declining tail to larger period ratios. In addition, there appears to be a sizeable peak just interior to the period ratio 5 <cit.>; therefore, we can say that the period ratios in HD 22946 are consistent with these statistics, and the seemingly large orbital gap between planets c and d is not anomalous.
In the combined model, we determined the impact parameter b, which is the projected relative distance of the planet from the stellar disk centre at the transit midpoint in units of R_s. Converting these values to orbital inclination angles, we obtain i = 88.90^+0.16_-0.05 deg, i = 88.52^+0.08_-0.07 deg, and i = 89.54^+0.02_-0.03 deg for planets b, c, and d, respectively. For comparison, we note that the corresponding discovery values are i_b = 88.3^+1.1_-1.2 deg and i_c = 88.57^+0.86_-0.53 deg. The inclination angle of planet d was not determined by C22. According to the improved parameter values, it seems that only the orbits of planets b and c are well aligned. Planet d is probably not in the same plane as planets b and c.
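For reference, the conversion from impact parameter to inclination follows from the standard relation b = (a/R_s) cos i for circular orbits (a standalone helper, not taken from the modelling code; a/R_s is the scaled semi-major axis):

```python
# Sketch: inclination from impact parameter for a circular orbit, i = arccos(b * Rs / a).
import numpy as np

def inclination_deg(b, a_over_rs):
    return np.degrees(np.arccos(b / a_over_rs))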
Based on the combined TESS and CHEOPS photometry observations, we redetermined the radii of the planets, which are 1.362 ± 0.040 R_⊕, 2.328 ± 0.039 R_⊕, and 2.607 ± 0.060 R_⊕ for planets b, c, and d, respectively. The CHEOPS observations are an added value, because compared to the corresponding parameter values presented in C22 (R_p,b = 1.72 ± 0.10 R_⊕, R_p,c = 2.74 ± 0.14 R_⊕, and R_p,d = 3.23 ± 0.19 R_⊕), there is a noticeable improvement in radius precision. Using TESS and CHEOPS photometry observations, the uncertainties on the planet radius parameter values were decreased by ∼ 50%, 68%, and 61% for planets b, c, and d, respectively. We also note that the parameter values from this work are in stark contrast to those derived by C22; these authors found significantly larger radii, that is, larger by ∼ 21%, 15%, and 19% for planets b, c, and d, respectively. We believe this may be due to a misunderstanding of the transit-model function used by these authors, which requires the planetary radius R_p in solar radii rather than the planet-to-star radius ratio R_p/R_s. When misused, the result is an inflation of R_p/R_s and R_p values by a factor of R_s/R_⊕, which in this case is a factor of about 15%–21%. This mistake can be seen most clearly in C22, when comparing the models shown in Figure 5 with the implied depths in Table 4 (likely derived from the radius ratio), which are inflated by this factor. Such a mistake was also evident during the reanalysis of the BD+40 2790 (TOI-2076) system <cit.>.
According to the radius valley at ∼ 1.5 - 2.0 R_⊕, which separates super-Earths and sub-Neptunes <cit.>, and based on the refined planet radii, we find that planet b is a super-Earth, and planets c and d are similar in size and are sub-Neptunes, in agreement with C22. It is well known that small exoplanets have bimodal radius distribution separated by the radius valley. Potential explanations focus on atmospheric-escape-driven mechanisms, such as photo-evaporation; see for example <cit.>. The models showed that those planets that have radius below 1.5 R_⊕ were planets that initially had hydrogen/helium atmospheres, but ultimately lost them due to atmospheric escape, while those just above 2.0 R_⊕ had hydrogen/helium atmosphere masses of ∼ 1% of the core mass. Having HD 22946 planets on either side of the valley means that planet b could be a photo-evaporated version of planets c and d. Recently, <cit.> presented a brand new approach, arguing that the density of planets might provide more information than planet radii alone and proposing that a density gap separates rocky from water-rich planets. For M dwarf systems, these authors found that rocky planets form within the ice line while water worlds formed beyond the ice line and migrated inwards. Given that theoretical models predict similar results for stars of other types, this scenario could also be possible in the case of the planets orbiting HD 22946.
Due to the low number of RVs, we present here, as did the discoverers, only the 3σ upper limits for the planet masses. C22 obtained the 3σ upper mass limits of about 11 M_⊕, 14.5 M_⊕, and 24.5 M_⊕ for planets b, c, and d, respectively, from the same spectroscopic observations. The 3σ upper limits for the planet masses from this work are M_p,b = 13.71 M_⊕, M_p,c = 9.72 M_⊕, and M_p,d = 26.57 M_⊕. Similarly to the discoverers, we obtained very different upper mass limits for planets c and d, although they have similar planet radii, which could be due to a somewhat different internal structure of these planets. Applying the relations of <cit.> and <cit.>, we also re-estimated the planet masses, which were previously forecasted by the discoverers as 6.29 ± 1.30 M_⊕, 7.96 ± 0.69 M_⊕, and 10.53 ± 1.05 M_⊕ for planets b, c, and d, respectively. The improved parameter values are presented in Table <ref>. Furthermore, taking into account the estimated planet masses calculated based on the relations of <cit.>, we predicted the number of additional RV measurements required to achieve a 3σ detection on each mass using the Radial Velocity Follow-up Calculator[See <http://maestria.astro.umontreal.ca/rvfc/>.] (RVFC; see <cit.>) and the tool of Wilson et al. (in preparation). Based on these simulations, we found that another 27, 24, and 48 ESPRESSO RVs are needed to measure the predicted masses of planets b, c, and d, respectively. The expected RV semi-amplitudes assuming the estimated planet masses are K_b = 1.10 ± 0.12 m s^-1, K_c = 2.08 ± 0.10 m s^-1, and K_d = 1.46 ± 0.08 m s^-1.
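The expected semi-amplitudes quoted above follow from the standard circular-orbit relation; a possible implementation (our own sketch using astropy units, not the tool actually employed) is:

```python
# Sketch: RV semi-amplitude of a circular orbit,
# K = (2 pi G / P)^(1/3) * Mp * sin(i) / (Ms + Mp)^(2/3).
import numpy as np
from astropy import units as u, constants as const

def rv_semi_amplitude(period, m_planet, m_star, incl=90 * u.deg):
    k = ((2 * np.pi * const.G / period) ** (1 / 3)
         * m_planet * np.sin(incl) / (m_star + m_planet) ** (2 / 3))
    return k.to(u.m / u.s)

# e.g. rv_semi_amplitude(9.573 * u.day, 8 * u.earthMass, 1.098 * u.solMass)
```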
C22 also probed the planets from the viewpoint of future atmospheric characterisation using the transmission spectroscopy metric (TSM); see Eq. 1 in <cit.>. The authors obtained the TSM values of 65 ± 10, 89 ± 16, and 67 ± 14 for planets b, c, and d, respectively. We revised these values based on the results from the present work. The improved TSM values (see Table <ref>) do not satisfy the recommended value of TSM > 90 for planets with a radius of 1.5 < R_p < 10 R_⊕. On the other hand, given that this threshold is rather strict, we note, in agreement with the discoverers, that planet c could be a feasible target for transmission spectroscopy observations with future atmospheric characterisation missions, such as the planned Ariel space observatory <cit.>.
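For completeness, the metric referred to above (Eq. 1 of the cited work) can be evaluated with a short helper; the radius-bin scale factors below are quoted from memory of the original reference and should be checked against it before use:

```python
# Sketch of the transmission spectroscopy metric (TSM):
# TSM = scale * Rp^3 * Teq / (Mp * Rs^2) * 10^(-mJ / 5),
# with Rp in Earth radii, Mp in Earth masses, Rs in solar radii, Teq in K,
# and mJ the apparent J-band magnitude of the host star.
def tsm(r_p, m_p, r_s, t_eq, j_mag):
    if r_p < 1.5:
        scale = 0.190
    elif r_p < 2.75:
        scale = 1.26
    elif r_p < 4.0:
        scale = 1.28
    else:
        scale = 1.15
    return scale * r_p ** 3 * t_eq / (m_p * r_s ** 2) * 10 ** (-j_mag / 5)
```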
Finally, we discuss the relevance of planet d among the known population of similar exoplanets. HD 22946d represents a warm sub-Neptune. Based on the NASA Exoplanet Archive[See <https://exoplanetarchive.ipac.caltech.edu/index.html>.] <cit.>, there are 5272 confirmed exoplanets up to 22 February 2023, but only 63 planets out of 5272 are sub-Neptune sized (1.75 < R_p < 3.5 R_⊕) and transiting bright stars (G ≤ 10 mag). Only 7 planets out of 63 have orbital periods longer than 30 days and only 4 planets out of 7 have an equilibrium temperature of below 550 K. Three planets have a lower insolation flux than planet d, namely TOI-2076d <cit.>, HD 28109d <cit.>, and HD 191939 <cit.>. HD 22946d is therefore an interesting target for future follow-up observations. One of the questions to be answered in the near future is the composition and internal structure of sub-Neptune-type planets. Using CHEOPS observations, we determined the radius of planet d with high accuracy. Its true mass could be determined with another 48 ESPRESSO RV measurements according to the estimate we present above. A combination of mass and radius gives the overall density, which will be an important step forward towards understanding sub-Neptunes.
§ CONCLUSIONS
Based on the combined TESS and CHEOPS observations, we refined several parameters of the HD 22946 planetary system. First of all, we improved the ephemerides of the planetary orbits in comparison with the discovery values. We can confirm that planets b and c have short orbital periods below 10 days, namely 4.040295 ± 0.000015 d and 9.573083 ± 0.000014 d, respectively. The third planet, HD 22946d, has an orbital period of 47.42489 ± 0.00011 d, which we were able to derive based on additional CHEOPS observations. Furthermore, based on the combined TESS and CHEOPS observations, we derived precise radii for the planets, which are 1.362 ± 0.040 R_⊕, 2.328 ± 0.039 R_⊕, and 2.607 ± 0.060 R_⊕ for planets b, c, and d, respectively. On the one hand, we can confirm the conclusion of the discoverers that planet b is a super-Earth, while planets c and d are sub-Neptunes. On the other hand, we find the planet radii values to be in tension with the values presented in the discovery paper, which is very probably due to misuse of the transit-modelling software by the discoverers. The low number of ESPRESSO RV measurements allowed us to derive only the 3σ upper limits for the planet masses, which are 13.71 M_⊕, 9.72 M_⊕, and 26.57 M_⊕ for planets b, c, and d, respectively.
We also investigated the planets from the viewpoint of possible future follow-up observations. First of all, we can conclude that more RV observations are needed to improve the planet masses in this system. The applied spectroscopic observations allowed us to derive precise stellar parameters of the host star and to fit an initial spectroscopic orbit to the RV data, but there is ample room for improvement in this respect. We estimated that another 48 ESPRESSO RVs are needed to measure the predicted masses of all planets in HD 22946. Planet c could be a suitable target for future atmospheric characterisation via transmission spectroscopy. We can also conclude that planet d, as a warm sub-Neptune, is particularly interesting, because there are only a few similar confirmed exoplanets to date. Thanks to the synergy of the TESS and CHEOPS missions, there is a growing sample of planets such as HD 22946d. Such objects are worth following up in the near future, for example in order to investigate their composition and internal structure. Finally, we can mention that future photometric and/or spectroscopic observations could also be aimed at searching for further possible planets in this system.
We thank the anonymous reviewer for the helpful comments and suggestions. CHEOPS is an ESA mission in partnership with Switzerland with important contributions to the payload and the ground segment from Austria, Belgium, France, Germany, Hungary, Italy, Portugal, Spain, Sweden, and the United Kingdom. The CHEOPS Consortium would like to gratefully acknowledge the support received by all the agencies, offices, universities, and industries involved. Their flexibility and willingness to explore new approaches were essential to the success of this mission. This paper includes data collected with the TESS mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the TESS mission is provided by the NASA Explorer Program. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This research has made use of the Exoplanet Follow-up Observation Program (ExoFOP; DOI: 10.26134/ExoFOP5) website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. ZG acknowledges the support of the Hungarian National Research, Development and Innovation Office (NKFIH) grant K-125015, the PRODEX Experiment Agreement No. 4000137122 between the ELTE Eötvös Loránd University and the European Space Agency (ESA-D/SCI-LE-2021-0025), the VEGA grant of the Slovak Academy of Sciences No. 2/0031/22, the Slovak Research and Development Agency contract No. APVV-20-0148, and the support of the city of Szombathely. GyMSz acknowledges the support of the Hungarian National Research, Development and Innovation Office (NKFIH) grant K-125015, a PRODEX Institute Agreement between the ELTE Eötvös Loránd University and the European Space Agency (ESA-D/SCI-LE-2021-0025), the Lendület LP2018-7/2021 grant of the Hungarian Academy of Science and the support of the city of Szombathely. ABr was supported by the SNSA. ACC acknowledges support from STFC consolidated grant numbers ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1. B.-O. D. acknowledges support from the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00046. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (project Four Aces; grant agreement No 724427). It has also been carried out in the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF). DE acknowledges financial support from the Swiss National Science Foundation for project 200021_200726. DG gratefully acknowledges financial support from the CRT foundation under Grant No. 2018.2323 "Gaseousor rocky? Unveiling the nature of small worlds". This work was also partially supported by a grant from the Simons Foundation (PI Queloz, grant number 327127). 
This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. IRI acknowledges support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grant PGC2018-098153-B-C33, as well as the support of the Generalitat de Catalunya/CERCA programme. This work was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche. KGI and MNG are the ESA CHEOPS Project Scientists and are responsible for the ESA CHEOPS Guest Observers Programme. They do not participate in, or contribute to, the definition of the Guaranteed Time Programme of the CHEOPS mission through which observations described in this paper have been taken, nor to any aspect of target selection for the programme. The Belgian participation to CHEOPS has been supported by the Belgian Federal Science Policy Office (BELSPO) in the framework of the PRODEX Program, and by the University of Liège through an ARC grant for Concerted Research Actions financed by the Wallonia-Brussels Federation; L.D. is an F.R.S.-FNRS Postdoctoral Researcher. LMS gratefully acknowledges financial support from the CRT foundation under Grant No. 2018.2323 ‘Gaseous or rocky? Unveiling the nature of small worlds’. This project was supported by the CNES. MF and CMP gratefully acknowledge the support of the Swedish National Space Agency (DNR 65/19, 174/18). M.G. is an F.R.S.-FNRS Senior Research Associate. ML acknowledges support of the Swiss National Science Foundation under grant number PCEFP2_194576. NAW acknowledges UKSA grant ST/R004838/1. This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalizacão by these grants: UID/FIS/04434/2019, UIDB/04434/2020, UIDP/04434/2020, PTDC/FIS-AST/32113/2017 & POCI-01-0145-FEDER- 032113, PTDC/FIS-AST/28953/2017 & POCI-01-0145-FEDER-028953, PTDC/FIS-AST/28987/2017 & POCI-01-0145-FEDER-028987, O.D.S.D. is supported in the form of work contract (DL 57/2016/CP1364/CT0004) funded by national funds through FCT. PM acknowledges support from STFC research grant number ST/M001040/1. We acknowledge support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grants ESP2016-80435-C2-1-R, ESP2016-80435-C2-2-R, PGC2018-098153-B-C33, PGC2018-098153-B-C31, ESP2017-87676-C5-1-R, MDM-2017-0737 Unidad de Excelencia Maria de Maeztu-Centro de Astrobiología (INTA-CSIC), as well as the support of the Generalitat de Catalunya/CERCA programme. The MOC activities have been supported by the ESA contract No. 4000124370. SH gratefully acknowledges CNES funding through the grant 837319. S.C.C.B. acknowledges support from FCT through FCT contracts nr. IF/01312/2014/CP1215/CT0004. S.G.S. acknowledge support from FCT through FCT contract nr. CEECIND/00826/2018 and POPH/FSE (EC). ACC and TW acknowledge support from STFC consolidated grant numbers ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1. V.V.G. is an F.R.S-FNRS Research Associate. XB, SC, DG, MF and JL acknowledge their role as ESA-appointed CHEOPS science team members. YA and MJH acknowledge the support of the Swiss National Fund under grant 200020_172746. 
LBo, VNa, IPa, GPi, RRa and GSc acknowledge support from CHEOPS ASI-INAF agreement n. 2019-29-HH.0. NCS acknowledges support from the European Research Council through the grant agreement 101052347 (FIERCE). This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalização by these grants: UIDB/04434/2020; UIDP/04434/2020. AT thanks the Science and Technology Facilities Council (STFC) for a PhD studentship. P.E.C. is funded by the Austrian Science Fund (FWF) Erwin Schroedinger Fellowship, program J4595-N.
§ ADDITIONAL TABLES
arXiv: http://arxiv.org/abs/2306.08902v1 (published 20230615071605)
Title: Predictable gate-field control of spin in altermagnets with spin-layer coupling
Authors: Run-Wu Zhang, Chaoxi Cui, Runze Li, Jingyi Duan, Lei Li, Zhi-Ming Yu, Yugui Yao
Categories: physics.app-ph (primary), cond-mat.mes-hall, cond-mat.mtrl-sci
These authors contributed equally to this work.
Key Lab of advanced optoelectronic quantum architecture and measurement (MOE), Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China
These authors contributed equally to this work.
Key Lab of advanced optoelectronic quantum architecture and measurement (MOE), Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China
Key Lab of advanced optoelectronic quantum architecture and measurement (MOE), Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China
Key Lab of advanced optoelectronic quantum architecture and measurement (MOE), Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China
Key Lab of advanced optoelectronic quantum architecture and measurement (MOE), Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China
[email protected]
Key Lab of advanced optoelectronic quantum architecture and measurement (MOE), Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China
[email protected]
Key Lab of advanced optoelectronic quantum architecture and measurement (MOE), Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China
Spintronics, a technology harnessing electron spin for information transmission, offers a promising avenue to surpass the limitations of conventional electronic devices. While the spin directly interacts with the magnetic field, its control through the electric field is generally more practical, and has become a focal point in the field of spintronics. Current methodologies for generating spin polarization via an electric field generally necessitate spin-orbit coupling. Here, we propose an innovative mechanism that accomplishes this task without dependence on spin-orbit coupling.
Our method employs two-dimensional altermagnets with valley-mediated spin-layer coupling (SLC), in which electronic states display symmetry-protected and valley-contrasted spin and layer polarization. The SLC facilitates predictable, continuous, and reversible control of spin polarization using a gate electric field. Through symmetry analysis and ab initio calculations, we pinpoint high-quality material candidates that exhibit SLC. We ascertain that applying a gate field of 0.2 eV/Å to monolayer Ca(CoN)_2 can induce significant spin splitting up to 123 meV.
As a result, perfect and switchable spin/valley-currents, and substantial tunneling magnetoresistance can be achieved in these materials using only a gate field. These findings provide new opportunities for generating predictable spin polarization and designing novel spintronic devices based on coupled spin, valley and layer physics.
Predictable gate-field control of spin in altermagnets with spin-layer coupling
Yugui Yao
July 31, 2023
Introduction.–
The discovery of various spin-dependent transport phenomena <cit.> has led to the emergence of the field of spintronics, which has generated extensive interest in both fundamental research and application design.
Two prior conditions for spintronics <cit.> are the active control of spin degrees of freedom and the efficient generation of spin polarization.
As spin is a form of angular momentum, a magnetic field represents the most natural and intuitive way to control it.
Specifically, under a magnetic field B, the energy level of an electron with spin angular momentum s will be shifted by
Δ∝ -B·s, a phenomenon known as the Zeeman effect <cit.>.
However, magnetic control of spin is not practical for advanced electronic devices.
In practice, the most desirable approach for controlling spin polarization is static and electric control <cit.>. However, electric fields do not directly couple to spin, necessitating a mediated effect. Current schemes mainly rely on the spin-orbit coupling (SOC) effect <cit.>, which couples an electron's motion or momentum to its spin, enabling indirect interaction between the electric field and spin. Besides, from the viewpoint of symmetry, the addition of an electric field may reduce the symmetry of a system and thereby split spin-degenerate bands <cit.>. However, none of these schemes provides a simple, Zeeman-like picture for predicting the behavior of spin under an electric field. Additionally, these schemes require materials with strong SOC effects, imposing severe constraints on the search for viable material candidates <cit.>.
Two-dimensional (2D) systems <cit.> offer new possibilities.
Since most 2D materials have a finite thickness, a gate electric field normal to the material (E_z) can create a layer-dependent electrostatic potential.
Then if the electronic states with spin-up and spin-down are respectively localized at the top and bottom layers of the system (see Fig. <ref>), a spin-layer coupling (SLC) emerges.
With SLC, one directly knows that a gate field E_z can produce an energy shift of Δ∝ -E_z d between the opposite spins, where d is the layer distance, resulting in a gate-field control of spin in these novel 2D systems.
Importantly, the SLC scheme not only provides an intuitive and predictable way to control the spin by electric method, but also does not require the SOC effect.
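A toy estimate of this shift, assuming complete layer polarization and ignoring screening, is sketched below (our own back-of-the-envelope illustration, not a result of this work); the ab initio splitting reported for ML-CaCoN later in the paper is smaller than this idealized bound:

```python
# Toy estimate of the SLC-induced splitting described above: states localized on
# opposite layers separated by d shift by roughly +/- e*E_z*d/2 in a gate field,
# giving a spin splitting of about e*E_z*d at full layer polarization (screening
# and incomplete layer polarization reduce this in real materials).
def slc_splitting_eV(e_z_eV_per_A, d_A, layer_polarization=1.0):
    return e_z_eV_per_A * d_A * layer_polarization

# e.g. slc_splitting_eV(0.2, 3.9) gives an idealized upper bound of ~0.78 eV,
# compared with the 123 meV obtained ab initio for ML-CaCoN in this work.
```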
In this work, we first show that the SLC effect can naturally appear in a novel kind of 2D altermagnets <cit.> with valley-layer coupling, and then detail the symmetry requirements for these systems.
Guided by the symmetry analysis, we also identify a number of high-quality material candidates, where significant spin and valley polarization can be achieved by a moderate gate field.
Finally, several interesting phenomena associated with SLC are revealed, such as a new design of a giant tunneling magnetoresistance (TMR) device that does not require heterojunction structures, spin-resolved interlayer excitons, and a layer photogalvanic effect.
Thus, by presenting a fundamentally new effect to realize predictable control of spin polarization by electric means, our work resolves an outstanding challenge in the field of spintronics, and will facilitate future research on the design of spintronic devices.
Valley-mediated SLC.–
The SLC proposed here refers to a stable coupling between the spin and layer degrees of freedom of electronic states in momentum space rather than in real space, since it is this coupling that makes electric-field control of spin polarization intuitive and predictable.
We begin by detailing the procedures that lead to SLC.
Since our focus is on generating spin polarization through electric means, the systems studied here should break time-reversal symmetry (T) but remain spin-neutral in the absence of external fields. This suggests that the 2D systems should be antiferromagnetic or altermagnetic (AM) materials.
One might naively expect that interlayer antiferromagnetism can guarantee stable SLC. However, this is not always the case, as interlayer antiferromagnetism only ensures stable coupling between spin and layer in real space.
For example, an interlayer antiferromagnet with a horizontal mirror symmetry cannot host SLC.
Instead, we consider a generic AM material, where the conduction or valence band edge (referred to as the “valley") is not degenerate.
Due to the altermagnetism, the valleys must come in pairs with opposite spin polarization, leading to intrinsic spin-valley locking (see Fig. <ref>a).
Furthermore, recent research has demonstrated that strong valley-layer coupling can be achieved in certain 2D systems <cit.>, where the electronic states in different valleys have strong but opposite layer polarization (see Fig. <ref>b).
By combining spin-valley and valley-layer locking (see Fig. <ref>c), the electronic states with opposite spin polarization (in different valleys) will exhibit opposite layer polarization.
Consequently, we obtain the desired SLC effect, which here is mediated by the valley degree of freedom.
Given the SLC effect, a uniform electric field can act as a uniform magnetic field, thereby enabling control of the spin of the system without the practical disadvantages that accompany magnetic fields (see Fig. <ref>d).
Symmetry requirements.–
We analyze the symmetry conditions for the valley-mediated SLC.
For clarity, we consider a 2D collinear AM system with only two valleys labeled as V_1 and V_2, as shown in Fig. <ref>.
The Néel vector of the system points out of the plane.
We assume that the z-axis is normal to the 2D AM system, and that the z=0 plane lies at the mid-plane of the system.
Then the system can be considered as two layers: the top layer with z>0 and the bottom layer having z<0.
For each Bloch state |ψ( k)⟩, we can define a spin (layer) polarization P_s(l)=⟨ψ( k)|P̂_s(l)|ψ( k)⟩ with P̂_s(l) the spin (layer) operator. P_s>0 (P_s<0) and P_l>0 (P_l<0) indicate that the state has more weight distributed in the spin-up (spin-down) and top (bottom) layer, respectively.
Therefore, to realize the valley-mediated SLC, the following symmetry requirements must be satisfied.
(i) Symmetries that guarantee vanishing spin or layer polarization at V_1(2) must be broken, such as the horizontal mirror M_z, which enforces P_l=0 for all electronic states. In addition, V_1(2) must not possess operators that reverse the spin and layer polarization.
(ii) We can divide the symmetries of the AM system into two parts: R and O.
V_1 and V_2 are invariant (interchanged) under the operators in R (O), as follows:
RV_1(2)=V_1(2), OV_1(2)=V_2(1).
In order to achieve valley-contrasted spin and layer polarization, any operator in R must maintain both spin and layer polarizations for each valley:
RP_l(s)(V_i)R^-1 = P_l(s)(V_i),
with i=1,2.
Meanwhile, any operator in O reverses the polarizations:
OP_l(s)(V_1/2)O^-1 = -P_l(s)(V_2/1).
Notice that any external field that breaks the symmetry O can induce both valley and spin polarization.
However, only electric gate field can provide an intuitive and predictable control of the spin, as illustrated in Fig. <ref>d.
Using these two conditions, we conducted a search through all 528 magnetic layer groups (MLGs) to identify those that may host valley-mediated SLC. Our findings apply to systems both with and without SOC, and are summarized in Table <ref>. This table provides a systematic and specific guide for identifying material candidates.
High-quality material candidates.–
In addition to developing design principles, it is equally important to identify high-quality material candidates. Here, we demonstrate that the family of monolayer decorated transition-metal nitrides A(BN)_2 (A = Mg, Ca, Zn and B = Mn, Fe, Co) are the most promising candidates for achieving the valley-mediated SLC. Since all nine materials share similar crystalline structures and electronic bands, we mainly focus on Ca(CoN)_2 in the main text. Further details regarding the properties of the remaining material candidates can be found in the Supporting Materials (SM).
The 3D bulk Ca(CoN)_2 is a layered material that has been demonstrated to be energetically and dynamically stable, as well as easy to synthesize, peel, and transfer. Notably, the exfoliation energy for monolayer Ca(CoN)_2 (ML-CaCoN) is calculated to be ∼1.19 J/m^2, comparable to that of two representative 2D materials: Ca_2N (∼1.09 J/m^2) <cit.> and HfGeTe (∼0.98 J/m^2) <cit.>. Therefore, obtaining ML-CaCoN from its bulk materials through mechanical exfoliation is feasible. We also confirm that ML-CaCoN is dynamically stable (see Fig. <ref>d) and features thermal stability up to 300 K (see SM).
ML-CaCoN has a square lattice structure with optimized lattice constant a=b=3.55 Å, consisting of five atomic layers in the sequence of Co-N-Ca-N-Co (see Fig. <ref>a).
The top (bottom) Co and N atoms are almost in the same plane.
Remarkably, the separation between the two Co (N) layers is about 3.9 Å, which is as large as the typical interlayer spacing of van der Waals heterostructures.
Detailed calculations confirm that the AM configuration of ML-CaCoN is the most energetically stable one (see Fig. <ref>c).
The magnetic moments are mainly on the Co sites with a magnitude of ∼ 3 μ_B and the Néel vector is along the z axis.
All these results show that the ground state of ML-CaCoN belongs to MLG 59.5.414, which is exactly an MLG candidate listed in Table <ref>.
According to our symmetry analysis (see Table <ref>), ML-CaCoN will exhibit valley-mediated SLC as long as it has two valleys at the X and Y points.
In fact, these two valleys are connected by 𝒯𝒮_4z symmetry (with 𝒮_4z being a roto-inversion operator), which maps spin-up to spin-down, and interchanges the layer polarization of the two valleys.
Figure <ref>a presents the band structure of the AM order with the spin aligned along the z direction under the SOC effect. It is observed that there indeed exist two valleys for both the conduction and valence bands at the X and Y points in the Brillouin zone (BZ).
Therefore, we can conclude that the valley states must feature valley-contrasted spin and layer polarization.
This prediction is confirmed by our first-principles calculations.
By analyzing the spin and atomic orbital projections of the valley states, we find that both the conduction and valence bands residing at the X valley are completely composed of spin-up, while those at the Y valley only contain spin-down (see Fig. <ref>a).
Additionally, the conduction band at X (Y) valley is mainly distributed in the top (bottom) Co/N layer, whereas the valence band at X (Y) valley is mainly distributed in the bottom (top) Co/N layer (see Fig. <ref>b).
Gate field control of spin splitting.–
We then study the behavior of the valley states of ML-CaCoN under a gate field along the z direction E_z.
Due to SLC, this behavior can be simply predicted without complicated calculations.
Since the gate field creates a layer-dependent electrostatic potential ∝ e E_z d (e denotes the electron charge), the electronic bands with layer polarization P_l>0 (P_l<0) will move up (down).
Therefore, for the conduction band, the gate field behaves as a static magnetic field along -z direction, pushing up the spin-up (at the X valley) while pulling down the spin-down (at the Y valley).
For the valence band, the spin splitting is opposite because the layer polarization of the spin (and valley) states are reversed (see Fig. <ref>b).
Notice that the distance between the top and bottom Co (N) layers has an important impact on the spin splitting, playing a role similar to the effective g-factor in the Zeeman effect.
The calculated band structure of ML-CaCoN under a gate field of E_z=0.2 eV/Å, which is achievable in experiments, is shown in Fig. <ref>c.
The results are in line with our expectations.
The spin splitting of the conduction band minimum (CBM) and valence band maximum (VBM) respectively reach up to 123 meV and 78 meV (see Fig. <ref>c).
Remarkably, all the conduction (valence) bands around the X (Y) valley feature almost the same energy shift as CBM (VBM).
Therefore, the gate field in ML-CaCoN induces not only a large but also an almost uniform effective static magnetic field.
To the best of our knowledge, such an effective static magnetic field has not been reported before and cannot generally be generated by schemes other than the SLC scheme.
Approximately, we can define an effective magnetic field
B_c(v)^eff≡Δ E_c(v)/(2 g_s μ_B s_z),
where Δ E_c (Δ E_v) is the energy splitting for the CBM (VBM), g_s is the effective g-factor (assumed here to be 2), μ_B is the Bohr magneton, and s_z=1/2 is the spin quantum number.
Accordingly, for the conduction (valence) band of ML-CaCoN, a gate field of E_z=0.2 eV/Å corresponds to an effective magnetic field as large as ∼ 1.062×10^3 T (∼ 0.673×10^3 T).
We also calculate the spin splitting as a function of the gate field, and the results are shown in Fig. <ref>d, which clearly shows that a continuous, wide-range, and switchable control of spin (valley) polarization is achieved.
For sufficiently weak gate fields, we obtain a linear relationship between the induced B_c(v)^eff and the gate field E_z,
B_c(v)^eff=χ_c(v) E_z,
where the coefficients are obtained as χ_c= 5.15×10^3 TÅ/V and χ_v=2.71×10^3 TÅ/V (see SM).
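As a quick numerical cross-check (a minimal sketch, not part of the original calculation), the quoted splittings can be converted to effective fields in a few lines of Python, assuming that the splitting corresponds to the full Zeeman splitting Δ E = g_s μ_B B^eff between the two spin branches:

MU_B = 5.7883818060e-5   # Bohr magneton in eV/T
G_S = 2.0                # effective g-factor assumed in the text

def effective_field(delta_e):
    """Effective magnetic field (T) equivalent to a spin splitting delta_e (eV)."""
    return delta_e / (G_S * MU_B)

print(effective_field(0.123))   # conduction band at E_z = 0.2 eV/Angstrom -> ~1.06e3 T
print(effective_field(0.078))   # valence band  at E_z = 0.2 eV/Angstrom -> ~0.67e3 T

Both values reproduce the effective fields quoted above for ML-CaCoN.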
Effective model for valley-mediated SLC.–
To better understand the physics of the valley-mediated SLC, we establish a low-energy effective Hamiltonian for the ML-CaCoN.
The magnetic point group of the X and Y valleys are m'm'2.
The band representation for the CBM (φ_c^X) and VBM (φ_v^X) at the X valley are ^2E and ^1E respectively, while that of the CBM (φ_c^Y) and VBM (φ_v^Y) at the Y valley are ^1E and ^2E.
Using the four states {φ_c^X, φ_v^X,φ_c^Y, φ_v^Y} as the basis, the k· p effective model can be expressed as:
ℋ = Λτ_z+v_1(k_xτ_x+k_yτ_y)+v_2(k_xτ_x-k_yτ_y)σ_z,
where Λ=0.281 eV is half of the band gap, and v_1=1.167 eV·Å and v_2=-0.109 eV·Å for ML-CaCoN.
Both σ and τ are Pauli matrices that act on the valley space and the basis of {φ_c^V, φ_v^V} with V=X and Y, respectively.
Since the basis {φ_c^V, φ_v^V} have opposite layer polarization, τ can also be regarded as acting on the layer index space.
Similarly, σ can also be considered to operate in the spin space, as the low-energy states at the X (Y) valley is spin-up (spin-down) electrons.
Therefore, the physics of the SLC is solely encoded in the last term of Eq. (<ref>) because it pairs τ with σ.
With the concept of the effective magnetic field, the influence of the gate field on the low-energy bands of ML-CaCoN can be approximately written as:
ℋ_E = -s_z g_s diag(B_c^eff, -B_v^eff, B_c^eff, -B_v^eff).
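For readers who wish to experiment with the model, a minimal numpy sketch of Eq. (<ref>) is given below. The basis ordering {φ_c^X, φ_v^X, φ_c^Y, φ_v^Y} follows the text, while the Kronecker-product convention (valley/spin index as the outer factor) and the omission of the gate-field term are our own choices for this illustration:

import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

LAM, V1, V2 = 0.281, 1.167, -0.109   # parameters quoted in the text for ML-CaCoN

def H(kx, ky):
    """4x4 k.p Hamiltonian in the basis {phi_c^X, phi_v^X, phi_c^Y, phi_v^Y}."""
    h_tau = LAM * sz + V1 * (kx * sx + ky * sy)     # valley-diagonal part, acting on the band (tau) space
    h_slc = V2 * (kx * sx - ky * sy)                # band-space part of the SLC term
    return np.kron(s0, h_tau) + np.kron(sz, h_slc)  # outer matrices act on the valley/spin index

print(np.linalg.eigvalsh(H(0.0, 0.0)))    # band gap of 2*Lambda at the valley centre
print(np.linalg.eigvalsh(H(0.05, 0.0)))   # dispersion slightly away from the valley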
Discussions.–
The multifaceted progression of altermagnetic spintronics <cit.> has rendered altermagnets a compelling and highly practical research domain. Boasting unique advantages over traditional ferromagnets, such as the absence of stray fields, superior intrinsic precession frequency, and exceptional stability under magnetic fields, altermagnets are being hailed as the foundation for the next generation of high-performance devices. When SLC meets altermagnetism, it opens opportunities to explore new functionalities in AM materials, with the combination offering more than either ingredient alone.
ML-CaCoN emerges as an optimal platform for generating a current that exhibits impeccable spin (valley) polarization. On doping the system with either electrons or holes and subjecting it to a gate field, the direction of spin (or valley) polarization can be effortlessly toggled by reversing the gate field's direction. This unique gate field induced switchability of spin polarization in ML-CaCoN hints at a novel design paradigm for achieving heightened tunneling magnetoresistance (TMR) using CaCoN alone. This approach significantly simplifies experimental procedures as it bypasses the need for the conventional conductive layer alternation between magnetic and non-magnetic materials, typically required in traditional TMR devices.
As depicted in Fig. <ref>a, in a parallel configuration (the Fermi level is raised to 0.33 eV by doping electrons), both the left and right terminals are subjected to positive electric fields (e.g. E_z=0.2 eV/Å); as a result, the matching spin-polarized conduction channels between the electrodes culminate in an ultra-low-resistance state. In contrast, as demonstrated in Fig. <ref>b, in an anti-parallel configuration (for example, when the left terminal is subjected to E_z=0.2 eV/Å, and the right to E_z=-0.2 eV/Å), electrons moving from left to right must switch spin, valley and layer indices, namely changing from spin-up, X valley and top layer to spin-down, Y valley and bottom layer. This makes such transitions far less probable during transport. The overall transmission as a function of electric fields for the CaCoN tunnel junctions is obtained for both parallel (red dots) and antiparallel (blue triangles) configurations, as shown in Fig. <ref>c. Consequently, our design could potentially deliver a stronger suppression of the antiparallel configuration, resulting in a more pronounced TMR effect compared to conventional TMR devices.
ML-CaCoN also presents a spectrum of intriguing and adjustable optical properties. For example, ML-CaCoN showcases valley- and spin-contrasted elliptical dichroism, in which left (or right) elliptically polarized light interacts with the inter-band transitions of the electrons possessing X valley and spin-up index (or Y valley and spin-down index).
Moreover, the opposing layer polarization of valence and conduction bands at each valley leads to the formation of spin-resolved interlayer excitons <cit.> and layer photogalvanic effects under the illumination of left or right elliptically polarized light <cit.>. This phenomenon results from the localization of optically excited electrons and holes within different layers of the ML-CaCoN structure.
Significantly, the characteristics of these excitons and photogalvanic effects can be modulated through the gate field, as the gate field impacts the local band gap at the X (or Y) valley.
Acknowledgments.
The work is supported by the National Key R&D Program of China (Grant No. 2020YFA0308800), the National Natural Science Foundation of China (Grant No. 12061131002), the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDB30000000), the National Natural Science Fund for Excellent Young Scientists Fund Program (Overseas) and the Beijing Institute of Technology Research Fund Program for Young Scholars.
|
http://arxiv.org/abs/2306.03744v1
|
20230606150118
|
Opacity for realistic 3D MHD simulations of cool stellar atmospheres
|
[
"A. Perdomo García",
"N. Vitas",
"E. Khomenko",
"M. Collados",
"C. Allende Prieto",
"I. Hubeny",
"Y. Osorio"
] |
astro-ph.SR
|
[
"astro-ph.SR"
] |
Instituto de Astrofísica de Canarias,
38200 La Laguna, Tenerife, Spain
[email protected]
Departamento de Astrofísica de la Universidad de La Laguna,
38200 La Laguna, Tenerife, Spain
Isaac Newton Group of Telescopes, Apartado de Correos 321, 38700 Santa Cruz de La Palma, Canary Islands, Spain
Steward Observatory, University of Arizona, Tucson, USA
Realistic 3D time-dependent simulations of stellar near-surface convection employ the opacity binning method for efficient and accurate computation of the radiative energy exchange.
The method provides several orders of magnitude of speed-up, but its implementation includes a number of free parameters.
Our aim is to evaluate the accuracy of the opacity binning method as a function of the choice of these free parameters.
The monochromatic opacities computed with the SYNSPEC code are used to construct an opacity distribution function (ODF) that is then verified through a detailed comparison with the results of the ATLAS code. The opacity binning method is implemented with the opacities for four representative cool main-sequence stellar spectral types (F3V, G2V, K0V, and M2V).
The ODFs from ATLAS and SYNSPEC show consistent results for the opacity and the bolometric radiative energy exchange rate Q in the case of the F, G, and K – type stars.
Significant differences, coming mainly from the molecular line lists, are found for the M – type star.
It is possible to optimise a small number of bins to reduce the deviation of the results coming from the opacity grouping with respect to the ODF for the F, G, and K – type stars.
In the case of the M – type star, the inclusion of splitting in wavelength is needed in the grouping to get similar results, with a subsequent increase in computing time.
In the limit of a large number of bins, the deviation for all the binning configurations tested saturates and the results do not converge to the ODF solution.
Due to this saturation, the Q rate cannot be improved by increasing the number of bins to more than about 20 bins. The more effective strategy is to select the optimal location of fewer bins.
Opacity for realistic 3D MHD simulations of cool stellar atmospheres
A. Perdomo García
12
N. Vitas12
E. Khomenko
12
M. Collados
12
C. Allende Prieto12
I. Hubeny4
Y. Osorio
123
Received Month day, year; accepted Month day, year
=============================================================================================================================================================================================================================
§ INTRODUCTION
Radiative transfer is one of the key factors governing the structure and dynamics of stellar atmospheres and it has to be taken into account with special care when constructing a model atmosphere <cit.>.
Detailed interactions between radiation and matter are quantitatively described by the opacity: the absorption coefficients of all contributing (bound-bound, bound-free, and free-free) processes due to all relevant particle species present in the atmosphere <cit.>.
The complexity of the opacity function is enormous as it is visible from the plethora of atomic and molecular spectral lines in the spectra of late-type stars <cit.> and its evaluation is computationally expensive.
In combination with the already computationally demanding solution of the radiative transfer equation (RTE), the problem of calculating detailed monochromatic radiative losses and gains in the atmosphere, accounting explicitly for ≈10^9 lines and ≈10^6 wavelength points, is difficult even in one-dimensional (1D) modelling, and virtually impossible in three-dimensional (3D) time-dependent numerical simulations.
Two important tricks are available, however, that reduce the computational effort by many orders of magnitude.
First, under the assumption of local thermodynamic equilibrium (LTE),
opacity is a function of chemical composition, the turbulent velocity, wavelength, and two independent thermodynamic variables (for example, temperature T and mass density ρ); it can therefore be precomputed and stored into lookup tables.
Even in the situations where some atoms are treated
out of LTE, the opacity of the LTE background can be precomputed
without the contribution of the NLTE lines, which are computed explicitly.
The opacities are computed as by-products in a number of codes and projects for modelling stellar atmospheres <cit.>, <cit.>, <cit.>, <cit.>, among others.
Second, in the energy conservation equation of the system of magnetohydrodynamic equations, the radiative energy exchange is accounted for by a wavelength-integrated source term Q.
The distribution of radiative heating and cooling with wavelength is irrelevant, at least as long as the one-fluid MHD approach is used.
Therefore, instead of solving monochromatic RTE and then integrating
the results,
an approximate solution may be obtained by first combining or integrating the opacities in wavelength ranges and then solving the RTE for a number of representative opacity values.
This idea is implemented in the opacity distribution function (ODF) defined by <cit.>.
In their approach the monochromatic opacities are divided into hundreds of wavelength segments. The opacities in each segment are then sorted by magnitude to obtain a continuous monotonic distribution that is then discretised by averaging over several (≈10) substeps.
In that way the RTE can be solved for several thousands ODF values instead for millions of monochromatic opacities.
An important underlying assumption in the ODF approach is that there are no
substantial wavelength shifts in the spectrum along the different heights in the atmosphere. There are situations in which this assumption fails <cit.>, especially for plasmas with velocity fields. Nevertheless, even in moving media, if we are not interested in detailed spectra and the velocities are not larger than the thermal velocity, we can still solve the RTE using ODFs.
Precomputed ODF tables produced by the code are widely used in stellar modelling and for computing emergent spectra from the models <cit.>.
The ODF method was recently optimised and generalised by <cit.>.
Nevertheless, not even the speedup enabled by ODF is sufficient when atmospheres are modelled as time-dependent 3D systems.
A solution, known in the literature as the opacity binning (OB) method, was initially proposed by <cit.>.
In this method the opacities are grouped according to the optical depth in a representative atmosphere at which the monochromatic optical depth reaches unity.
Very few groups (or bins) are sufficient to approximate the correct solution for the radiative energy exchange rate fairly accurately, although the explicit wavelength dependence of the opacity is eliminated.
The number of groups and the distribution of the optical depth separators between them are free parameters of the method, while
four groups are typically used, roughly representing the continuum, weak, intermediate, and strong lines.
The OB method opened the door to the efficient implementation of realistic 3D simulations that have revolutionised our understanding of the physics of stellar atmospheres.
The method was further developed by
<cit.>, <cit.>, and <cit.>. <cit.> developed a variant that includes the effects of scattering.
The wavelength dependence of the opacities is disregarded in the OB method.
<cit.> and <cit.> noted that this approximation does not converge to the correct solution in the limit of an infinite number of bins.
Several other groups experimented with variations to the OB method.
<cit.> used 12 bins in their study of the oxygen abundance, while <cit.> and <cit.> introduced a variant of the method whereby the opacities are sorted by wavelength into several groups before sorting by optical depths.
However, there is no unique criterion for choosing the number and distribution of the bins.
The distribution of the opacities varies significantly with the effective temperature and chemical composition of a star and it may thus be expected that different distributions of bins are optimal for different stars.
Our aim in this paper is to test the OB method for four cool main-sequence stars (F3V, G2V, K0V, and M2V) and to design a strategy for finding optimal setups.
Our analysis is based on monochromatic opacities computed with the SYNSPEC code.
SYNSPEC (and its modelling counterpart) was recently significantly upgraded and equipped with an up-to-date list of atomic and molecular spectral lines necessary for the modelling of cool stellar types <cit.>.
The code is publicly available as open source[<https://www.as.arizona.edu/ehubeny/tlusty208-package/tl208-s54.tar.gz>] and supplemented with a Python wrapper <cit.>.
As we intend to use SYNSPEC to prepare customised opacities for 3D simulations of near-surface convection in cool stars (Perdomo et al., in preparation) with the code of <cit.>, flexibility in selecting opacity contributors is of particular importance for us.
In Sect. <ref> we briefly introduce a set of the 1D models that represent our stars and specify the radiative transfer details. The computation of the monochromatic opacities using the SYNSPEC code is described in Sect. <ref>. In Sect. <ref> the essentials of the ODF method are summarised: ODFs are constructed from the monochromatic opacities computed with SYNSPEC, and the results are compared with those of the ATLAS code. The same data set is used to study strategies for OB in Sect. <ref>: we first test the importance of the location of the bin separators in optical depth (Sect. <ref>), then we explore the influence of the number of bins in optical depth (Sect. <ref>), and, finally, we study two different strategies for combined binning in optical depth and wavelength (Sect. <ref>). Our conclusions are presented in Sect. <ref>.
§ METHOD
§.§ Model atmospheres
To test the OB for a series of main sequence spectral types, a set of 1D stellar atmospheric models is used. These atmospheres are computed assuming hydrostatic equilibrium and using the mixing length theory to account for the convective energy transfer.
The models are initiated with radiation in the grey approximation and corrected to have a representative temperature gradient at the top of the atmosphere by scaling the Hopf function q(τ) <cit.> of the Harvard-Smithsonian Reference Atmosphere <cit.> with the effective temperature of each star, following the expression T^4(τ) = 3/4T_eff^4 ×[τ + q(τ) ]. The models are shown in Fig. <ref> and are computed for spectral types F3V, G2V, K0V, and M2V, with effective temperature (K) and logarithm of gravity (cm s^-2) of [6890, 4.3], [5780, 4.4], [4855, 4.6], [3690, 4.8], respectively.
In the computation of the models we used the Rosseland opacities consistent with the opacities used in the present paper (see Sect. <ref>).
More details about the models will be published elsewhere.
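For illustration, the scaled-Hopf-function prescription above can be sketched in a few lines of Python. This is only a minimal sketch: the Eddington value q = 2/3 used here is a stand-in for the HSRA-based Hopf function employed in the actual models.

import numpy as np

def temperature(tau, t_eff, q):
    """Temperature stratification T(tau) from the scaled Hopf function q(tau)."""
    return (0.75 * t_eff**4 * (tau + q))**0.25

tau = np.logspace(-6, 1, 8)
q_edd = np.full_like(tau, 2.0 / 3.0)         # Eddington value, a stand-in for the HSRA-based q(tau)
print(temperature(tau, 5780.0, q_edd))       # G2V-like effective temperature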
§.§ Radiative transfer computation
The RTE in a plane-parallel static atmosphere in LTE without considering line scattering is
μdI_ν(z, ν)/dz = ρ(z) ϰ_ν(z) [ B_ν(z) - I_ν(z) ],
where I_ν(z, ν) is the specific intensity (erg s^-1 cm^-2 Hz^-1 ster^-1) at frequency ν in the direction μ and at the geometrical height z in the atmosphere; B_ν is the Planck function; ρ(z) is the mass density; and ϰ_ν(z) is the monochromatic absorption coefficient per unit mass (cm^2 g^-1; referred to as opacity throughout the text). The opacity is computed as the sum of the continuum opacity (mainly contributions from the free-free and bound-free processes) and the line opacity (bound-bound individual transitions included in the line lists for atoms and molecules),
ϰ_ν = ϰ_ν^cont + ϰ_ν^lines
We restrict ourselves to the LTE case that is valid only in deep layers where J ≈ B. Therefore our treatment of the coherent scattering terms (see App. <ref>) in the total opacity is only approximate <cit.>.
We solve the RTE in the vertical direction (μ = ± 1).
At each frequency ν, the formal solution for outward and inward intensity (I^+, I^-) in LTE is:
I^±_ν(z_i) = I^±_ν(z_i∓1) exp(-Δτ_ν^i∓1) + ∫_0^Δτ_ν^i∓1 B_ν (t) exp(t-Δτ_ν^i∓1)dt ≡
≡ I^±_ν(z_i∓1) exp(-Δτ_ν^i∓1) + Δ I^±_ν(z_i),
where i is the height index for each point in the atmosphere, being zero at the bottom and increasing upwards; τ_ν^i is the optical depth at the height with index i (dτ_ν(z) = ρ(z) ϰ_ν(z) dz); and Δτ_ν^i+1 = τ_ν^i+1-τ_ν^i and Δτ_ν^i-1 = τ_ν^i-τ_ν^i-1. To solve the RTE, we apply the short characteristics method <cit.>, using the linear approximation of the Planck function:
Δ I_ν^±(z_i) = ψ_0 B_ν(z_i) + ψ_1 B_ν(z_i∓1).
For each frequency ν we compute the coefficients
for the local and the upwind points
ψ_0 = 1 - 1/Δτ_ν + 1/Δτ_νexp(-Δτ_ν) ,
ψ_1 = 1/Δτ_ν - Δτ_ν+1/Δτ_νexp(-Δτ_ν).
Since the method is formulated in a way that it is symmetrical along a ray direction, the coefficients are the same for the upward and inward intensities, and the only change needed is the specific Δτ_ν in each case (Δτ_ν=Δτ_ν^i-1 for the upward intensities; Δτ_ν=Δτ_ν^i+1 for the inward ones). This first order short-characteristics scheme would not be sufficiently accurate if scattering was taken into account (i.e. if the formal solver was used in an iterative solution).
The mean intensity J_ν (erg s^-1 cm^-2 Hz^-1 ster^-1) and the flux ℱ⃗_ν (erg s^-1 cm^-2 Hz^-1) can then be calculated for a vertical ray from the known I_ν:
J_ν (z_i) = 1/2∫_-1^1 I_ν (z_i) dμ≡1/2( I^+_ν(z_i) + I^-_ν(z_i) ),
ℱ_ν (z_i) = 2 π∫_-1^1 I_ν (z_i) μ dμ≡ 2 π( I^+_ν(z_i) - I^-_ν(z_i) ).
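The vertical, single-frequency formal solution described above can be sketched as follows. This is a minimal sketch rather than the actual implementation: the thermalised lower boundary condition I^+ = B at the deepest point and the clipping of very small Δτ are our simplifying assumptions.

import numpy as np

def psi_coeffs(dtau):
    """Linear short-characteristics weights for the local and upwind points."""
    dtau = np.maximum(dtau, 1e-8)           # guard against vanishing optical-depth steps
    e = np.exp(-dtau)
    psi0 = 1.0 - 1.0 / dtau + e / dtau
    psi1 = 1.0 / dtau - (dtau + 1.0) / dtau * e
    return psi0, psi1

def formal_solution(dtau, B):
    """Outward and inward intensities along a vertical ray (index 0 at the bottom).

    dtau[i] is the optical-depth increment between heights i and i+1;
    B is the Planck function on the same height grid.
    """
    n = B.size
    I_up = np.zeros(n)
    I_dn = np.zeros(n)
    I_up[0] = B[0]                          # assumed thermalised lower boundary
    for i in range(1, n):                   # upward sweep
        p0, p1 = psi_coeffs(dtau[i - 1])
        I_up[i] = I_up[i - 1] * np.exp(-dtau[i - 1]) + p0 * B[i] + p1 * B[i - 1]
    for i in range(n - 2, -1, -1):          # downward sweep, no irradiation at the top
        p0, p1 = psi_coeffs(dtau[i])
        I_dn[i] = I_dn[i + 1] * np.exp(-dtau[i]) + p0 * B[i] + p1 * B[i + 1]
    J = 0.5 * (I_up + I_dn)                 # mean intensity for a vertical ray
    F = 2.0 * np.pi * (I_up - I_dn)         # vertical flux
    return I_up, I_dn, J, F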
Finally, the monochromatic radiative energy exchange rate Q_ν (erg cm^-3 s^-1 Hz^-1), which accounts for the radiation sources and sinks in the energy conservation equation, can be computed either as the divergence of the radiative flux,
Q^F_ν (z_i) = - ∇⃗ℱ⃗_ν≡ -(ℱ_ν (z_i+1) - ℱ_ν (z_i-1)) / (z_i+1 - z_i-1),
or directly from the mean intensity,
Q^J_ν (z_i) = 4 πϰ_ν (z_i) ρ (z_i) ( J_ν - B_ν).
The monochromatic rate is calculated as a weighted combination of the two, using equation 4.13 from <cit.> <cit.>:
Q_ν (z_i) = exp(-τ^i_ν/τ_h) Q^J_ν (z_i) + [1 - exp(-τ^i_ν/τ_h) ] Q^F_ν (z_i),
where τ_h=0.1. As it is explained in <cit.>, this smooth transition between Q^J_ν and Q^F_ν avoids the accuracy problem that Q^J_ν has in the optically thick regime (where J_ν approaches B_ν) and the errors coming from small variations of the orientation of the flux in the optically thin part of the atmosphere (where flux should be nearly constant), that are enhanced by the gradient in Q^F_ν.
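The corresponding computation of the monochromatic rate can be sketched as follows (illustrative only; np.gradient stands in for the centred flux-divergence difference, with one-sided differences at the end points):

import numpy as np

def q_nu(z, tau, rho, kappa, J, B, F, tau_h=0.1):
    """Monochromatic radiative energy exchange rate, blending Q^J and Q^F."""
    Q_J = 4.0 * np.pi * kappa * rho * (J - B)
    Q_F = -np.gradient(F, z)                 # divergence of the vertical flux
    w = np.exp(-tau / tau_h)                 # ~1 in optically thin layers, ~0 in thick ones
    return w * Q_J + (1.0 - w) * Q_F

The bolometric rate is then obtained by integrating Q_ν over the frequency grid with the trapezoidal rule, e.g. Q = np.trapz(Q_nu_grid, nu, axis=0).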
§ MONOCHROMATIC OPACITY
For the computation of the monochromatic opacity ϰ_ν we use a Python wrapper around the SYNSPEC code. SYNSPEC is a general spectrum synthesis code that has been used to solve the RTE in different astrophysical scenarios, including cool stars. The main purpose of SYNSPEC is to synthesise spectra for a given model atmosphere, but it can also be used to compute lookup tables of monochromatic opacities for a given grid of thermodynamic quantities. The code solves the equation of state (EOS) and uses the number densities to compute the continuum and line opacities.
The EOS from SYNSPEC is fully consistent with the results of our own EOS solver (Vitas et al., in preparation), which is used to produce the EOS tables for the 3D simulations of stellar atmospheres with the code <cit.>.
The EOS in SYNSPEC is solved for a specified grid of temperature T and mass density ρ. The number of nuclei per species (fixed by the abundances) and the charge are conserved.
The code solves the chemical equilibrium equations to determine the populations for 38 atomic species. For temperatures lower than a certain value (in this work chosen to be 8000 K), neutral atoms and singly ionised atoms are included in the EOS (not taking into account higher ionisations) and the code also considers molecular formation (including 503 molecular species).
For higher temperatures, molecules are not included in the EOS, and higher ionisations are computed for the atoms.
For more details on the solution of the EOS in SYNSPEC (and on the general use of the code), we refer the reader to section 2.6 of <cit.> and to <cit.>.
Once the EOS is solved, the opacity is computed in the same (T,ρ) grid, using the atomic and molecular data from several sources (see App. <ref>). The data used by the code can be customised by the user by changing, for example, the line lists for the bound-bound transitions. The control over opacity contributors is important for the cooler stars, where omission of certain molecules (e.g. TiO and VO) or incomplete line lists may strongly affect both detailed spectral synthesis and the bulk opacities used for modelling.
For the ingredients listed in App. <ref>, we computed a lookup table using SYNSPEC, with a microturbulence velocity of 2 km s^-1 and for the solar abundances from <cit.>. These solar abundances are outdated, but preferred in our study since they were used in the computation of the reference ODFs we will be comparing to. The T-axis of the grid has 60 values in the range T ∈ [1995,125900] K, with a constant step Δlog T ≃ 0.03 (in the present text, log≡log_10).
The ρ-axis has 35 values in the range ρ∈ [10^-13, 3.2 × 10^-3] g cm^-3 with Δlogρ≃ 0.3. The table is computed for wavelengths in the range λ∈ [20, 9.5× 10^4] nm, with Δlogλ = 10^-6 (λ in nm). In total there are around 7.7×10^9 data points in the table, stored in an HDF5 file of 230 GB. Another table is computed for the same wavelength grid but with finer temperature and density axes (same number of points, but reduced ranges, T ∈ [1995,19900] K and ρ∈ [10^-9, 3.2 × 10^-3] g cm^-3). This table is more suitable for the M2V star, thus reducing the interpolation errors in the RTE solver.
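For reference, interpolating one wavelength slice of such a table onto the (T, ρ) stratification of a model atmosphere can be sketched as below. The interpolation in log–log space and the use of scipy's RegularGridInterpolator are choices of this illustration, not a description of the actual pipeline.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def opacity_on_model(log_t_axis, log_rho_axis, kappa_t_rho, t_model, rho_model):
    """Interpolate a tabulated opacity slice kappa(T, rho) onto a model stratification."""
    interp = RegularGridInterpolator((log_t_axis, log_rho_axis), np.log10(kappa_t_rho))
    points = np.column_stack((np.log10(t_model), np.log10(rho_model)))
    return 10.0 ** interp(points)            # opacity in cm^2 g^-1 along the model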
The total opacity (continuum and lines) in the wavelength range from 100 nm to 2500 nm is shown in Fig. <ref>. It is interpolated for the temperatures and mass densities of the four 1D stellar models. The opacity clearly changes with different heights in the stellar models and with spectral type.
To check where the radiation at each wavelength contributes to the emergent radiation, we mark the geometrical heights at which τ_ν=1 (thick line). This curve includes the contribution from both the line and the continuum opacity. Since the wavelength range is wide, we divided it into sub-intervals of 2.4 nm and computed the average height in each of them. Similarly, we overplot the formation heights of the continuum (thin line), at which τ^cont_ν=1.
The ionisation edges and the typical shape of the bound-bound and bound-free H^- opacity coefficient are visible for the τ^cont_ν=1 curve.
In the UV the line forest elevates the height of the τ_λ=1 curve. The strong spectral lines, especially the hydrogen series, are clearly visible as well as their relative decrease in strength with decreasing effective temperature. In the M star, the τ_λ=1 curve is shaped by millions of molecular transitions.
Following the method from Sect. <ref>, we solve the RTE to obtain Q_ν.
Once Q_ν is computed, we can integrate it in frequency using the trapezoidal rule:
Q = ∫_ν Q_ν dν≡1/2∑_i (Q_ν_i + Q_ν_i+1) Δν_i.
The computed Q_ν is shown in Fig. <ref>, for the wavelength range and set of stellar models as in Fig. <ref>. The radiative energy exchange happens mostly around the stellar surface. For decreasing effective temperature, the absolute value of Q_ν is large for a wider wavelength range and the significance of the contribution to Q_ν shifts from the visible for the F star towards the IR for the M star. Figure <ref> also shows how the relative magnitude and area covered by significant Q_ν for the heating in the atmosphere (Q_ν>0) and the cooling (Q_ν<0) changes for the different spectral types.
For the F star the maximum heating is around an order of magnitude smaller than the magnitude of the most intense cooling, and
peaks in the near-UV and blue wavelengths. For the M star, the heating is comparable to the cooling in magnitude, and
ranges from the visible to the near-IR. As in the opacity plot (Fig. <ref>), the individual contributions to the line opacity from strong transitions are obvious for the hotter stars in the sample. In contrast, they almost disappear for the M star, while small structures due to the presence of the molecules become more evident.
Figure <ref> provides a qualitative overview of the monochromatic radiative energy exchange. One must keep in mind that the computations presented here are done in the LTE approximation which is fairly good for most of the photospheric lines, but fails badly for lines that contribute to radiative losses in the chromosphere. However, the method of precomputed opacities is feasible only in the LTE approximation when there is unique mapping between the pair thermodynamic quantities and the computed total opacity. Our primary focus is therefore on the stellar surface (i.e. the top of the convection zone and the lower photosphere), from where the bulk of the radiation escapes the star. This is also the reason why we present the results for the bolometric Q (erg cm^-3 s^-1) and not the ratio Q/ρ (erg g^-1 s^-1), which would put more emphasis on the radiation exchange at lower densities.
§ OPACITY DISTRIBUTION FUNCTIONS
§.§ Construction of the opacity distribution function
The ODF method <cit.> was invented to allow the accurate calculation of the radiative losses due to the millions of opacity contributors in a computationally efficient way. In the 1960s the solution for the RTE was prohibitive for more than a few wavelengths in 1D model atmospheres. Despite the exponential growth in computing power since then, the same problem of efficiently and accurately accounting for the millions spectral lines is still present today when the RTE is solved in 3D time-dependent simulations.
An ODF is constructed
to replace detailed monochromatic opacities by their statistical means in wavelength intervals,
following a procedure that preserves the accuracy of the computed bolometric radiative quantities while heavily reducing the amount of computation
<cit.>. First, one divides the full wavelength range into steps (an example for one step, defined by λ∈ [λ_2, λ_3], is shown at the left panel of Fig. <ref>). Then, all the opacities within each of the steps are sorted by increasing magnitude (see the middle panel of Fig. <ref>).
At this point the direct mapping between the monochromatic opacity values and their wavelengths is lost.
To discretise this monotonic distribution within each step, a number of substeps are defined specifying a set of dimensionless weights ω_j (∑_j ω_j=1). The length of a substep Δλ_i,j is calculated in terms of the length of the step Δλ_i and the weight ω_j, so that Δλ_i,j = ω_j Δλ_i[The length Δλ_i,j is not an interval in the wavelength scale, but it counts a certain number of points in the distribution.].
Finally, the opacities are averaged computing the arithmetic mean within each substep (see the right panel of Fig. <ref>).
This procedure can be performed for any set of monochromatic opacities computed for a pair of thermodynamic quantities. One of them is usually the temperature T and the other is either the density ρ (as in [ can also work with a grid of electron density or pressure.]) or the gas pressure P <cit.>. <cit.> created ODF tables[These can be found in <http://kurucz.harvard.edu/opacities.html>.
The particular table from referred to in the present paper is in the section `Old Kurucz ODF files', computed for the solar abundances from <cit.> and a turbulent velocity of 2 km s^-1. The corresponding monochromatic opacities are not available.] in the described way for specific grids of (T, P). We apply the same strategy to the monochromatic opacities computed using in a (T, ρ) grid, for the two monochromatic opacity tables described (see Sect. <ref>).
The wavelength range of our computation and derived ODF is shorter than the range covered by the ATLAS ODF, λ∈ [9,10^7] nm. When the ATLAS table is used to compute the bolometric RT quantities in the different model atmospheres, there is no difference between using it over its entire λ range and in the reduced range of our computations. This is not surprising since the emission of cool stars peaks in spectral regions far from the extremes that we left out from the full wavelength interval of the tables. For our ODF calculation, the wavelength range is divided into 291 steps, which have exactly the same locations and the same 12 weights for the substeps as in ATLAS.
The length of the steps is approximately proportional to the wavelength <cit.>, and the substeps are constructed using the weights ω_j = { 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.05, 2/60, 1/60 }. This kind of non-uniform weighting is taken from the ATLAS code and has been demonstrated <cit.> to perform particularly well in the visible and infrared for the solar case, being less accurate than uniform weighting in the UV (which is only important for computing detailed spectra). Our ODF contains the total (continuum and lines) opacity in cm^2 g^-1, just as is the case for ATLAS.
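The construction of the ODF within a single wavelength step can be sketched in a few lines of Python (a minimal sketch only; the rounding of the substep boundaries to integer point counts is an implementation choice of this illustration):

import numpy as np

# Substep weights used in this work, summing to 1 (12 substeps)
WEIGHTS = np.array([0.1] * 9 + [0.05, 2 / 60, 1 / 60])

def odf_step(kappa_mono, weights=WEIGHTS):
    """ODF values of one wavelength step from its monochromatic opacities."""
    k = np.sort(kappa_mono)                       # wavelength information is discarded here
    edges = np.rint(np.cumsum(np.concatenate(([0.0], weights))) * k.size).astype(int)
    return np.array([k[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

# Example: a synthetic step with a continuum plus a lognormal "line forest"
rng = np.random.default_rng(0)
kappa = 0.1 + rng.lognormal(mean=0.0, sigma=2.0, size=5000)
print(odf_step(kappa))    # 12 substep opacities, monotonically increasing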
The tables from <cit.> have been widely used in the context of MHD simulations of solar and stellar atmospheres <cit.>. The atomic and molecular line lists used to compute these tables <cit.> are sufficiently complete and accurate to make these simulations closely resemble various observational diagnostics. However, apart from the molecules that are included (H_2, HD, MgH, NH, CH, SiH, OH, CH, C_2, CN, CO, SiO), many that are dominant opacity sources in the spectra of M stars (e.g. VO and TiO) are missing. SYNSPEC, however, allows for a more complete set of molecules (see App. <ref> and Fig. <ref>) and includes an up-to-date collection of atomic and molecular data (partition functions and line lists). However, to evaluate the effect of the updated data, one has to compare the two data sets carefully.
§.§ Comparison of the opacity distribution functions from ATLAS and SYNSPEC
While there is an overall good match between the two data sets, there are also significant differences. To better understand these differences, we compare the opacity of each step and substep of the ODFs interpolated to the values of T and ρ along the four 1D model atmospheres. Four examples representing typical differences are shown in App. <ref>, in Figs. <ref>-<ref> (where the black lines represent the opacity from one data set along the models, and the coloured lines on top of them the opacity from the other).
In parallel, we also compare the bolometric Q rate computed using both ODFs. We apply the method presented in Sect. <ref> to solve the RTE and compute Q_i,j for every step i and substep j; we then integrate in frequency following the ODF formalism:
Q^ODF = ∑_i Q_i≡∑_i Δλ_i ∑_j Q_i,jω_j,
where Q_i is the rate integrated in wavelength in the ith step.
Comparing the values of Q computed from different data sets is not straightforward. An example of Q computed for the model of the M star using two opacity sets and their absolute difference is shown in Fig. <ref>. In the figure we identify the typical negative feature that corresponds to the strong cooling at the bottom of the photosphere and the relatively smaller positive one that shows the radiative heating (blanketing effect). Higher up and deeper down in the atmosphere, the values of Q can be several orders of magnitude smaller. While the radiative losses at these heights may still be significant, we focus here on the dominant components around the optical depth unity. Wherever the values of Q are small, the relative difference computed between them may be exaggerated with respect to the importance of these differences from the energy balance in the atmosphere. Moreover, as the sign of Q flips in the region of interest (the interval of heights around τ = 1) and the two Q values computed from and do not necessary flip the sign at exactly the same height, the relative difference around that height might also be misleadingly high. Therefore, instead of computing the relative differences in the entire domain, we focus on the two dominant features and measure the deviation for the cooling and heating part separately.
For each of them we compare the area A of the difference of Q computed for each of the ODF data sets normalised to the area of one of them:
χ_C = A ( | Q^(1) - Q^(2) | )_C/A ( | Q^(1) | )_C,
χ_H = A ( | Q^(1) - Q^(2) | )_H/A ( | Q^(1) | )_H,
where Q^(1) and Q^(2) denote the rates computed with the two ODF data sets, the first serving as the reference. In Sect. <ref> the same measures are used to quantify the discrepancy between the ODF and OB results. The areas are defined as A ( f(z) )_C = ∫_z_b^z_C → H f(z) dz and A ( f(z) )_H = ∫_z_C → H^z_t f(z) dz. The height at z_C → H is the height where Q changes sign for the first time above the cooling component; z_b and z_t are, respectively, the highest point in the atmosphere under the surface (z<0 in the plots) and the lowest point over the surface (z>0) where | Q |< 2 × 10^-4| min( Q ) | (see Fig. <ref>).
A ( Q )_H < 1% A ( Q )_C, χ_H is not computed (these cases are marked as χ_H= - - - % in the figures).
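A simplified Python sketch of these deviation measures is given below; the determination of z_b, z_t and of the sign change is reduced to a threshold mask and a first-sign-change search, so it is only an illustration of the definitions above:

import numpy as np

def chi_measures(z, Q_ref, Q_test):
    """Relative deviations of Q_test from Q_ref for the cooling and heating features."""
    i_cool = np.argmin(Q_ref)                           # centre of the cooling feature
    above = np.nonzero((np.arange(z.size) > i_cool) & (Q_ref > 0.0))[0]
    i_ch = above[0] if above.size else z.size - 1       # first sign change above the cooling
    keep = np.abs(Q_ref) > 2e-4 * np.abs(Q_ref.min())   # region of interest around the surface
    cool = keep & (np.arange(z.size) <= i_ch)
    heat = keep & (np.arange(z.size) > i_ch)
    diff = np.abs(Q_ref - Q_test)
    area_c = np.trapz(np.abs(Q_ref)[cool], z[cool])
    area_h = np.trapz(np.abs(Q_ref)[heat], z[heat])
    chi_c = np.trapz(diff[cool], z[cool]) / area_c
    chi_h = np.trapz(diff[heat], z[heat]) / area_h if area_h > 0.01 * area_c else np.nan
    return chi_c, chi_h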
The radiative energy exchange terms Q_i computed from the four models using the same steps as in Figs. <ref>-<ref> are shown in App. <ref>, in Figs. <ref>-<ref>. The corresponding values of the deviation measures are indicated in each panel.
When comparing the opacity from the ODF of the two data sets, we see a general match in behaviour and values in most of the visible and infrared intervals for the F, G, and K stars.
An example for a step with a good match (λ∈ [590, 600] nm) is shown in Fig. <ref>. The Q_i values computed in the same step closely overlap for these stars with all χ_C and χ_H between 1.4% and 3.1% (see, for example, Fig. <ref>).
For the near-UV, visible, and IR, the steps that include strong lines show significant differences in the opacity. Some examples are the steps that include hydrogen Hα
(Fig. <ref>) or NaI D1 and D2 (Fig. <ref>). Similar discrepancies are found in the steps containing other strong lines like the CaII H and K lines, the G band, HI β, MgI b1, the CaII triplet, and the most intense lines from the Paschen, Brackett, and Pfund series. In the case of all the hydrogen series the opacity from is higher than the one from SYNSPEC in the deeper part of the atmospheres, but this reverses in the upper part (see Fig. <ref>). For these hydrogen lines, such opacity behaviour is present in all the substeps, although the reversal occurs deeper in the less opaque substeps. We notice a similar tendency in the G band. Another trend can be seen for the steps containing CaII lines, MgI b1 and NaI D1 and D2, for which the opacity from SYNSPEC is higher than the one from ATLAS at all the heights of the atmospheres for the most opaque substeps (see Fig. <ref>). The number of affected substeps varies between the 2 and 4 most opaque, depending on which intense line falls within the step. The mismatch is affected by the number of strong lines contained in the interval, the magnitude of their opacity with respect to the continuum, and their width relative to the length of the interval.
In the less opaque substeps, the two data sets give very close results.
In the steps that contain strong spectral features, the radiative energy exchange rates generally show χ_C,χ_H≲ 5-10 % for the F, G, and K stars (see, for example, Figs. <ref>,<ref>). An exception happens for the G star for the step containing hydrogen Hα
where χ_H≃ 25 % (see Fig. <ref>). However, in the latter example, the area of the heating feature is only 0.02 times the area of the cooling, making this discrepancy less important.
The differences between ATLAS and SYNSPEC for the strong atomic lines could be, for example, due to the wavelength resolution of the monochromatic opacity tables or the broadening parameters used in the line synthesis (e.g. van der Waals broadening or damping constants).
In the case of the hydrogen lines, the difference is explained by the treatment of the Stark broadening. For this project, we used approximate Stark broadening profiles after <cit.>, while in ATLAS the tables from <cit.> are used.
Without having access to the monochromatic opacities from ATLAS, it is difficult to pinpoint the exact reason for the discrepancies for the rest of the species.
Although both codes are open source, their complexity makes it very difficult to find the exact source of differences by comparing the actual source code.
Apart from the spectral features discussed above, the comparison of the two opacity sets in the visible and IR for the F, G, and K stars reveals an excellent agreement. At the same time, there is a notable mismatch for the opacity in the UV between the two data sets (see an example in Fig. <ref>). These discrepancies are also evident in the corresponding values of Q_i (see an example in Fig. <ref>), for which we find differences of around χ_C, χ_H≃ 5-40% for λ≳ 190 nm (as in the example from Fig. <ref>), and around χ_C, χ_H≃ 30-80% for λ < 190 nm (in other steps in the ultraviolet).
The differences between the two data sets in the UV are likely due to the use of photoionisation cross-sections from the Opacity Project [<http://cdsweb.u-strasbg.fr/topbase/publi.html>] and the Iron Project <cit.> in SYNSPEC <cit.>, while in the table from 1993 the cross-sections come mainly from <cit.> and <cit.>.
With all the similarities found in the visible and infrared, and the nearly perfect match we see in the bolometric Q for the F, G, and K stars (Fig. <ref>), it may be concluded that both ODF tables are fairly similar and reproduce a close solution of the RTE.
However, both the comparison of the opacities and of the computed values of Q from ATLAS and SYNSPEC reveal much larger discrepancies for the M star. The opacities from SYNSPEC are, in general, larger than the ones from ATLAS for the points at the top of the M atmosphere. The corresponding values of χ_C and χ_H are also larger: for most of the steps in the visible and NIR they are in the range ≃10–35% (≃40–60% in the worst cases).
As previously discussed, the discrepancies between the two tables for the later-type star are expected owing to the different selection of molecules and line lists. The discrepancy is prominent in the heating component of the bolometric Q (Fig. <ref>), for which χ_H≃ 77%, in contrast to the values χ_C < 3 % and χ_H < 10 % found for the F, G, and K stars. To demonstrate the effect of certain molecules on the discrepancies between the data sets, we recompute the table and Q, excluding TiO and VO from the line lists for the M star (blue curve in the bottom right panel of Fig. <ref>). In this case, the value of χ_H is considerably reduced, but it is still relatively high (≃29%). Even if the selection of molecules in the two codes is identical, differences between the opacities are expected owing to different line lists and other atomic and molecular data. An example is the use in SYNSPEC of more up-to-date data for the opacities, in particular for H_2O and TiO from EXOMOL [<http://www.exomol.com/data>], as well as different partition functions and oscillator strengths.
§ OPACITY BINNING
A further approximation of the opacity computation is the OB method initially introduced by <cit.>. Its implementation in the MURaM and CO^5BOLD
codes is described by <cit.> and <cit.> respectively. In this method the opacities are sorted taking into account their distribution with the discretised optical depth τ^ref of a representative one-dimensional model atmosphere. We use the Rosseland optical depth as reference.
An opacity bin l is defined by selecting a pair of lower and upper τ-separators (τ^ref,l_0, τ^ref,l_1).
In a variation of the method <cit.> the wavelength dependence is partially preserved by grouping opacities by both optical depth and wavelength. In that case each bin is defined by two pairs of separators, one in optical depth (τ-separators) and one in wavelength (λ-separators).
We start from the ODF
constructed as it is described in Sect. <ref>.
The optical depth is computed for each step i and substep j of the ODF for all geometrical heights z in a model atmosphere as
τ_i,j (z) = τ_i,j (z_top) + ∫_z_top^z ϰ_i,j (z) ρ d z ,
where ρ refers to the mass density in the model atmosphere (and not the density in the grid of the opacity table) and the height dependence in ϰ_i,j (z) denotes that the opacity from the ODF is interpolated into the temperatures and densities from the model atmosphere.
The opacities are then grouped so that opacity of the ODF step i and substep j belong to bin l if τ^ref,l_0 ≤τ^ref (z_i,j) < τ^ref,l_1, where z_i,j is the height at which τ_i,j=1. Additionally, when the λ-separators are used, every ODF value is sorted by wavelength so that the central wavelength of an ODF step falls between a pair of λ-separators. Figure <ref> shows examples of such grouping of the ODF values into five bins for our four stellar models. There are two λ-separators approximately dividing the wavelength axis into ultraviolet, visible, and infrared and there is one τ-separator for the UV (logτ = -3) and a joint one for the visible and IR (logτ = -1). The corresponding 2D histograms of the ODF points are shown in Fig. <ref>. The minimal optical depth in the reference models vary with the stellar type and wavelength. We assign the top geometrical height of the atmosphere z_i,j=z_top to all ODF values with τ_i,j(z_top) > 1.
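The assignment of ODF elements to bins can be sketched as follows. This is an illustration only: the trapezoidal integration of τ from the top of the model and the treatment of elements that are optically thick already at the top follow the conventions described above, while the helper names are ours.

import numpy as np

def tau_of_height(z, rho, kappa):
    """Optical depth of one ODF element on the height grid (z increasing upwards)."""
    tau = np.zeros_like(z)
    for i in range(z.size - 2, -1, -1):      # integrate downwards from the top
        tau[i] = tau[i + 1] + 0.5 * (kappa[i] * rho[i] + kappa[i + 1] * rho[i + 1]) * (z[i + 1] - z[i])
    return tau                               # a small non-zero tau at the top is neglected here

def bin_index(z, tau_ref, tau_elem, tau_seps, lam, lam_seps):
    """Bin number of one ODF element from its tau = 1 height and its central wavelength."""
    if tau_elem[-1] >= 1.0:                  # optically thick already at the top of the model
        i1 = z.size - 1
    else:
        i1 = np.argmin(np.abs(tau_elem - 1.0))
    itau = np.searchsorted(tau_seps, tau_ref[i1])    # tau group from the reference (Rosseland) scale
    ilam = np.searchsorted(lam_seps, lam)            # optional wavelength group
    return ilam * (len(tau_seps) + 1) + itau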
After the opacities of the ODF have been distributed into bins, for every bin l (that includes the ODF steps i=i(l) and substeps j=j(i,l) contained in the bin) and for every T,ρ of the grid of the ODF we compute the Planck function,
B_l = ∑_i(l)Δλ_i B_i,
its derivative,
(∂ B/∂ T)_l = ∑_i(l)Δλ_i ∂ B_i/∂ T,
the Planck mean opacity,
ϰ^Pl_l = ( ∑_i(l)Δλ_i B_i∑_j(i,l)ω_j ϰ_i,j) / B_l,
and the Rosseland mean opacity,
ϰ^Ro_l = (∂ B/∂ T)_l / ( ∑_i(l)Δλ_i (∂ B_i/∂ T) ∑_j(i,l)ω_j/ϰ_i,j),
where Δλ_i, ω_j, and ϰ_i,j are, respectively, the length of the wavelength step, the weight of the substep, and the opacity of the ith step and jth substep of the ODF. B_i is the Plank function in the middle of the wavelength step for each temperature T in the (T, ρ) grid used to compute the ODF.
Finally, following <cit.>, ϰ^Pl_l and ϰ^Ro_l are combined to get a mean value of the opacity for each bin:
ϰ_l = ( 2^-τ_l /τ_thr) ϰ^Pl_l + ( 1-2^-τ_l /τ_thr) ϰ^Ro_l,
where the transition between thin and thick optical regimes is at the threshold optical depth τ_thr=0.35 and τ_l is computed
using pressure p in the ODF grid and a stellar surface gravity g, approximately as τ_l = ϰ^Ro_l p/g.
With this definition, the opacity from each bin converges to the grey approximation for large enough optical depths (see the discussion about Fig. <ref> in App. <ref>).
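A compact Python sketch of the bin-mean opacity at a single (T, ρ) grid point is given below; the bookkeeping of the bin members (a dictionary mapping ODF steps to their member substeps) is schematic and not a description of the actual implementation.

def bin_mean_opacity(dlam, B_i, dBdT_i, w_j, kappa_ij, members, p, g, tau_thr=0.35):
    """Combined Planck/Rosseland mean opacity of one bin at one (T, rho) grid point.

    members[i] lists the substeps j of ODF step i that belong to the bin;
    kappa_ij[i][j] are the corresponding ODF opacities.
    """
    B_l = sum(dlam[i] * B_i[i] for i in members)
    dBdT_l = sum(dlam[i] * dBdT_i[i] for i in members)
    k_pl = sum(dlam[i] * B_i[i] * sum(w_j[j] * kappa_ij[i][j] for j in members[i])
               for i in members) / B_l
    k_ro = dBdT_l / sum(dlam[i] * dBdT_i[i] * sum(w_j[j] / kappa_ij[i][j] for j in members[i])
                        for i in members)
    tau_l = k_ro * p / g                             # approximate optical depth of the grid point
    f = 2.0 ** (-tau_l / tau_thr)                    # ->1 optically thin, ->0 optically thick
    return f * k_pl + (1.0 - f) * k_ro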
As in the ODF case, we compute the bolometric Q rate, Q^OB, by solving the RTE for each bin and summing over the bins:
Q^OB = ∑_l Q_l,
where Q_l refers to partial rate of the bin l computed using Eq. <ref>.
We apply the described procedure to the ODF computed with monochromatic opacities presented in Sect. <ref>.
There are several important free parameters in the procedure: the model atmosphere itself, the number of bins in τ and λ, and the locations of separators between the bins.
There is no obviously intuitive choice for these parameters.
We designed numerical experiments to test various alternatives.
As a deviation
measure of the radiative energy exchange rate Q^(2) (computed using the binned opacity) with respect to the reference rate Q^(1) (computed with ODF), <cit.> plotted the height-dependent absolute difference of Q^(1) and Q^(2)
divided by the density stratification ρ of the atmosphere. The scaling with density is important to compare the radiative rates with other energy sources and sinks in the simulations. Alternatively, <cit.> opted for computing max|Q^(1) - Q^(2)|/ max|Q^(1)|, to reduce the deviation measure to one value for a given choice of parameters. The latter choice is obviously more practical for
trial-and-error optimisation of the free parameters. However, in this approach the deviation is biased towards the dominant cooling component of Q.
In our experiments, we adopt a compromise by using the deviation measures introduced in Eqs. <ref> and <ref>. The reference values of the Q rate are those computed using the full ODF (Q^(1)), and the values that are tested (Q^(2)) are the ones calculated using the binned opacities. This choice of deviation measures allows us to automate the process of optimising the free parameters, while preserving separate information on both the cooling and the heating components of Q.
As this is a multi-parameter study we break the problem down by first studying the influence of the location of the τ-separators for a fixed number of bins (4) and with no bins in the wavelength direction. We then allow the number of τ bins to change. Finally, we test several cases including the λ bins.
§.§ τ-bin location
To understand the dependence of the Q rate on the location of the τ-separators for the four stars, we first fix the number of bins to 4 following <cit.> and <cit.>.
Similarly to the experiment described in section 3.4 of <cit.>, we try all possible combinations of the τ-separators from a discretised grid of optical depths for each of the stars. For the F3V star this gives us 9880 combinations selected from a grid of 40 equidistant values in logτ^ref∈ [-9.0, 0.5]; for K0V and G2V, 4060 combinations (grid of 30 equidistant values in logτ^ref∈ [-6.5, 0.5]); for M2V, 4060 combinations (grid of 30 equidistant values in logτ^ref∈ [-4.5, 0.5]). The grid is customised for each star to cover properly the relevant range in optical depths (see the vertical axis of Fig. <ref>). The τ-separator with maximum optical depth is labelled as τ_0. The indices of the separators increase with height in the atmosphere (τ_i > τ_i+1).
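The brute-force search itself can be sketched as follows; binned_chi is a hypothetical stand-in for the pipeline that performs the opacity binning for a given set of separators and returns the χ measures against the full-ODF reference.

import itertools
import numpy as np

def search_separators(log_tau_grid, binned_chi):
    """Evaluate the deviation measures for every combination of three tau-separators.

    binned_chi(seps) -> (chi_C, chi_H) for the 4 bins defined by `seps`.
    """
    results = [(seps,) + tuple(binned_chi(seps))
               for seps in itertools.combinations(log_tau_grid, 3)]
    return sorted(results, key=lambda r: (r[1], r[2]))   # smallest deviations first

# G2V/K0V grid: 30 equidistant values in log tau -> C(30, 3) = 4060 combinations
log_tau_grid = np.linspace(-6.5, 0.5, 30)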
The statistics for each star in our sample show different trends. In general, combinations of separators that give simultaneously small deviations χ_C and χ_H are easy to find for the G and K stars, while they become less frequent in the case of the F star and quite rare for the M star. The number of cases leading to deviations below a certain threshold is given in Tables <ref> and <ref> separately for the cooling and the heating, and in Table <ref> when both components are constrained together.
From Tables <ref> and <ref> it is clear that the cooling part is much easier to replicate accurately than the heating one. Almost any combination gives deviations smaller than 20% in the cooling for any of the stars, while only about half of them reduce χ_H below 20% for the F, G, and K stars and only a few for the M star.
Table <ref> shows how many cases we find with both χ_C and χ_H smaller than certain thresholds indicated individually for each of the stars. For the M2V star the choice of 4 τ bins is obviously insufficient. There are only a few combinations (less than 1%) that lead to χ_C<10% and χ_H<20% and there is not a single one that reduces χ_C and χ_H below 5% and 20% simultaneously.
How do the deviation measures vary with the location of the three separators? An illustrative example for the G2V star is shown in Fig. <ref>, which shows the dependence of χ_C and χ_H on the location of the middle separator τ_1 when the other two separators are fixed. To visualise the results, in each column of the figure the deepest separator is fixed at one of six values (logτ_0 = 0.5, 0, -0.5, -1.2, -1.9, -2.9). Each of the curves correspond to a different fixed value of the highest separator τ_2. The position of the highest separator is indicated by the colour, the black curve corresponding to the smallest value of τ_2 and the red curve to the largest one.
The figure shows that χ_C and χ_H vary significantly with the location of τ_1 if the location of the deepest separator τ_0 is deeper than some critical depth (left-hand columns of Fig. <ref>). That critical location of τ_0 is located deeper for χ_C than for χ_H (logτ_0 ≃ -0.5 for χ_C and logτ_0 ≃ -1.2 for χ_H). For χ_C especially, if τ_0 ≃ -0.5 the curves flatten out, meaning that the results are essentially insensitive to the location of the other two nodes.
If τ_0 is deeper than these critical values, the deviation measures are insensitive to the location of the other two separators only if τ_2 is relatively close to τ_0 (the red curve in the left-hand column of Fig. <ref>).
When τ_0 is pushed higher up, the deviation measures are insensitive to the location of the remaining separators and their values increase (compare the two right-hand columns of Fig. <ref>).
When logτ_0 ≥ -0.5, the curves of χ_C and χ_H in Fig. <ref> become increasingly different in shape. The cooling deviation shows one minimum around logτ_1 ≈ -2.5, while the heating deviation has two minima: one for locations of τ_1 close to τ_0 and another that moves higher up with increasing height of τ_0. When logτ_0 = -0.7, this second minimum is at logτ_1 ≈ -5, and when logτ_0 ≥ -0.5, it is at logτ_1 ≈ -2.5.
Since the location of one of the minima of χ_H coincides with the minimum of χ_C, both can be minimised using the same set of separators even when τ_0 is as deep as logτ_0 = 0.5.
When the location of the separator τ_0 is set above the critical values, the sensitivity of both χ_C and χ_H to the location of τ_1 and τ_2 mostly disappears (see the sequence from the left to the right panel of Fig. <ref>). This happens for a higher up location of τ_0 for χ_H than for χ_C.
Our analysis suggests that, in the case of the G2V star, there are two strategies for placing the separators so that χ_C and χ_H remain below 4% and 15%, respectively. The first is to set logτ_0 around the continuum formation height, [-0.5, 0], and logτ_1∈[-3,-2.5]. The second is to place logτ_0 so that the deepest bin includes most of the weak and intermediate photospheric lines (logτ_0∈[-2.4,-1.9]) and logτ_1 even higher, in [-5,-3.5]. In both cases the location of logτ_2 can be anywhere above logτ_1. The former case (see the 2nd and 3rd columns) contains the combinations that produce the minimal χ values in our experiment, although the curves are not entirely flat, so the optimal solution is sensitive to the model atmosphere and to the ODF data. The latter case still guarantees small deviations and appears to be more robust, since χ_C and χ_H then essentially depend only on the location of the deepest bin. Since our model atmosphere does not have a chromospheric temperature rise, however, the strategy with all separators positioned relatively high may be limited to this type of model.
For the spectral types F3V and K0V the results are similar to those for G2V. For τ_0 located deeper than a certain critical value, χ_C and χ_H vary with τ_1 similarly to the G star, showing two minima for χ_H, one of which coincides with the single minimum of χ_C. There is little variation of the deviation curves with τ_2, except when it is close to τ_0. For the F3V star, the optimal combinations that produce χ_C≤ 8% and χ_H≤ 15% are (a) logτ_0∈[-0.2, 0.5] and logτ_1∈[-2.5, -2] and (b) logτ_0∈[-2.2,-0.7] and logτ_1∈[-5.5, -4.7], with logτ_2 again anywhere above logτ_1.
For the K0V star, we identified three variants that produce χ_C≤ 5% and χ_H≤ 15%: (a) logτ_0∈[0, 0.3] and logτ_1∈[-2, -1], (b) logτ_0∈[-0.5, -0.2] and logτ_1∈[-4, -3], and (c) logτ_0∈[-1.9, -1.7] and logτ_1∈[-5, -3], with logτ_2 < logτ_1.
While the results for the M2V star presented in Fig. <ref> show general similarity to the results for the other stars, there are two important differences.
First, for the deep choice of logτ_0 there is only one minimum of χ_H as a function of τ_1, and it no longer coincides with the minimum of χ_C but rather with its local maximum (cf. the two left columns in Fig. <ref>).
Secondly, once the τ_0 separator is placed sufficiently high in the atmosphere that the deviation measures become insensitive to changes in τ_1, χ_C flattens out as for the other stars, but its value quickly increases with the height of τ_0 (see the sequence from left to right in the figure), from χ_C≃ 6% when logτ_0 = 0.2 to χ_C≃ 15.5% when logτ_0 = -0.9. Higher up, the value of χ_C remains larger than 12%, while χ_H also starts to increase once it becomes flat (above logτ_0 = -1.2).
Therefore, for the M2V star it is not possible to minimise both deviation measures simultaneously with four bins.
The best combination of parameters that we found gives χ_C≤ 9% and χ_H≤ 27% for logτ_0∈[0.2, 0.5] and logτ_1∈[-1.1, -0.6], with logτ_2 < logτ_1.
Based on the analysis presented above, we selected optimal four-bin combinations as a reference for the other binning strategies described in the following sections: logτ = { -1.9, -5, -7 } for the F3V star; logτ = { 0, -2.5, -5 } for G2V; logτ = { -0.2, -3, -5 } for K0V; logτ = { 0.3, -0.8, -2.5 } for M2V.
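For later reference, these adopted separator sets can be collected in a simple lookup structure (a Python sketch; the values are simply those listed above):

# Reference 4-bin tau-separator locations, log(tau_ref), one entry per star.
REFERENCE_SEPARATORS = {
    "F3V": (-1.9, -5.0, -7.0),
    "G2V": ( 0.0, -2.5, -5.0),
    "K0V": (-0.2, -3.0, -5.0),
    "M2V": ( 0.3, -0.8, -2.5),
}
print(REFERENCE_SEPARATORS["G2V"])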
§.§ Number of τ-bins
To understand the dependence of the Q rate on the number of separators, we computed Q for the four stars for different numbers of separators (2, 3, 5, 10, 20, 30, 40, 50, and 100)[In the present and following sections, there could be combinations of bins where one bin might end up empty. In this case, when computing the RTE solution, the empty bin is ignored.] that are uniformly distributed in the relevant range for each of the stars: logτ^ref∈ [-9, 0.5] for F3V, [-6.5, 0.5] for K0V and G2V, and [-4.5, 0.5] for M2V. Again, the selected optical depth ranges ensure that all the stellar models are properly represented (see the vertical axis of Fig. <ref>).
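The uniformly spaced separators used in this experiment can be generated as in the following Python sketch (whether the endpoints of the interval are themselves used as separators is an implementation detail; excluding them, as done here, is our assumption):

import numpy as np

# log(tau_ref) ranges used for each star when distributing separators uniformly.
DEPTH_RANGE = {"F3V": (-9.0, 0.5), "G2V": (-6.5, 0.5),
               "K0V": (-6.5, 0.5), "M2V": (-4.5, 0.5)}

def uniform_separators(star, n_sep):
    """n_sep equidistant tau-separators inside the star's log(tau_ref) range
    (the interval endpoints themselves are excluded here)."""
    lo, hi = DEPTH_RANGE[star]
    return np.linspace(lo, hi, n_sep + 2)[1:-1]

for n in (2, 3, 5, 10, 20):
    print(n, uniform_separators("G2V", n))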
In Figure <ref> χ_C and χ_H are shown as a function of the number of bins.
For the F-, G-, and K-type stars, there is very little variation in χ_C. The trend is slightly different for each star: χ_C slowly decreases with an increasing number of bins for the K star, increases for G2V, and first increases and then decreases for F3V. In all three cases the value of χ_C saturates at around 20 bins.
The M star shows a significant improvement in χ_C when the number of bins increases from 4 to 21. For more bins it saturates at ≈5%.
Although there is more variation in χ_H than in χ_C, the overall trend is similar and, again, the results saturate for more than ≈20 bins.
<cit.> tested the difference between the Q rate computed with the binned opacity and with the ODF for an increasing number of bins (up to 60). Similarly to our results, they found that the solutions for Q converge towards a limiting value that is different from Q_ODF.
<cit.> investigated the performance of the OB and ODF methods in the context of shock-generated radiation during a simulated re-entry of the Apollo AS-501 vehicle into the Earth's atmosphere. Although the physics of the experiment and the implementation of the OB are different (e.g. Planckian mean for ϰ_l, ϰ-sorting instead of τ-sorting), they also found that the results of the radiation computation quickly converge with an increasing number of bins (reaching saturation at around 10 bins). However, note that their conclusions about the ODF solution are not directly translatable to the radiative transfer in stellar atmospheres owing to a different definition of the ODF.
Therefore, the saturation values of χ_C and χ_H are the minimal deviations that cannot be eliminated by increasing the number of the τ-separators. Depending on the stellar type, the minimal χ_C values span between 2% and 7%, and the minimal χ_H span between ≈10% and ≈30%. These minimal deviations are the smallest for the K and the largest for the F star.
However, it is possible to find certain combinations of the separators that minimise the deviations particularly well. For example, the χ_H value for the F3V star is reduced to only a few per cent when three separators are set to logτ = { -1.8, -4.3, -6.6 }.
This is consistent with locations that produce simultaneous minimum χ_C and χ_H deviations, as discussed in Sect. <ref>.
The deviations with the optimal combinations found using four bins for each of the stars in Sect. <ref> are shown as coloured plus markers in Fig. <ref>.
These deviations are either similar to or smaller than the saturation values obtained when the bins are distributed uniformly. We have also computed the Q rates using the Rosseland opacity mean applied to the whole spectrum (Eq. <ref>). The χ differences for this grey solution are shown as coloured circles in Fig. <ref>. Their values are larger than or similar to the saturation ones, but always larger than those of the four-bin solutions from Sect. <ref>.
We experimented with several other strategies for distributing separators automatically.
They all confirm the conclusion that careful distribution of the separators is more important than their number.
§.§ {τ,λ}-binning
The OB method using only τ-separators does not reproduce the exact solution in the limit of an infinite number of bins because the contributions from different parts of the spectrum, each with its own height variation, are mixed in any given bin <cit.>.
One way to improve the approximation is to preserve some of the wavelength dependence in the binning procedure by grouping the opacity data points with respect to their spectral regions prior to the τ-binning. Therefore, λ-separators between these regions are introduced in addition to the τ-separators. Such a bidimensional distribution of the opacities is non-uniform in the (τ, λ) parameter space: in every wavelength interval the τ-separators may be specified independently (both their number and location) to account for the height distribution of the opacities in that interval.
<cit.> introduced the {τ,λ}-binning, showing that splitting the continuum at the Balmer jump improved significantly the results for Vega, due to the systematically different behaviour of Q for wavelengths short-ward and long-ward of the jump.
<cit.> explored the possibility of splitting the least opaque bin into two subgroups for long and short wavelengths and concluded that the improvement is probably not worth the extra computational effort.
<cit.> also attempted to improve the OB procedure by partly accounting for the lost wavelength dependence. <cit.> treated the UV and the visible + IR opacities separately. Each group is sorted with respect to its own set of τ-separators. In addition, the least opaque bin of the visible + IR opacities is further split with a set of λ-separators.
We computed the Q values using a {τ,λ}-binning approach similar to <cit.> (see his figure 2.6) for different total number of bins, N = 6, 7, 8, 9, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, and 100. The distribution of the bins for the G2V star for N = 6, 7, 8 and 9 is shown in Fig. <ref>.
For any N, the spectrum is first divided into the short and long wavelength parts by placing one λ-separator at λ_0.
The value of λ_0 is 400 nm for the F and G stars (effectively separating UV from visible), 500 nm for the K star, and 600 nm for the M star.
The short-wavelength opacities are then grouped into three τ-bins, and the long-wavelength ones into two. The least opaque τ-bin of the long-wavelength opacities is then split by one or more additional λ-separators. For N=6 the λ-separator is placed at 1600 nm. For N>6, the λ-separators of the least opaque τ-bin are placed equidistantly in logλ in the range [λ_0, 9570] nm. The τ-separators in the short-wavelength region are customised for each stellar type: -2.6 and -5.8 for F3V, -1.8 and -4.2 for G2V, -1.8 and -4.0 for K0V, and -1.2 and -2.8 for M2V. The locations of the separators are chosen optimally to sample different features present for each of the stars when the ODF points are plotted in the τ-λ plane (cf. Figs. <ref> and <ref>).
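The bookkeeping of the λ-separators for N>6 can be sketched as follows (Python; the N=6 case with its single separator at 1600 nm is treated separately, and the exact treatment of the interval endpoints is our assumption):

import numpy as np

LAMBDA_SPLIT = {"F3V": 400.0, "G2V": 400.0, "K0V": 500.0, "M2V": 600.0}   # lambda_0 in nm
LAMBDA_MAX = 9570.0                                                       # nm

def lambda_separators(star, n_bins):
    """lambda-separators splitting the least opaque long-wavelength tau-bin.

    Of the n_bins total bins, 3 belong to the short-wavelength region and 1 to
    the more opaque long-wavelength tau-bin; the remaining n_bins - 4 sub-bins
    need n_bins - 5 separators, taken here as the interior points of a grid
    equidistant in log(lambda) between lambda_0 and 9570 nm.
    """
    n_sep = n_bins - 5
    grid = np.logspace(np.log10(LAMBDA_SPLIT[star]),
                       np.log10(LAMBDA_MAX), n_sep + 2)
    return grid[1:-1]

print(lambda_separators("G2V", 9))   # four separators for N = 9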
We computed the Q rate for every combination of {τ,λ}-bins. The corresponding deviations χ_C and χ_H are shown in Fig. <ref>. The behaviour of χ_C with the number of bins is generally similar to that found in Fig. <ref>. For all four stars the χ_C deviation decreases with an increasing number of bins until it saturates after around 10–20 bins. The saturation values of χ_C are lower than those found in Sect. <ref> (cf. Table <ref>). For fewer than 15 bins (Δlogλ > 0.1) χ_C remains nearly constant for G2V and K0V, but increases significantly for F3V and M2V. The results for M2V show two peaks, at N = 7 and 10, and a minimum between them at N = 8.
This strong variation in χ_C is a good illustration of how sensitive the Q rate is to the way the ODF data points are distributed among the bins.
Figure <ref> shows the distribution of the bins for N = 7 and N = 9 (2nd and 4th panels). Bins 1 to 5 are identical in both cases, while bins 6 and 7 of the N=7 case are each split into two bins for N=9. Bin 7 (for N=7) includes most of the visible spectrum; it coincides with the central part of the Planck function and thus dominates the radiative losses. The rate Q computed from that bin is compared to the total Q in the left-hand panel of Fig. <ref>. In the right-hand panel it is compared with the Q values computed from bins 8 and 9 (for N = 9) and with their sum. The individual Q profiles of bins 8 and 9 differ significantly in shape, amplitude, and location of the maximal cooling.
Because of the non-linearity of the RTE, their sum is not equal to the Q computed when the opacity of the two bins is merged into bin 7 for N=7.
In terms of our deviation measure χ_C, this leads to a drop from ≈14 % for N=7 to ≈3 % for N=9 (top panel of Fig. <ref>).
The finer the distribution of the visible opacities in wavelength, the more stable the results, which leads to the saturation of χ_C.
For the M2V star the χ_H saturation value is about the same as in Fig. <ref>, but for the F, G, and K stars χ_H saturates at significantly larger values owing to an excess of heating relative to the ODF solution.
In another experiment, we tested a variant of the previous method where the bins are distributed in a checkerboard pattern in the (τ, λ) plane. The number of τ-separators is varied between 1 and 16 and the number of λ-separators is fixed at 5 (λ = 381, 726, 1383, 2636, 5023 nm). The τ-separators for all the stars are distributed equidistantly in logτ^ref∈ [-3, 0.5]. The minimum number of bins in this experiment is 12, the maximum 102.
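The assignment of an opacity point to a bin in this checkerboard scheme reduces to two table look-ups, as in the following sketch (Python; the exact placement of the τ-separators inside the interval, here taken as the interior points of an equidistant grid, is our assumption):

import numpy as np

LAMBDA_SEPARATORS = np.array([381.0, 726.0, 1383.0, 2636.0, 5023.0])   # nm

def checkerboard_bin(log_tau, lam, n_tau_sep):
    """Bin index of an opacity point in the checkerboard (tau, lambda) scheme.

    n_tau_sep tau-separators inside log(tau_ref) in [-3, 0.5] together with the
    five fixed lambda-separators give (n_tau_sep + 1) * 6 bins in total.
    """
    tau_sep = np.linspace(-3.0, 0.5, n_tau_sep + 2)[1:-1]
    i_tau = int(np.searchsorted(tau_sep, log_tau))           # 0 .. n_tau_sep
    i_lam = int(np.searchsorted(LAMBDA_SEPARATORS, lam))     # 0 .. 5
    return i_tau * 6 + i_lam

print(checkerboard_bin(-1.2, 900.0, n_tau_sep=3))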
The results are shown in Fig. <ref>. For a small number of bins χ_H is larger than the deviation measures we obtain with the optimised solutions from Sect. <ref>. With 12 bins the values of χ_H lie between 20 and 30%, similar to those in Fig. <ref> with 11 bins distributed only in τ. With finer sampling in τ, the value of χ_H is quickly reduced below 10% and saturates at that level for all four stars.
From this analysis it follows that the introduction of λ-separators may reduce the minimum deviation of the OB in particular cases, when it is combined with carefully selected separators in τ. However, this improvement is relatively small in comparison with the basic method where the sorting is done only by optical depth (cf. Figs. <ref> and <ref>).
Table <ref> summarises the deviation measures χ_C and χ_H computed in the experiments described in this section.
The values shown for the experiments described in Sect. <ref> and <ref> correspond to the minimum number of bins at which the deviation measures approach their saturation values.
It is clear that the 4-bin approach with carefully tuned locations of the τ-separators (Sect. <ref>) performs similarly to or better than the other approaches for all four stars. The only exception is the M star, where taking the wavelength distribution into account simultaneously with the distribution in τ for all λ intervals is a clear advantage (Fig. <ref>).
The strategy proposed by <cit.> reproduces the cooling component better than other strategies, but it produces large errors in the heating component.
Finally, in Figure <ref> Q and Q/ρ are compared for the four stars and five opacity approaches (ODF, grey opacity, and three OB setups). The deeper part of the atmosphere where mainly cooling occurs is shown in the two left columns; the higher layers where the heating occurs are shown in the two right columns.
On the one hand, the cooling can be fitted using either Q or Q/ρ, since the shape of Q is practically unmodified by the relatively slowly changing density in the sub-surface layers.
On the other hand, the density scale height diminishes faster in higher layers, so dividing by the rapidly decreasing density enhances the differences seen in Q.
For the part of the atmosphere studied in the present paper (top of the convection zone and lower photosphere), Q and Q/ρ provide similar information regarding the quality of the computed Q^OB with respect to Q^ODF.
Some differences are expected when choosing Q/ρ, but the general behaviour described in this work should be the same. If one is interested in layers closer to the chromosphere and beyond, Q/ρ would be the quantity to optimise.
§ CONCLUSIONS
This work is based on detailed monochromatic opacity tables computed with the <cit.> code using complete up-to-date sets of atomic and molecular line lists.
The ODF is calculated using these opacities and compared with the ODF from the code <cit.>.
Although there are some systematic differences between the two data sets, a good match is found for the F3V, G2V, and K0V stellar spectral types when the radiative energy exchange rate Q is computed using each of the data sets.
For the M2V-type star the comparison of the two sets of ODFs reveals the importance of the molecules in the spectra of these stars. The lack of some significant opacity contributors, such as VO and TiO, in one of the line lists produces less heating in the radiative energy exchange rate than obtained with the other data set.
In Sect. <ref> the OB method <cit.> is applied to the ODF.
This method enabled rapid progress in realistic 3D MHD simulations of the near-surface convection as it reduced the number of required radiative transfer computations by several orders of magnitude while preserving the essence of the non-grey nature of the radiative transfer solution.
However, the method involves several free parameters that cannot be intuitively guessed.
We systematically tested several strategies for the opacity binning: positioning of the four bins (Sect. <ref>), the number of τ bins (Sect. <ref>) and simultaneously binning in optical depth and wavelength (Sect. <ref>).
None of the strategies tested converges to the ODF solution in the limit of a large number of bins, since the binning mixes parts of the spectrum with different height profiles of the opacity (App. <ref>).
It is possible to find combinations with a small number of bins that significantly reduce the deviations. However, these combinations are model-dependent, and the intervals in which the bins can be located are relatively narrow. Strategies that preserve the wavelength dependence through combined τ- and λ-binning may improve the deviation measures, but with a significant increase in computing overheads. For example, even in the most extreme case of the M star, reducing the already small χ values by a factor of 3 (see the bottom row of Table <ref>) requires an increase in the number of bins, and thus in the computing cost, by a factor of 4 or 5. In most applications such accuracy is not needed, as other approximations made in the RTE solution and in the MHD modelling contribute more to the total error budget.
The results presented in this study are limited to the solar chemical composition. In the stars with different metallicities both the atmospheric structure and the relative importance of the various opacity contributors will change. Therefore, our results should not be blindly applied to these stars. Nevertheless, the trends identified in our analysis may be used as guidelines for tuning the details of the OB procedure for the stars with non-solar metallicities.
An open question left in this work is the importance of the measured deviations χ_C and χ_H in the final structure of the atmosphere in the context of simulations. To answer this question, one may start by extending the study for the Sun in section 3.4 of <cit.> for the case of other stellar spectral types in time dependent simulations.
It should also be emphasised that the optimal binning approximation selected based on 1D models will not necessarily remain optimal when implemented in 3D simulations. However, tuning the binning strategy is computationally too expensive to be routinely done in 3D.
In the follow-up to this study we shall explore how different binning strategies perform in realistic 3D atmospheres simulated with the MANCHA code <cit.>, especially in the case of the M-type star.
This work was supported by the European Research Council through the Consolidator Grant ERC–2017–CoG–771310–PI2FA and by the Spanish Ministry of Science through the grant PID2021–127487NB–I00.
APG acknowledges support from the Agencia Estatal de Investigación (AEI) of the Ministerio de Ciencia, Innovación y Universidades (MCIU) and the European Social Fund (ESF) under grant with reference PRE2018–086567.
CAP is thankful for funding from the Spanish government through grants AYA2014–56359–P, AYA2017–86389–P and PID2020–117493GB–100.
We thank the referee for the detailed and careful reading of the manuscript and for asking the questions that lead to Appendix <ref> and Fig. <ref>.
We thank Hans Günter Ludwig for reading the manuscript, the interesting discussions about the OB method and for suggesting the analysis in Fig. <ref>.
We are thankful to Alexander Shapiro for reading the manuscript and giving his feedback.
We are thankful to Terry Mahoney for the language editing.
This research has made use of NASA's Astrophysics Data System Bibliographic Services.
§ CONTENT OF THE MONOCHROMATIC OPACITY
Each of the transitions taken into account to compute the opacity can be specified by the user. Only explicit ions (in our computations, H, H^+, H^-, He, He^+, He^++, C, C^+, C^++, N, N^+, N^++, O, O^+, O^++, Na, Na^+, Na^++, Mg, Mg^+, Mg^++, Al, Al^+, Al^++, Si, Si^+, Si^++, Ca, Ca^+, Ca^++, Fe, Fe^+ and Fe^++) are considered in the continuum opacities solved in LTE (see sections 2.4 and 2.5 of <cit.> as well as <cit.> for more details). We use the data included in the repository (version 1.2), as described in the following paragraphs. The contributions involving molecules and any of the Collision-Induced Absorptions are only taken into account below a certain temperature, chosen in this work to be 8000 K (i.e. when molecules are considered in the EOS).
The free-free cross-sections for H and He^+ are computed hydrogenically with an approximate free-free Gaunt factor <cit.>. For He, C, N, O, Na, Mg, Al, Si, Ca and Fe the cross-sections are again computed hydrogenically, but with a Gaunt factor of 1. Other free-free contributions are also taken into account:
* H^- from <cit.>.
* H_2^+ from <cit.>.
* H_2^- from <cit.>.
* He^- from the polynomial fit of <cit.> to the data of <cit.>.
The bound-free transitions are mainly computed using hydrogenic cross-sections and data from the Opacity Project
TOPbase [<http://cdsweb.u-strasbg.fr/topbase/topbase.html>] and the Iron Project [<https://cds.unistra.fr/topbase/TheIP.html>]:
* H^-: the cross-section is computed hydrogenically with Gaunt factor of 1 <cit.>.
* H_2^+: the cross-section is computed using the expression from <cit.>.
* H: the cross-sections for the first nine transitions are computed hydrogenically with a Gaunt factor of 1.
* He: the cross-sections for the first 14 transitions are computed using cubic fits from <cit.> to the Opacity Project data, taking into account the multiplicity of every transition (whether these are triplets or singlets).
* He^+: the cross-sections for 14 transitions are computed hydrogenically with exact Gaunt factors.
* C: the cross-sections for 104 transitions are interpolated from TOPbase data.
* C^+: the cross-sections for 40 transitions are interpolated from TOPbase data.
* N: the cross-sections for 89 transitions are interpolated from TOPbase data.
* N^+: the cross-sections for the 51 transitions are interpolated from TOPbase data.
* O: the cross-sections for 54 transitions are interpolated from TOPbase data.
* O^+: the cross-sections for 74 transitions are interpolated from TOPbase data.
* Na: the cross-sections for 32 transitions are interpolated from TOPbase data.
* Na^+: the cross-sections for eight transitions are interpolated from TOPbase data.
* Mg: the cross-sections for 71 transitions are interpolated from TOPbase data.
* Mg^+: the cross-sections for the 31 transitions are interpolated from TOPbase data.
* Al: the cross-sections for 33 transitions are interpolated from TOPbase data.
* Al^+: the cross-sections for 81 transitions are interpolated from TOPbase data.
* Si: the cross-sections for 57 transitions are interpolated from TOPbase data.
* Si^+: the cross-sections for 46 transitions are interpolated from TOPbase data.
* Ca: the cross-sections for 79 transitions are interpolated from TOPbase data.
* Ca^+: the cross-sections for 32 transitions are interpolated from TOPbase data.
* Fe: the cross-sections for 49 transitions are obtained from the Iron Project data.
* Fe^+: the cross-sections for 41 transitions are obtained from the Iron Project data.
Other contributions to the continuum and bands are also included:
* H, He and H_2 Rayleigh scattering from <cit.>.
* Thomson scattering: ϰ = 6.65× 10^-25 n_e/ρ (cm^2 g^-1)
* Collision-Induced Absorption of H_2-H_2 <cit.>, H_2-He <cit.>, H_2-H <cit.> and H-He <cit.>.
* CH, OH continuous absorption from <cit.>.
Bound-bound transitions are computed using data from different line lists, from which the code selects the lines that may actually contribute (e.g. with a threshold in the line-to-continuum opacity ratio). These are mainly from the Kurucz line lists [<http://kurucz.harvard.edu/linelists.html>], updated with data from the National Institute of Standards and Technology (NIST) when available. Data from EXOMOL [<https://www.exomol.com>] are also used for some of the molecular transitions. In the case of the molecules, a total of around 2 × 10^7 transitions for H_2, CH, NH, OH, NaH, MgH, SiH, CaH, CrH, FeH, C_2, CN, CO, MgO, AlO, SiO, CaO, VO, H_2O and TiO are included (see Fig. <ref>). In the case of the atoms, the line list includes around 2 × 10^6 transitions for:
* Neutral H, As, Se, Rb, Sb, Te, Cs, Pt, Au, Tl, and Bi.
* Neutral and first ion of He, Li, Ga, Ge, Sr, Y, Tc, Pd, Ag, Cd, In, Sn, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Tm, Yb, Lu, Hf, Ta, W, Re, Os, Ir, Hg, Pb, Th, and U.
* Neutral and first 2 ions of Be, Nb, Rh, Zr, and Ru.
* Neutral and first 3 ions of B, C, and Mo.
* Neutral and first 5 ions of N, O, F, Ne, Na, Mg, Al, Si, P, S, Cl, Ar, and K.
* Neutral and first 7 ions of Zn.
* Neutral and first 8 ions of Ca, Sc, Ti, V, Cr, Mn, and Co.
* Neutral and first 9 ions of Fe, Ni, and Cu.
§ DETAILS OF THE COMPARISON OF THE OPACITY DISTRIBUTION FUNCTIONS FROM THE TWO CODES
Figures in this appendix show examples of typical differences between the ODFs from the two codes, both in the opacity and in the Q rates, for the four stellar models (see Sect. <ref>).
§ SATURATION OF THE OPACITY BINNING METHOD AND COMPARISON AGAINST GREY OPACITY
The OB method is not expected to converge to the ODF solution for a large number of τ-bins <cit.>.
To explain why this is the case, we use the checkerboard distribution of the bins introduced in Fig. <ref>, similar to that used in Sect. <ref>, but now with 4 bins in τ and 6 in λ.
The number of non-empty τ-bins changes
depending on the λ-bin. For example, in the case of the F3V star there are 4 τ-bins for λ < 300 nm, while there are 2 τ-bins for λ∈ [1000,2000] nm (Fig. <ref>).
For each star, in Fig. <ref> we compare the binned opacity of pairs of wavelength ranges selected to better illustrate this behaviour. We compare λ-ranges that have the same number and location of non-empty τ-bins.
For each star and any of the τ-bins, the opacity from one of the two λ-ranges varies with height in a different way than the opacity from the other.
Even for an arbitrarily large number of bins, binning only in τ mixes wavelength regions whose opacities have different height profiles (this is also illustrated in figure 4.13 from <cit.>). Although this effect may be reduced, it is present even when splitting in λ, since each contributor to the opacity may have its own variation with height.
Figure <ref> shows how the opacity computed with the OB method converges to the Rosseland opacity of each τ-bin, ϰ_l=ϰ^Ro_l (dotted black lines), in the limit of large enough optical depths. We show only the case of the F3V star and the range λ<300 nm to avoid overcrowding the plot. However, the conclusions are valid for any λ-range and any star. Another example of this convergence can be found in figure 4.12 of <cit.> for the case of the Sun.
Figure <ref> also shows the Rosseland opacity mean for the whole spectrum as a black dashed line. The harmonic mean used to compute this averaged opacity,
ϰ_Ro = ∫ (∂B_λ/∂T) dλ / ( ∫ (∂B_λ/∂T) (1/ϰ_λ) dλ ),
gives more weight to the continuum than to the line cores, so that the Rosseland mean is close to the opacity of the least opaque bins.
Finally, Figure <ref> shows that the combination of the opacity bins in the Rosseland fashion,
∑_l (∂B/∂T)|_l / ( ∑_l (∂B/∂T)|_l (1/ϰ_l) ),
yields the correct value of the Rosseland mean for deep enough optical depths.
For these optical depths (see Fig. <ref> and Eq. <ref>)
ϰ_l ≈ ϰ^Ro_l = (∂B/∂T)|_l / ( ∑_i(l) Δλ_i (∂B_i/∂T) ∑_j(i,l) ω_j/ϰ_i,j ),
thus, the combination of the bins following Eq. <ref> gives
∑_l (∂B/∂T)|_l / ( ∑_l (∂B/∂T)|_l (1/ϰ_l) ) = ∑_l ∑_i(l) Δλ_i (∂B_i/∂T) / ( ∑_l ∑_i(l) Δλ_i (∂B_i/∂T) ∑_j(i,l) ω_j/ϰ_i,j ).
This equation is similar to Eq. <ref>, but with the ODF formalism used to compute the integrals and with the additional sums over the bins. It can be written in a more general way as
∑_l (∂B/∂T)|_l / ( ∑_l (∂B/∂T)|_l (1/ϰ_l) ) ≡ ∑_l ∫_λ(l) (∂B_λ/∂T) dλ / ( ∑_l ∫_λ(l) (∂B_λ/∂T) (1/ϰ_λ(l)) dλ ),
where λ(l) denotes the wavelengths that enter the lth bin. Binning in τ, in λ, or in both is therefore always equivalent to a grouping purely in λ, since each opacity point corresponds to a different wavelength[In the case of the ODF, the substeps lose their wavelength dependence, but the ODF formalism takes care of this.]. This property, together with interpreting the integrals of the opacity in terms of Riemann or Lebesgue integration, explains the result shown in Fig. <ref>.
Similarly, the Planck mean should be recovered for higher layers.
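The exactness of this combination rule for the discretised Rosseland mean is easy to verify numerically; the following Python sketch uses random synthetic opacities and weights (placeholders for ϰ_λ and the ∂B/∂T Δλ factors) and an arbitrary assignment of points to four bins:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic monochromatic opacities and Planck-derivative weights (placeholders).
kappa = rng.lognormal(mean=0.0, sigma=2.0, size=1000)
weight = rng.random(1000)

# Full-spectrum Rosseland mean.
kappa_ro_full = weight.sum() / (weight / kappa).sum()

# Split the points into arbitrary bins, compute per-bin Rosseland means,
# then recombine them with the per-bin weights W_l.
bins = rng.integers(0, 4, size=1000)
W = np.array([weight[bins == l].sum() for l in range(4)])
kappa_l = np.array([W[l] / (weight[bins == l] / kappa[bins == l]).sum() for l in range(4)])
kappa_ro_combined = W.sum() / (W / kappa_l).sum()

print(np.isclose(kappa_ro_full, kappa_ro_combined))   # True: the identity is exact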
|
http://arxiv.org/abs/2306.03056v1
|
20230605172831
|
Magnetic field-induced partially polarized chiral spin liquid in a transition-metal dichalcogenide Moiré system
|
[
"Yixuan Huang",
"D. N. Sheng",
"Jian-Xin Zhu"
] |
cond-mat.str-el
|
[
"cond-mat.str-el",
"cond-mat.mtrl-sci"
] |
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
Department of Physics and Astronomy, California State University, Northridge, California 91330, USA
[email protected]
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
As one of the most intriguing states of matter, the chiral spin liquid (CSL) has attracted much scientific interest. On one hand, its existence and mechanism in crystalline strongly correlated systems remain hotly debated.
On the other hand, strong correlation driven emergent phenomena can be realized in twisted transition-metal dichalcogenide bilayers with a tremendously tunable large length scale providing a new platform for the emergence of
CSLs. We focus on a strongly correlated model relevant to the heterobilayer _2/_2 and investigate the Mott insulating phase at half filling under an out-of-plane magnetic field. Considering both its orbital and spin Zeeman effects, we identify three conventionally ordered phases: a 120^∘ Néel phase, a stripe phase, and an up-up-down phase. For intermediate fields an emergent quantum spin liquid phase with partial spin polarization is identified. We further characterize the topological nature of the quantum spin liquid as the ν = 1/2 Laughlin chiral spin liquid through the topological entanglement spectrum and
quantized spin pumping under spin flux insertion. In addition, we map out the quantum phase diagram for different twisted angles in an experimentally accessible parameter regime.
Magnetic field-induced partially polarized chiral spin liquid in a transition-metal dichalcogenide Moiré system
Jian-Xin Zhu
July 31, 2023
===============================================================================================================
Introduction—
Quantum spin liquids (QSLs) <cit.> are exotic states of matter that feature non-trivial topology <cit.> and fractionalized excitations <cit.>. The novel properties of QSLs have attracted numerous studies over several decades; they provide potential routes to realize unconventional superconductivity through doping a QSL <cit.> and enable topological quantum computation <cit.>. Furthermore, it has been proposed that the behaviors of many more exotic correlated electronic systems, such as the holon Wigner crystal and the strange metal, can also be understood from the perspective of doping QSLs <cit.>. However, candidate materials for QSLs are very rare, partially because of the lack of definitive evidence to detect QSLs in experiments.
As a promising example, the chiral spin liquid (CSL) carries a quantized chiral edge current that realizes the bosonic version of the fractional quantum Hall effect (FQHE). As a result, the CSL state may be identified through the quantized thermal Hall conductivity <cit.>. Upon doping, the CSL can lead to topological superconductivity <cit.> that supports Majorana edge excitations, responsible for zero-bias peaks at the edge in experiments <cit.>. Recently, such novel states have been found numerically on the triangular lattice in realistic models like the Hubbard model <cit.> and the extended Heisenberg model <cit.> supplemented by three-spin chiral interactions <cit.> or four-spin ring exchanges <cit.>.
However, the higher-order terms that involve more spins are generally small in perturbation theory and cannot be tuned without considering the interplay among the different effective spin interactions. Furthermore, the range of relative strengths between nearest-neighbor and further-neighbor interactions is not fully accessible for most candidate materials that show spin-liquid-like behaviors <cit.>.
Thus, the numerical identification of a CSL based on real material parameters remains an open question.
Recently, it has been proposed that twisted transition-metal dichalcogenide (TMD) bilayers can simulate Hubbard-model physics with interaction strengths tunable by the twist angle <cit.>. In particular, the _2/_2 heterobilayer has a single flat band that can be isolated with hole doping, leading to an effective Hubbard model with competing interactions on the triangular moiré superlattice <cit.>. Even more appealingly, thanks to the significantly larger unit cell, an experimentally accessible magnetic field can generate a large magnetic flux per unit cell and induce strong effective chiral interactions that are impossible in conventional solid-state crystalline systems <cit.>.
Motivated by the possibility of field-induced chiral spin liquids in experimentally tunable TMD moiré systems, we study the emergent phases of the half-filled Hubbard model in the presence of an applied out-of-plane magnetic field, based on the parameters of the _2/_2 moiré bilayer. Through large-scale Density Matrix Renormalization Group (DMRG) simulations <cit.>, and considering both the orbital and spin Zeeman effects of the magnetic field, we identify a CSL with partial spin polarization <cit.> induced by the interplay between the effective three-spin chiral term and the Zeeman term at intermediate magnetic fields. In particular, the topological nature of the CSL is identified as the ν = 1/2 FQHE through spin flux insertion simulations and the topological entanglement spectrum. Applicable to the _2/_2 heterobilayer setting, we map out the global quantum phase diagram using the experimentally tunable parameters, namely the twisted angle and the magnetic field. Besides the CSL, we identify a stripe phase and an up-up-down (UUD) phase with finite magnetization for larger magnetic fields, as well as a 120^∘ Néel phase for smaller fields. Our results show that the interplay between the orbital and spin effects of the applied magnetic field gives rise to a partially polarized CSL (PP-CSL), which is stabilized in an experimentally accessible parameter regime.
Model derivations—
The monolayer TMD has a hexagonal lattice with a relatively large band gap at the 𝐊 and 𝐊' points. If we consider a heterobilayer of two materials with similar lattice constants, like _2/_2, a moiré pattern forms with a large moiré lattice constant a_m that depends on the twist angle <cit.>. The bands of the moiré system have a four-fold degeneracy, including the valley and spin degeneracy. Because of the relatively large spin-orbit coupling in the valence bands, the top two valence bands can be isolated with only the valley degeneracy. The valence band states have a spin-valley locking, meaning that the states at the 𝐊 points are associated with spin ↑ and the states at the 𝐊' points with spin ↓. In the moiré (reduced) Brillouin zone, which is much smaller, the band folding gives an overlap between the two valleys. Thus, the resulting two degenerate bands can be approximated by a single-band Hubbard model with only spin degeneracy on the moiré superlattice. Because the heterobilayers have a relatively large direct band gap, the valence bands of the _2 layer sit inside the gap of the _2 layer, so the states in the top valence bands are localized in only one layer.
As shown in Ref. <cit.>, the Wannier functions of the moiré bands are localized near the triangular superlattice positions. Therefore, the resulting Hamiltonian is a Hubbard model with electron hoppings and Coulomb repulsion on moiré superlattice sites, given as
H_H = ∑_{ij},σ ( -t_ij e^{i (2π e/ħ) 𝐀·(𝐫_i-𝐫_j)} c^†_iσ c_jσ + h.c. )
+ U ∑_i n_i↑ n_i↓ + h_z ∑_i ( c^†_i↑ c_i↑ - c^†_i↓ c_i↓ ) ,
where we consider the nearest-neighbor (NN) and next-nearest-neighbor (NNN) hoppings t_1 and t_2, and the on-site Coulomb repulsion U. An applied out-of-plane magnetic field acts both on the orbital motion, through the vector potential ∇×𝐀 = 𝐁, which adds a phase factor to the electron hoppings, and on the spin, through the Zeeman effect with h_z = (1/2)μ_B g_s B, where μ_B is the Bohr magneton and the spin g-factor g_s ≈ 2.
At half filling of the moiré bands the Hubbard U is much larger than t_1 <cit.>; thus the charge is localized in the Mott insulating state with one electron per moiré unit cell. The effective spin model can be derived from the perturbation expansion in t/U <cit.>. In the presence of an out-of-plane magnetic field B, the phase factor on the electron hoppings results in an effective three-spin chiral interaction <cit.>. The overall spin Hamiltonian becomes
H_S = J_1 ∑_⟨ij⟩ 𝐒_i·𝐒_j + J_2 ∑_⟨⟨ij⟩⟩ 𝐒_i·𝐒_j + J_χ ∑_{ijk}∈Δ 𝐒_i·(𝐒_j×𝐒_k) + 2h_z ∑_i S^z_i ,
Here ⟨ij⟩ and ⟨⟨ij⟩⟩ refer to NN and NNN site pairs, and {ijk} in the summation ∑_Δ refers to the three neighboring sites of every unit triangle taken clockwise [see Fig. <ref> (a)]. The strengths of the effective spin couplings are J_1=4t_1^2/U-28t_1^4/U^3, J_2=4t_2^2/U+4t_1^4/U^3, and J_χ=24t_1^3(eΦ_B/ħ)/U^2, where Φ_B = 2π B (√(3)/4) a_m^2 is the magnetic flux through each unit triangle, with a_m the lattice constant of the triangular moiré superlattice. Other chiral interactions originating from ring geometries beyond the minimal triangle can also slightly enhance the chiral effect; they are not included here for simplicity. Remarkably, the enlarged length scale of the moiré lattice unit cell significantly amplifies the orbital effect of the magnetic field, which in turn enhances the chiral spin interactions over those in a regular solid-state crystalline system by orders of magnitude.
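The mapping from the Hubbard parameters to the effective couplings can be written compactly as in the following Python sketch (the numerical values in the example call are placeholders, not the actual t_1, t_2, U of the material, whose a_m dependence is taken from the cited reference):

import numpy as np

E_CHARGE = 1.602176634e-19   # elementary charge in C
HBAR = 1.054571817e-34       # reduced Planck constant in J s

def effective_couplings(t1, t2, U, B, a_m):
    """Effective couplings J1, J2, Jchi of the spin Hamiltonian.

    t1, t2, U are in the same energy units (e.g. meV), B in Tesla and a_m in
    metres; e*Phi_B/hbar is dimensionless, so the couplings come out in the
    units of t1 and U.  The expression for Phi_B follows the text verbatim.
    """
    phi_B = 2.0 * np.pi * B * np.sqrt(3.0) / 4.0 * a_m**2
    J1 = 4.0 * t1**2 / U - 28.0 * t1**4 / U**3
    J2 = 4.0 * t2**2 / U + 4.0 * t1**4 / U**3
    Jchi = 24.0 * t1**3 / U**2 * (E_CHARGE * phi_B / HBAR)
    return J1, J2, Jchi

# Placeholder parameter values, for illustration only.
print(effective_couplings(t1=1.0, t2=0.1, U=20.0, B=10.0, a_m=6.0e-9))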
Numerical methods—
Next, we study the ground-state properties of these localized electrons. We adopt the dependence of t_1, t_2, and U on the moiré lattice constant a_m from Ref. <cit.>. As such, the coupling parameters in Eq. (<ref>) are ultimately determined by a_m and B.
We explore different phases by tuning a_m between 5.5 nm and 7.5 nm, and B from 0 up to 17 Tesla.
The ground state properties of the model are obtained by using both finite and infinite DMRG methods <cit.> with U(1) spin symmetry <cit.>. The finite-width cylinder geometry with circumference of 6 lattice sites is mainly used in the calculations. In the flux insertion simulation, a flux θ _F is adiabatically inserted through the cylinder, as illustrated in Fig. <ref> (b). The spin pumping response to the flux insertion and the topological entanglement spectrum are used to determine the topological nature of the state.
The system has open boundary in the e_a- or x-direction and periodic boundary conditions in the e_b- or y-direction [see Fig. <ref> (a)], with the number of sites denoted as L_x and L_y, respectively. The total number of sites is N=L_x× L_y.
For the ground state searching we use finite DMRG with relatively large system sizes up to N=360 to access more spin sectors and obtain the B dependence of the magnetization. The flux insertion simulations are carried out with infinite DMRG on a smaller unit cell of 144 sites, because the flux insertion does not change the magnetization and the spin pumping is more robust with infinite DMRG algorithm.
We focus on the results on L_y = 6 systems, which are supported by the results on cylinders with L_y=8. For finite DMRG we keep the same N on different L_y for a direct comparison of spin sectors, while the results show almost no finite size effect in the x-direction.
We keep up to bond dimension M=6000 to obtain accurate results with the numerical truncation error ϵ≲ 1×10^-6; see more details in the Supplemental Materials (SM) <cit.>.
Phase diagram—The ground-state phases of the J_1-J_2-J_χ model have been explored previously without the Zeeman effect, where a CSL with zero magnetization was found.
Our numerical results are consistent with previous work in the range of parameters studied in Ref. <cit.> with h_z=0; see more details in the SM <cit.>.
By scanning over the parameter space of Eq. (<ref>) we establish a global quantum phase diagram spanned by the twisted angle and magnetic field, as shown in Fig. <ref> (c). We identify three magnetic phases: a 120^∘ Néel phase, an UUD phase, and a stripe phase.
In the limit of B=0 Tesla the 120^∘ Néel order dominates the whole range of a_m, with the nearest-neighbor J_1 being the leading coupling, and it smoothly extends into the nonzero-B regime. At larger B, we find a stripe phase at small a_m and an UUD phase at larger a_m before the states become fully polarized. Both phases exhibit a finite spin polarization in the z-direction, which increases as B increases. In the intermediate-B regime at small a_m, a CSL with partial polarization emerges. The topological nature of the CSL is identified as the ν = 1/2 SU(2)_1 Laughlin state through the flux insertion simulations and the entanglement spectrum, as explained in later sections. The CSL extends to smaller a_m, but we are mainly interested in a_m>5.5 nm, which corresponds to twisted angles θ_m≲ 3.5^∘ where the moiré band fillings are fully tunable by electrical gating <cit.>. The partially polarized stripe and UUD phases also extend beyond B=17 Tesla in our model. However, at such fields the energy scale of the Zeeman interaction may interfere with other moiré bands, which goes beyond the scope of the present work.
This is especially true considering the larger g-factor for excitons <cit.>.
Magnetic orders— The 120^∘ Néel phase can be characterized by the static spin structure factor, defined as S(𝐤) = 1/N_m∑_i,j (⟨S_i·S_j⟩ - ⟨S_i⟩⟨S_j⟩ ) e^{i𝐤· (𝐫_i-𝐫_j)}, where we sum over the N_m = L_y× L_y middle sites to minimize boundary effects. As shown in Fig. <ref>(a), S(𝐤) has prominent peaks at the 𝐊 points, indicating 120^∘ spin correlations. Tuning a finite B into the PP-CSL phase, the peaks disperse along the edge of the Brillouin zone with similar intensities, and there is no distinctive peak [Fig. <ref>(b)]. Further increasing B into the stripe phase, moderate peaks become concentrated at the 𝐌 points [Fig. <ref>(c)], indicating stripe correlations.
For larger a_m, in the UUD phase there are mild peaks at the 𝐊 points, as shown in Fig. <ref>(d). The magnitude of the peaks decreases with increasing B, consistent with a classical state aligning in the z-direction. To characterize the UUD order, where the spins of the three-sublattice structure form ↑↑↓, we define the order parameter M_UUD=1/N∑_i | ⟨ S^z_i⟩ - 1/N∑_i⟨ S^z_i⟩ | <cit.>. As shown in Fig. <ref> (a), M_UUD has a sudden increase around B=9 Tesla at a_m=7.2 nm, indicating a first-order phase transition to the UUD phase.
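Both diagnostics are simple post-processing of the measured correlators; a schematic Python implementation, assuming arrays of site positions, ⟨S_i·S_j⟩, ⟨S_i⟩ and ⟨S^z_i⟩ have already been extracted from DMRG, could look as follows:

import numpy as np

def structure_factor(k, positions, ss_corr, s_exp):
    """Static spin structure factor S(k) over N_m sites.

    positions: (N_m, 2) site coordinates r_i,
    ss_corr:   (N_m, N_m) measured <S_i . S_j>,
    s_exp:     (N_m, 3) measured <S_i>.
    """
    n_m = len(positions)
    connected = ss_corr - s_exp @ s_exp.T                # <S_i.S_j> - <S_i>.<S_j>
    dr = positions[:, None, :] - positions[None, :, :]   # r_i - r_j
    phase = np.exp(1j * dr @ np.asarray(k))
    return (connected * phase).sum().real / n_m          # imaginary part cancels

def m_uud(sz_exp):
    """UUD order parameter M_UUD from the local <S^z_i> values."""
    sz_exp = np.asarray(sz_exp, dtype=float)
    return np.abs(sz_exp - sz_exp.mean()).mean()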
The inset of Fig. <ref> (a) shows the ground-state energy as well as the entanglement entropy. The entanglement entropy exhibits a kink near the phase boundary, which is consistent with a first-order phase transition. Meanwhile, we find smooth changes in the energy and magnetization near the phase boundary; see the SM <cit.>. The 120^∘ Néel order and the UUD order share the same symmetry; therefore, we cannot rule out the possibility of a phase crossover.
In Fig. <ref> (b), we show the structure factor peak at the 𝐊 points for various B. Fixing a_m=5.6 nm, S(𝐊) has a sudden drop at B=5 Tesla, which indicates the phase transition to the CSL. The phase transition can also be seen from the peak in the first-order derivative of S(𝐊), as shown in the inset of Fig. <ref> (b).
Chiral spin liquids—
The entanglement spectrum extracted from the ground state provides an unbiased way to identify topological states through the one-to-one correspondence to the gapless edge excitation spectrum <cit.>.
In the PP-CSL regime at a_m=5.6 nm, B=15 Tesla, we find that the entanglement spectrum of the ground state shows a quasidegenerate pattern with decreasing momentum on both L_y=6 and 8, as shown in Fig. <ref> (a) and (b), respectively. The quasidegenerate eigenvalues form a pattern of {1, 1, 2, 3, ...} in the lowest two S_z sectors, which agrees with the tower of states of the spinon sector <cit.> predicted by the SU(2)_1 Wess-Zumino-Witten theory of the ν =1/2 Laughlin state. Higher degeneracy levels are not found because of the limited number of discrete momenta in the finite-size lattices. For the CSL, the spinon sector is found as the ground state in the presence of a magnetic field, while the vacuum sector has a higher energy.
Besides the entanglement spectrum, the spin flux insertion simulation can also identify the topological nature of the CSL <cit.>.
The spin flux θ_F is adiabatically inserted through the cylinder as illustrated in Fig. <ref> (b), which adds a phase factor S_i^+S_j^-→ e^iθ _F/L_y(r^y_j-r^y_i)S_i^+S_j^- for the spin flip terms in y-direction. A 2π flux quantum leads to a quantized spin pumping that corresponds to the topological response of the ground state.
We measure the pumped spin using the reduced density matrix as Q_s = ∑ _αλ _αS^z_α where λ _α is the eigenvalue and S^z_α is the corresponding S^z of the α eigenstate. Thus, the accumulated spin for different θ_F can be obtained as Δ Q_s(θ _F)=Q_s (θ _F) - Q_s (0). As shown in Fig. <ref>, the net spin pumping of Δ Q_s(θ _F = 2π) ≈ 0.5 and Δ Q_s(θ _F = 4π) ≈ 1
which corresponds to a spin Chern number C=0.5 <cit.>, again suggesting the Laughlin-type CSL. This is similar to the 1/3 fractional Chern insulator in the Haldane honeycomb lattice model <cit.>. On the contrary, in the stripe and 120^∘ Néel phases the spin pumping is not quantized <cit.>; it becomes smaller but remains finite due to a finite non-coplanar chiral order. In addition, the spin pumping in the classical UUD phase is almost 0. Thus, the phase boundaries between the PP-CSL and the stripe phase can be determined by the flux insertion as well as by the topological entanglement spectrum.
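In practice this is a post-processing step on the entanglement (Schmidt) spectrum; a schematic Python sketch, assuming the Schmidt weights and their S^z labels are available for each value of the inserted flux, is:

import numpy as np

def pumped_spin(schmidt_weights, sz_values):
    """Q_s = sum_alpha lambda_alpha S^z_alpha from the reduced density matrix."""
    w = np.asarray(schmidt_weights, dtype=float)
    w = w / w.sum()                       # renormalise in case of truncation
    return float(np.dot(w, sz_values))

def spin_pumping_curve(spectra_vs_flux):
    """Delta Q_s(theta_F) = Q_s(theta_F) - Q_s(0), given a list of
    (schmidt_weights, sz_values) pairs ordered by increasing flux theta_F."""
    q = [pumped_spin(w, s) for w, s in spectra_vs_flux]
    return [qi - q[0] for qi in q]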
The spin gap of the CSL, defined as the spin-1 gap Δ_s = E_0(S^total_z=1) - E_0(S^total_z=0), can be used to determine the partial polarization of the state in the thermodynamic limit. We find that the spin gap in the bulk is almost the same for different L_x, showing little finite-size effect. In the thermodynamic limit, if the Zeeman energy of flipping one spin is larger than the finite spin gap, the CSL is partially polarized and becomes gapless. We always find the bulk spin gap to be smaller than this Zeeman energy in the CSL regime, and thus the CSL is partially polarized in the whole regime. For example, at a_m=5.6 nm, B=15 Tesla, a finite stripe-like magnetization in the z-direction appears, with an average value of 0.027 per site. Such a stripe has also been found in honeycomb <cit.> and square lattice models <cit.>, coexisting with the CSL.
Summary and discussion— We have explored the magnetic-field-induced phases in the TMD moiré heterobilayer _2/_2 by studying a single-band Hubbard model that can be realized for twisted angles θ_m≲ 3.5^∘ <cit.>. We have derived an effective spin model at half filling of the moiré bands in a finite out-of-plane magnetic field. Through extensive DMRG simulations on a quasi-1D cylinder, we have mapped out the global quantum phase diagram spanned by the twisted angle and the magnetic field. In particular, we have identified a CSL with partial spin polarization for intermediate magnetic fields and determined its nature as a Laughlin-type Abelian CSL. Furthermore, the partially polarized CSL is surrounded by three magnetic phases: a 120^∘ Néel phase, an UUD phase, and a stripe phase.
The CSL emerges from the competing orbital and spin effect of the magnetic field B, which produces effective chiral and Zeeman interactions simultaneously. The moiré superlattices with tremendously large unit cell reduce the required B to generate strong enough chiral interactions for stabilizing the CSL. The transition from CSL to the stripe phase with further increased B can also be qualitatively compared to experiments of field-induced magnetic transitions of spin liquid-like phases <cit.>.
For small to intermediate onsite Hubbard repulsion U, the electron degrees of freedom cannot be neglected. Future studies may explore the field-induced phases directly on the Hubbard model, where the CSL may be stabilized for a small range of intermediate U even without B <cit.>. Besides half filling, other commensurate fillings of the moiré bands also provide a platform to explore CSL phase, where Wigner crystals are formed with charges localized in triangular and other types of superlattices <cit.>. Finally, we note that our study might also be relevant to a triangular-based spin-1/2 metal-organic framework, where the lattice constant can still be much larger than that of typical crystalline solids.
We thank Dr. Fengcheng Wu for helpful discussions.
This work was carried out under the auspices of the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) under Contract No. 89233218CNA000001. It was supported by Center for Integrated Nanotechnologies (Y.H.), a DOE BES user facility, in partnership with the LANL Institutional Computing Program for computational resources, and Quantum Science Center (J.-X.Z.), a U.S. DOE Office of Science Quantum Information Science Research Center. D.N.S. was supported by U.S. DOE BES under Grant No. DE-FG02-06ER46305.
Supplemental Material for “Magnetic field-induced partially polarized chiral spin liquid in a transition-metal dichalcogenide Moiré system”
In the Supplemental Material, we provide more numerical results to support the conclusions we have discussed in the main text.
In Sec. <ref>, we provide examples of the finite bond dimension scaling of the ground-state energy as well as of the entanglement entropy in the partially polarized chiral spin liquid (PP-CSL) phase and show the numerical convergence of the DMRG results.
In Sec. <ref>, we show the magnetization of different phases in real space.
In Sec. <ref>, we show the phase diagram without considering the Zeeman interactions and compare it with our main results.
In Sec. <ref>, we show the magnetization evolution with increasing B from the 120^∘ Néel phase to UUD phase.
§ DMRG CONVERGENCE
The convergence of DMRG calculation can be examined by the finite bond dimension (M) extrapolation of the ground state energy. We show the obtained ground-state energy per site E_0 versus the inverse DMRG bond dimension (1/M) for L_y=6 from both finite DMRG and infinite DMRG results. For finite DMRG, in order to minimize the boundary effect, we extract the energy using the middle 3/4 (which is L_x/8 < x ≤ 7L_x/8) of the lattices and calculate the entanglement entropy using the left and right subsystems divided at L_x/2.
We keep the bond dimensions up to M = 6000. In Fig. <ref> (a) and (b), we show the energies E_0^Mid and entanglement entropy at a_m=5.6 nm, B=15 Tesla on L_y=6 with finite and infinite DMRG results, as an example in the PP-CSL phase. The E_0^Mid and entropy are extrapolated by the second-order polynomials C(1/M ) = C(0) + a/M + b/M^2, where C(0) is the extrapolated result in the infinite M limit. The energies converge smoothly with bond dimension and the extrapolated energies are very close to the lowest energies we obtain, indicating the good convergence of the results.
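This extrapolation is a standard quadratic fit in 1/M; a minimal Python sketch (the energies in the example are made-up numbers for illustration only) is:

import numpy as np

def extrapolate_vs_bond_dimension(bond_dims, values):
    """Fit C(1/M) = C(0) + a/M + b/M**2 and return the M -> infinity estimate C(0)."""
    x = 1.0 / np.asarray(bond_dims, dtype=float)
    coeffs = np.polyfit(x, values, deg=2)   # highest power first: [b, a, C(0)]
    return coeffs[-1]

# Illustrative, made-up energies per site for increasing bond dimension.
M = [1200, 2000, 3000, 4000, 6000]
E = [-0.5120, -0.5131, -0.5136, -0.5138, -0.5140]
print(extrapolate_vs_bond_dimension(M, E))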
The spin flux insertion results in the PP-CSL show slight differences for different M. As shown in Fig. <ref>, the spin pumping curves at the smaller M=1200 have large variations due to a finite magnetization, and the pumping is less than 1 at θ_F=4π. As M increases, the pumping becomes more uniform and the pumped spin approaches the quantized value of 1.
§ MAGNETIZATION IN REAL SPACE FOR DIFFERENT PHASES
We show the real-space magnetization obtained with infinite DMRG for the stripe phase, the UUD phase, and the 120^∘ Néel phase in Fig. <ref> (a), (b), and (c), respectively. In the stripe phase there is a small, stripe-like spin polarization in the z-direction, as shown in Fig. <ref>(a), consistent with the stripe spin-spin correlations given in the main text. In the UUD phase the z-component of the spin dominates and the spins of the three-sublattice structure form ↑↑↓, indicating a classical spin alignment, as shown in Fig. <ref>(b). For the 120^∘ Néel phase at small but finite B we also find a very small spin polarization, as shown in Fig. <ref>(c). The phase is consistent with the 120^∘ Néel order at B=0 Tesla, where the spin-spin correlations lie mainly in the xy plane.
§ PHASE DIAGRAM WITHOUT ZEEMAN INTERACTIONS
To connect with previous numerical studies of chiral spin liquids on the triangular lattice with finite chiral interactions, we obtain the phase diagram without the Zeeman term (h_z=0) based on the L_y=6 system. The resulting J_1-J_2-J_χ model has spin SU(2) symmetry and ∑_i S_i^z=0 in the ground state, with no spin polarization. As shown in Fig. <ref>, the 120^∘ Néel phase is stabilized at small B and the CSL at large B. For even larger B, a tetrahedral order is expected as the chiral interactions become dominant, leading to a magnetically ordered state <cit.>. Noticing that the CSL phase occupies a larger region than in the phase diagram with the Zeeman term, we believe that the spin Zeeman effect of the magnetic field favors the magnetically ordered states while the orbital effect of the magnetic field promotes the CSL.
§ MAGNETIZATION FROM THE 120^∘ NÉEL PHASE TO THE UUD PHASE
The magnetization is obtained by using the middle half of the lattice to minimize the boundary effect. In order to access more total spin sectors, we use the N=60× 6 lattice with finite DMRG methods. As shown in Fig. <ref>, the magnetization is 0 in the limit of B=0 Tesla and it smoothly increases as B increases from the 120^∘ Néel phase to the UUD phase.
|
http://arxiv.org/abs/2306.01948v1
|
20230602230728
|
Excitations of quantum Ising chain CoNb2O6 in low transverse field: quantitative description of bound states stabilized by off-diagonal exchange and applied field
|
[
"Leonie Woodland",
"Izabella Lovas",
"M. Telling",
"D. Prabhakaran",
"Leon Balents",
"Radu Coldea"
] |
cond-mat.str-el
|
[
"cond-mat.str-el"
] |
We present experimental and theoretical evidence of novel bound state formation
in the low transverse field ordered phase of the quasi-one-dimensional
Ising-like material CoNb_2O_6. High resolution single crystal inelastic
neutron scattering measurements observe that small transverse fields lead to a
breakup of the spectrum into three parts, each evolving very differently upon
increasing field. We show that this can be naturally understood starting from
the excitations of the ordered phase of the transverse field Ising model, domain
wall quasiparticles (solitons). The transverse field and a staggered
off-diagonal exchange create one-soliton hopping terms with opposite signs. This
leads to a rich spectrum and a special field, when the strengths of the
off-diagonal exchange and transverse field match, at which solitons become
localized; the highest field investigated is very close to this special regime.
We solve this case analytically and find three two-soliton continua, along with
three novel bound states. We also present calculations using exact
diagonalization of a recently refined Hamiltonian model for CoNb_2O_6 and
using diagonalization of the two-soliton subspace, both of which provide a
quantitative agreement with the observed spectrum. The theoretical two-soliton
model qualitatively and quantitatively captures a variety of non-trivial
features in the observed spectrum, providing insight into the underlying physics
of bound state formation.
These authors contributed equally to this work
Clarendon Laboratory, University of Oxford Physics Department,
Parks Road, Oxford, OX1 3PU, UK
These authors contributed equally to this work
Kavli Institute for Theoretical Physics, University of California,
Santa Barbara, 93106, California, USA
ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot OX11
0QX, UK
Clarendon Laboratory, University of Oxford Physics Department,
Parks Road, Oxford, OX1 3PU, UK
Kavli Institute for Theoretical Physics, University of California,
Santa Barbara, 93106, California, USA
Canadian Institute for Advanced Research, Toronto, Ontario, Canada
Clarendon Laboratory, University of Oxford Physics Department,
Parks Road, Oxford, OX1 3PU, UK
Excitations of quantum Ising chain CoNb_2O_6 in low transverse field:
quantitative description of bound states stabilized by off-diagonal exchange and
applied field
Radu Coldea
June 2, 2023
======================================================================================================================================================================
§ INTRODUCTION
The transverse field Ising chain (TFIC) is an important model in condensed
matter physics because it displays the key paradigms of both a continuous
quantum phase transition from an ordered phase to a quantum paramagnetic phase
as a function of field, as well as, in the ordered phase, of fractionalization
of local spin flips into pairs of domain wall quasiparticles (solitons)
<cit.>. The pure TFIC model can be mapped to non-interacting
fermions <cit.>, which in the ordered phase represent these
solitons. However, a variety of different additional subleading terms in the
spin Hamiltonian, such as a longitudinal field <cit.> or an XY
exchange <cit.> can stabilize two-soliton bound states. Here we
explore a regime where novel bound states can be stabilized by the interplay of
applied transverse field and off-diagonal exchange.
The material CoNb2O6 has been seen as a realization of TFIC physics for
over a decade
<cit.>. Among the
key experimental observations is the qualitative change in the nature of
quasiparticles from domain walls in the ordered phase to coherently propagating
spin flips in the high-field paramagnetic phase. Moreover, a fine structure of
bound states was observed just below the critical transverse field, consistent
with predictions for a universal E8 spectrum expected in the presence of a
perturbing longitudinal field, which in this case arises from mean-field effects
of the three dimensional magnetic order <cit.>. The
crystal structure is orthorhombic (space group Pbcn), with Co^2+ ions
with effective spin-1/2 arranged in zigzag chains running along the
c-axis, with dominant nearest-neighbour ferromagnetic Ising exchange (see
Fig. 1A). At the lowest temperatures, small three-dimensional interactions
between chains stabilize a ground state with ferromagnetic ordering along the
zigzag chains and with an antiferromagnetic pattern between chains
<cit.>.
While the dominant magnetic physics in CoNb2O6 can be captured by a TFIC
Hamiltonian, additional terms in the Hamiltonian beyond the dominant Ising
exchange are needed to explain various features of the spectrum
<cit.>. In particular, a staggered off-diagonal exchange
term was recently proposed on symmetry grounds and shown to reproduce well the
zero-field spectrum using density matrix renormalization group numerics
<cit.>.
In this work, we present high resolution single crystal inelastic neutron
scattering (INS) data as a function of low to intermediate transverse field in
the ordered phase. This regime has also been explored by THz measurements
<cit.>, which probe the zone-centre (𝐐=0) excitations. The
INS data reveal a rich evolution of the magnetic spectrum with increasing field:
the spectrum splits into three parts with each part behaving very differently.
The top two parts are sharp modes, with the top mode becoming progressively
flatter and the middle one progressively more dispersive in field, while the
lowest energy part is a continuum where the intensity moves from bottom to top
upon increasing field.
We seek to understand this rich behaviour in terms of a recently refined
Hamiltonian model for CoNb2O6, which proposed all relevant additional
exchange terms down to 2% of the Ising exchange <cit.>. We find
that the experimental data agree very well with the results obtained using
numerical exact diagonalization (ED) calculations for this full Hamiltonian,
where interchain coupling effects are treated in a mean field approximation. To
obtain a physical understanding of the spectrum, we start from a picture of
soliton quasiparticles, which hop due to both the applied transverse field and
the off-diagonal exchange. The competition between these hopping effects leads
to soliton hopping terms that alternate along the two legs of the zigzag chain,
resulting in two bands with dispersions that are tuned by the applied field. We
note that the relevance of a model with alternating hopping of solitons for
explaining features in the THz spectrum of CoNb2O6 was already mentioned in
<cit.>. A spin-flip neutron scattering process creates two soliton
excitations (Fig. <ref>C). Hard core repulsion and various nearest
neighbour interactions between solitons mean that in order to compute the INS
spectrum it is not sufficient to treat the solitons as non-interacting. We solve
a minimal model in the two-soliton subspace and find three continua and up to
three bound states depending on the values of the Hamiltonian parameters. To
understand the character of these bound states, we first focus on the limit
where solitons are localized due to the hopping term on alternate bonds being
zero, a theoretical situation not previously explored. In this limit, novel
bound states arise due to hard core repulsion. We then perturb away from this
limit in first order perturbation theory, obtaining analytic expressions for the
dispersions and intensities in INS, which give strong qualitative agreement with
the data. The results indicate that this regime is indeed realized in
CoNb2O6 at intermediate transverse field.
The rest of this paper is organized as follows: Sec. <ref> provides
details of the inelastic neutron scattering experiments while Sec.
<ref> introduces the model Hamiltonian and provides a
qualitative overview of the experimental results. In Sec. <ref>, we
solve the model Hamiltonian in first order perturbation theory in the two
soliton subspace. In Sec. <ref>, we provide a physical picture of
the spectrum by starting from the limit where individual solitons are localized
and perturbing around this limit. Sec. <ref> contains our
conclusions, and the Appendices give further technical details of the
calculations.
§ EXPERIMENTAL DETAILS
Inelastic neutron scattering measurements of the magnetic excitations were
performed on a large single crystal (6.76 g) of CoNb2O6 grown using a
floating-zone technique <cit.> and already used in previous INS experiments
<cit.>. The magnetic field was applied along the crystallographic
b-direction, which is transverse to the local Ising axis of all the spins. The
measurements were performed using the indirect geometry time-of-flight OSIRIS
spectrometer at the ISIS facility. OSIRIS was operated with PG(002) analyzers to
measure the inelastic scattering of neutrons with a fixed final energy of
E_f=1.82 meV as a function of energy transfer and wavevector transfer in the
horizontal (h0l) scattering plane. Throughout this paper, we express the
wavevector transfer in the inelastic neutron scattering experiments as
𝐐=(2π h/a,0,2π l/c) where (h,0,l) are expressed in reciprocal
lattice units of the orthorhombic structural unit cell, with lattice parameters
a=14.1337 Å, b=5.7019 Å and c=5.0382 Å at 2.5 K
<cit.>. The sample was attached to the cold finger of a dilution
refrigerator inside a vertical 7.5 T cryomagnet and measurements were taken at a
temperature of 0.1 K. The average counting time at each field was around
7 hours.
For each field, two sample orientations were measured (c-axis oriented in the
scattering plane at 25^∘ and 60^∘ with respect to the incident
beam direction). Throughout this paper, the data panels presented are a
combination of data from these two orientations, with the wavevector projected
along the chain direction l as the physics considered is one-dimensional. The
two orientations were chosen such that the projected l values covered a large
part of the Brillouin zone along the chain direction. The INS data at one of the
measured fields (2.5 T, in Fig. <ref>Q) was briefly reported in
<cit.>.
The data shown have had an estimate of the non-magnetic background subtracted
off, and have then been divided by the squared isotropic Co^2+ magnetic
form factor f^2(𝐐) and by the neutron polarization factor. The latter
was calculated under the assumption that all inelastic scattering is in the
polarizations perpendicular to the Ising (z) axes and that the dynamical
structure factor satisfies
S^xx(𝐐,ω)=S^yy(𝐐,ω), an approximation which is
found to be valid to a large extent for the model Hamiltonian
(<ref>) in the low transverse field regime. Here,
S^xx(𝐐,ω)=∑_λ_f|⟨λ_f|S^x(𝐐)|
GS⟩|^2δ(E_λ_f-ħω),
where the sum extends over all excited states |λ_f⟩ of energy
E_λ_f relative to the ground state |GS⟩
and where S^x(𝐐)=∑_jexp(i𝐐·𝐫_j)S^x_j, with
j running over all sites. Under the above assumption, the wavevector
dependence of the neutron polarization factor is
𝒫(𝐐)=1+[(2π h/a)^2sin^2γ+(2π l/c)^2cos^2γ]/|𝐐|^2.
Dividing the raw inelastic neutron scattering intensities by
𝒫(𝐐)f^2(𝐐) then gives S^xx(𝐐,ω)
up to an overall scale factor. Eq. (<ref>) is appropriate for the
experimentally observed zero-field magnetic structure of CoNb2O6 and takes
into account the two different chains per crystallographic unit cell with Ising
directions at an angle of ±γ to the c-direction in the ac-plane. We
have taken γ to be 30^∘<cit.>.
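As a minimal sketch of this reduction step, the following code evaluates 𝒫(𝐐) for a wavevector (h,0,l) and divides a placeholder raw intensity by 𝒫(𝐐)f^2(𝐐); the counts and form-factor value are illustrative, not measured data.

```python
import numpy as np

# Lattice parameters at 2.5 K quoted above, and gamma = 30 degrees.
a_lat, c_lat = 14.1337, 5.0382   # Angstrom
gamma = np.radians(30.0)

def polarization_factor(h, l):
    """P(Q) = 1 + [(2*pi*h/a)^2 sin^2(gamma) + (2*pi*l/c)^2 cos^2(gamma)] / |Q|^2
    for Q = (2*pi*h/a, 0, 2*pi*l/c)."""
    qa = 2.0 * np.pi * h / a_lat
    qc = 2.0 * np.pi * l / c_lat
    return 1.0 + (qa**2 * np.sin(gamma)**2 + qc**2 * np.cos(gamma)**2) / (qa**2 + qc**2)

# Example reduction of one (h, 0, l) point with placeholder raw intensity and
# squared form factor; S^xx is then obtained up to an overall scale factor.
h, l = 0.0, 1.0
raw_counts, f_squared = 100.0, 0.9      # placeholders, not measured values
Sxx = raw_counts / (polarization_factor(h, l) * f_squared)
print(round(polarization_factor(h, l), 3), round(Sxx, 2))
```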
§ EVOLUTION OF THE MAGNETIC EXCITATIONS WITH APPLIED
FIELD
In this section, we first introduce the model Hamiltonian and relate this to the
zero field spectrum, introducing the concept of two soliton states. In applied
field, the spectrum splits into three components with different evolution in
field. We show in the following sections that this rich behaviour can be
naturally understood in a picture of solitons with dispersions tuned by the
transverse field.
§.§ Model Hamiltonian
We use the recently refined Hamiltonian model, proposed for a single chain in
CoNb2O6 <cit.>. It is convenient to write this in three parts:
ℋ=ℋ_1 + ℋ_2 + ℋ_3
where
ℋ_1 = J ∑_j [ -S_j^zS_j+1^z - λ_S(S_j^xS_j+1^x+S_j^yS_j+1^y) + (-1)^jλ_yz(S_j^yS_j+1^z+S_j^zS_j+1^y) ] + ∑_j h_yS_j^y ,
ℋ_2 = J ∑_j [ -λ_A(S_j^xS_j+1^x - S_j^yS_j+1^y) + λ_AFS_j^zS_j+2^z + λ_AF^xy(S_j^xS_j+2^x+S_j^yS_j+2^y) ] ,
and
ℋ_3 = J ∑_j 2λ_MF(⟨ S^y⟩ S^y_j - ⟨ S^z⟩ S^z_j),
and where z is the Ising direction (in the crystallographic ac plane,
defined as the direction of the moments in zero applied field), y is parallel
to the crystallographic b-direction and x completes a right-handed
coordinate system.
This Hamiltonian, with parameter values in Table <ref>, is a
refinement of the minimal model proposed in Ref. <cit.> and was
recently deduced using a simultaneous fit to the spectrum in zero field, large
transverse field, and large near-longitudinal field <cit.>. One can
regard ℋ_1 as the minimal Hamiltonian needed to qualitatively
reproduce all key features of the excitation spectrum, while the terms in
ℋ_2 are sub-leading and are added in order to achieve
quantitative agreement. This applies to both the data in
Ref. <cit.> as well as data in low transverse field presented here.
Finally, ℋ_3 captures the effects of the weak interchain
interactions at a mean-field level. The given form with a constant
λ_MF>0 applies throughout the field range explored here (0 to
2.5 T ∥ b) as the magnetic order pattern between chains does not
change in this field range and as, for this magnetic structure, all the
interchain interactions that have a net contribution to the mean field are
Heisenberg-like <cit.>. The change in sign in front of the S_j^y
and S_j^z terms reflects the fact that the S^y-components between
neighbouring chains are parallel (polarized by the applied field h_y), whereas
the S^z components are (spontaneously) aligned antiparallel by the
antiferromagnetic interchain interactions.
The dominant term in the model Hamiltonian is the first term in ℋ_1,
the nearest neighbour ferromagnetic Ising exchange.
The second term is a nearest neighbour ferromagnetic XY exchange term
(λ_S), which causes single spin flips to hop.
The third term is a staggered off-diagonal exchange term (λ_yz) which
causes solitons to hop with a sign that alternates along the legs of the zigzag
chain.
The transverse field term (h_y=g_yμ_BB_y) flips single spins; for a fixed
number of domain walls, this is equivalent to soliton hopping by one site. We
will show that the competition between these two one-soliton hopping terms leads
to a rich field-dependent spectrum.
The spectrum in zero field (Fig. <ref>A data,
<ref>B calculation) can be qualitatively understood in terms of
the minimal Hamiltonian ℋ_1+ℋ_3. A spin-flip neutron
scattering event creates a pair of solitons (domain walls) and for the pure
Ising chain, the energy is independent of the separation between these solitons
[see Fig. <ref>C]. In the presence of the staggered off-diagonal
exchange (λ_yz), the solitons become dispersive <cit.>,
resulting in a continuum of scattering in energy-momentum space, covering a
large energy extent near l=0.
A longitudinal interchain mean field (ℋ_3) acts as an effective
linear potential confining the solitons into a series of bound states, as seen
in Fig. <ref>A near l=0. The sharp mode in the data near l=1
is a two-soliton kinetic bound state stabilized by the XY (λ_S) exchange
<cit.>.
The terms in ℋ_2 do not change the qualitative content of the
spectrum but are important for quantitative agreement with the experimental
data. The first term in ℋ_2 is an antisymmetric diagonal nearest
neighbour exchange (λ_A). The second and third terms are a next nearest
neighbour antiferromagnetic XXZ exchange (λ_AF and
λ_AF^xy respectively). The second term is needed to account
for the energy of the kinetic bound state near l=1 <cit.>, while
the first and third are needed to explain the details of the dispersions seen in
very high field <cit.>. Fig. <ref>B demonstrates
that very good quantitative agreement has been achieved between the experimental
zero-field data and exact diagonalization calculations using the full
Hamiltonian (<ref>).
§.§ Spectrum in small to intermediate transverse field
The key feature of the evolution of the spectrum as a function of field, shown
in the first column of Fig. <ref>, is the break-up of the
observable spectrum into three parts, each evolving very differently upon
increasing field. The top part, which evolves out of the zero-field high-energy
kinetic bound state, is a sharp mode that becomes progressively flatter upon
increasing field and spreads out over the whole Brillouin zone. In contrast, the
middle part is dominated by a sharp mode that becomes progressively more
dispersive upon increasing field and appears to trade intensity with the top
mode. The lowest energy part is dominated by a continuum spread, with intensity
moving from bottom to top upon increasing field. All features and trends in the
INS data are quantitatively captured by exact diagonalization (ED) calculations
using the full Hamiltonian in (<ref>) with the expectation values
⟨ S^y ⟩ and ⟨ S^z ⟩ in the mean-field term H_3 calculated self-consistently; these calculations are shown in the second
column of Fig. <ref>.
The breakup of the spectrum in field is clearly illustrated in
Fig. <ref>I at 1 T. The lowest energy part of the spectrum
centred around 1.4 meV shows a set of excitations extending over a broad
energy range with clear sharp modes visible both at the bottom and the top of
this range. Those excitations are clearly separated from another set of states
centred around 2.25 meV with a clear sharp mode near 2.5 meV. At higher
energies still, there is the vestige of the zero-field kinetic bound state near
l=1, now clearly separated from the rest of the spectrum. All the observed
features, both dispersions and wavevector-dependence of intensities, are well
captured by the ED calculations in Fig. <ref>J.
The 2.5 T data (Fig. 2Q) shows even more contrasting behaviour between the
different parts of the spectrum. The sharp mode at the top of the low energy
continuum now extends all the way from the Brillouin zone center to
l≈0.6 and has gained in intensity compared to the continuum below it. In
the middle energy region the sharp mode has become strongly dispersive, with the
middle continuum losing nearly all its scattering intensity, and the top sharp
mode has become almost entirely flat and spread out over almost all the
Brillouin zone. Again, all these features are well reproduced in
Fig. <ref>R.
The spectra at 0.5 T (Figs. <ref>E and F) and 1.5 T
(Figs. <ref>M and N) interpolate between 0, 1 and 2.5 T and show
the gradual evolution of the spectrum.
The ED calculations quantitatively capture every feature and trend described
above. We stress that the parameters used in this calculation were not
fit to the finite transverse field data presented here, but are fixed to the
values proposed in <cit.>. This excellent agreement between data
and calculation gives further support to the Hamiltonian proposed in
<cit.> and motivates our search for a physical picture of the
excitations. In the following sections, we will introduce a picture of solitons
on the zigzag chains and show that the breaking up of the spectrum and very
different evolution of the different parts in field can be captured
quantitatively and understood phenomenologically in terms of solitons hopping
and bound state formation.
§ TWO SOLITON MODEL
In (<ref>) to (<ref>), all λ terms are ≪1 such that the
dominant term is the ferromagnetic Ising term. This means that it is sufficient
for our purposes to consider two-soliton excitations, and neglect mixing with
four-or-more-soliton excitations, since those occur at much higher energy. More
systematically, we may write the Hamiltonian as
ℋ=ℋ_ Ising+𝒱,
with ℋ_ Ising denoting the Ising Hamiltonian, and 𝒱
containing all other terms in ℋ. We now treat 𝒱 as a
perturbation of order δ, and define a Schrieffer-Wolff transformation,
ℋ^'=e^𝒮ℋe^-𝒮,
with 𝒮^†=-𝒮. We require that
[ℋ^',ℋ_ Ising]=0,
i.e., that ℋ^' conserves the number of solitons (domain walls).
By expanding 𝒮=𝒮_1δ+𝒮_2δ^2+..., and
enforcing (<ref>) up to order δ^n with n∈ℤ^+,
we obtain a systematic perturbative series for the effective Hamiltonian within
subspaces with a fixed number of solitons. Up to lowest order in δ, we
arrive at
ℋ^'≈ℋ_ Ising+𝒱_
conserv=∑_i≥ 0𝒫_iℋ𝒫_i,
with 𝒱_ conserv denoting the soliton number conserving terms in
𝒱, and 𝒫_i standing for the projector to the sector with
exactly i solitons. We have verified that the second order terms,
∼δ^2, are negligible compared to this leading contribution.
Relying on these insights, we now start by considering the effect of the
effective Hamiltonian (<ref>), first on a single soliton, and then
within the two soliton sector. We focus on the minimal Hamiltonian
ℋ_1+ℋ_3, yielding a good qualitative understanding of the
spectrum, and leave the discussion of ℋ_2 to the Appendices. We
further assume that the most important contribution from the mean field
interchain coupling ℋ_3 is a z magnetic field,
ℋ_3≈ -h_z∑_j S_j^z, with h_z=2Jλ_
MF⟨ S^z⟩≈ Jλ_ MF,
assuming ⟨ S^z⟩=1/2.
For convenience, we perform the following analytical calculations in a spin
basis rotated by π/2 around the z axis, obtained via the canonical
transformation S^x_j→ -S^y_j and S^y_j→ S^x_j in
ℋ.
§.§ Action of the Hamiltonian on a single
soliton
In this section we consider the spectrum of deconfined solitons under the
Hamiltonian ℋ_1 projected to the single soliton sector. The
confining mean field ℋ_3 will be introduced in
Sec. <ref>, where we discuss the spectrum in the two
soliton sector.
Let us define the “left” soliton state, a single domain wall at the link
(j-1,j), separating up spins on the left and down spins on the right,
|j⟩_L = |
⋯↑↑↑_j-1↓_j↓↓⋯⟩.
Here, the arrows indicate the eigenstates of S_j^z with eigenvalues ± 1/2.
We now consider the action of the Hamiltonian ℋ_1 on this state,
term by term.
* Ising exchange: The single soliton state |j⟩_L is
an eigenstate, with excitation energy
ϵ_0=J/2 above the ground state.
* XY exchange λ_S: This term flips two adjacent
spins in opposite directions. Acting on |j⟩_L, it creates two
additional domain walls by flipping the spins at sites j-1 and j, and can
therefore be dropped.
* transverse field h_y: This term flips a single spin,
V_y=(h_y/2)∑_j(S^+_j+S^-_j).
To conserve the number of solitons when acting on |j⟩_L, the flipped
spin must be either at j or j-1,
𝒫_1 V_y|j⟩_L = h_y/2( |j+1⟩_L
+|j-1⟩_L ).
Therefore, the transverse field gives rise to a nearest neighbour hopping term
for the domain wall.
* staggered off-diagonal exchange λ_yz: Similarly to the
transverse
field h_y, this term results in a single spin flip,
V_yz = (Jλ_yz/2)∑_j (-1)^j ( S_j^+ + S_j^-) ( S_j+1^z - S_j-1^z).
Here, the operator S_j^+ + S_j^- can flip the spin at site j, if and
only if the spins on sites j+1 and j-1 are opposite, due to the factor
S_j+1^z-S_j-1^z. Therefore, V_yz yields spin flip processes confined
to domain walls. We find
𝒫_1 V_yz|j⟩_L = Jλ_yz/2 (-1)^j+1(|j+1⟩_L - |j-1⟩_L),
a hopping term similar to the effect of the field h_y, but with a different
sign structure across links.
To obtain the spectrum of this hopping Hamiltonian, we write a Schrödinger
equation for the soliton. We define the state
|ψ_L⟩ = ∑_j'ψ_L(j') |j'⟩_L,
where ψ_L(j)=_L⟨ j|ψ⟩ is the wavefunction of
the soliton. Using the expressions derived for 𝒫_1ℋ_1
|j⟩_L above, the Schrödinger equation,
_L⟨ j| H |ψ_L⟩ = ω ψ_L(j),
can be rewritten as
1/2∑_Δ=± 1(h_y + (-1)^j Jλ_yzΔ)
ψ_L(j-Δ) = (ω - J/2)ψ_L(j).
This equation describes a staggered hopping of solitons, with hopping
amplitudes
h_±=(1/2)(h_y± Jλ_yz)
alternating on even/odd bonds.
Since ℋ is invariant under translations by two lattice sites, we can
use the following Bloch ansatz,
ψ_L(2p+σ) = ψ_Lσ e^ikpc, with σ=0,1.
where k=2π l/c is the soliton momentum. In the following, we will
interchangeably use the symbols k and l when referring to momentum along the
chain direction, with the only difference that k is in absolute units whereas
l is in reciprocal lattice units. In the above equation, the coefficients
ψ_Lσ differentiate between the even and odd sublattices, reflecting
the two-site unit cell of the Hamiltonian. From now on, we will reserve the
index p for labelling the unit cells, whereas j will be used as a label of
lattice sites. With this convention, the Schrödinger equation reduces to
(h_+ e^-i kc + h_- )ψ_L1 =
(ω-J/2)ψ_L0,
(h_+ e^i kc + h_- )ψ_L0 =
(ω-J/2)ψ_L1,
yielding a pair of bands, ω_±, with bonding / anti-bonding orbitals
(J/2-ω_±)^2 = (h_+ e^-i kc + h_- )
( h_+ e^ikc +h_-)
= h_+^2+h_-^2 +2 h_+ h_- cos kc.
Importantly, the dispersion vanishes if h_+=0 or h_-=0. In these limits,
the hopping amplitude vanishes either on odd or even bonds, and the domain wall
can only move between two sites, giving rise to flat localized bands. This
localized limit will serve as a convenient starting point for perturbative
considerations in Sec. <ref>, allowing us to obtain a simple
qualitative picture for the evolution of the INS spectrum with magnetic field
h_y.
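A minimal numerical sketch of the two bands ω_± follows, assuming illustrative values of J and λ_yz rather than the fitted CoNb2O6 parameters; it verifies that the single-soliton bandwidth collapses when h_y = Jλ_yz, i.e. when h_- = 0.

```python
import numpy as np

def soliton_bands(k_c, J, lam_yz, h_y):
    """Bonding/antibonding single-soliton bands omega_-/omega_+ of the staggered
    hopping model; k_c = k*c is dimensionless and energies carry the units of J."""
    h_plus = 0.5 * (h_y + J * lam_yz)
    h_minus = 0.5 * (h_y - J * lam_yz)
    root = np.sqrt(h_plus**2 + h_minus**2 + 2.0 * h_plus * h_minus * np.cos(k_c))
    return J / 2 - root, J / 2 + root

# Illustrative couplings (not the fitted values of Table <ref>).
J, lam_yz = 2.0, 0.25
k_c = np.linspace(0.0, 2.0 * np.pi, 201)
for h_y in (0.2, J * lam_yz):            # generic field vs. localized limit h_- = 0
    lower, upper = soliton_bands(k_c, J, lam_yz, h_y)
    print(f"h_y = {h_y:.2f}: bandwidth = {upper.max() - upper.min():.3f}")
```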
Above we considered a single “left” soliton, describing a domain wall with
up spins on the left and down spins on the right. Another type of domain wall
excitation is a “right” soliton, separating a domain of down spins on the left
from up spins on the right,
|j⟩_R = |
⋯↓↓↓_j-1↑_j↑↑⋯⟩.
The arguments described above can be repeated for right solitons, with the only
difference being that the hopping induced by λ_yz is of opposite sign
compared to the case of left solitons. This interchanges the hopping amplitudes
h_+ and h_- in the Schrödinger equations, but leaves the dispersion
(<ref>) unaltered. Therefore, both solitons become localized at the same
critical magnetic field h_y.
§.§ Solution of the Hamiltonian in the two soliton
subspace
We now turn to the spectrum of the effective Hamiltonian within the two soliton
subspace. In Sec. <ref> we obtained two distinct soliton
dispersions, ω_±, describing bonding / anti-bonding orbitals.
Therefore, we expect three continua in the two-soliton subspace, arising from
the pairings (ω_+,ω_+), (ω_+, ω_-) and
(ω_-,ω_-). However, the solitons interact, due both to hard-core
repulsion and to nearest neighbour soliton-soliton interactions encoded in
ℋ_1. Moreover, the full Hamiltonian also includes a confining z
magnetic field, ℋ_3, yielding an attractive interaction between the
soliton pair. Below, we take into account all of these effects by considering
ℋ_1+ℋ_3 projected to the relevant subspace with
𝒫_2.
Assuming h_z>0, the relevant low energy excitations correspond to a single
domain of down spins inserted into a background of up spins. Therefore, it is
convenient to use the following basis, with a “left" soliton on the left and a
“right" soliton on the right,
|j_L, j_R⟩ = |
⋯↑↑_j_L-1↓_j_L⋯↓_j_R-1↑_j_R↑⋯⟩,
with j_L<j_R. Similarly to the procedure followed in
Sec. <ref>, we can derive a Schrödinger equation within the
two soliton subspace by considering the effect of 𝒫_2ℋ_1
and ℋ_3 on the basis states.
The transverse field h_y and staggered off-diagonal exchange λ_yz
again give rise to hopping terms for the left and right solitons, with a
correction term arising for nearest neighbor solitons j_R=j_L+1 due to hard
core repulsion,
𝒫_2 V_y|j_L, j_R⟩ = h_y/2'∑_Δ=± 1( |j_L+Δ, j_R⟩
+|j_L,j_R+Δ⟩),
and
𝒫_2 V_yz|j_L, j_R⟩ =
Jλ_yz/2'∑_Δ=±
1Δ((-1)^j_L|j_L-Δ, j_R⟩-
(-1)^j_R|j_L,j_R-Δ⟩).
Here '∑ denotes a restricted summation constrained to valid
basis states, by dropping the unphysical terms |j_L+1,j_R⟩ and
|j_L,j_R-1⟩ for neighboring solitons j_R=j_L+1.
Besides these familiar terms, two new types of contributions arise compared to
the single soliton case. The XY exchange term,
V_S=-Jλ_S/2∑_j( S_j^+S_j+1^- + S_j^-S_j+1^+),
gives rise to a nearest neighbor interaction term between solitons,
𝒫_2 V_S |j_L, j_R⟩ =-Jλ_S/2δ_j_R-j_L,1∑_Δ=± 1|j_L-Δ, j_R-Δ⟩.
This term shifts the center of mass coordinate of the soliton pair by one
lattice site, without changing the relative coordinate j_R-j_L. Finally, the
magnetic field h_z leads to an attractive potential between the left and right
soliton,
ℋ_3|j_L, j_R⟩ =h_z(j_R-j_L)|j_L, j_R⟩.
Based on these relations, we construct the two-particle Schrödinger equation
for the wavefunction Ψ(j_L,j_R) defined through
|Ψ⟩ = ∑_j_L<j_RΨ(j_L,j_R) |j_L,j_R⟩.
Relying on translational invariance for the center of mass coordinate, it is
convenient to write
Ψ(2p_L+σ_L,2p_R+σ_R) = e^i k c(p_L+p_R)/2Φ_σ_Lσ_R^(k)(p_R-p_L),
with p_L/R labeling the two site unit cells, σ_L/R=0,1
distinguishing the even and odd sublattice and c(p_L+p_R)/2 being the
position of the center of mass of the soliton pair. For a fixed center of mass
momentum k, we obtain coupled equations for
Φ_σ_Lσ_R^(k)(n), defined on the half line n≥ 0 with
boundary conditions
Φ_00^(k)(0)=Φ_11^(k)(0)=Φ_10^(k)(0)=0. We present more
details on the numerical solution of these equations in
Appendix <ref>.
The two-soliton Schrödinger equation derived above yields a spectrum
consisting of three continua and three bound states across a wide range of
parameters, see Appendix <ref>. As mentioned above, the
origin of the three continua can be understood as due to the three different
ways of combining the two bands of solitons into two-soliton continua. The
origin of the bound states, which we term ε bound states to avoid
confusion, will be explored in the following section, Sec. <ref>.
Before turning to the detailed study of the ε bound states, we
conclude this section by deriving a formula for the INS spectrum within the two
soliton model, showing that the dominant contribution stems from the three bound
states. To this end, we approximate the ground state |GS⟩
appearing in the dynamical structure factor (<ref>) as the ground state
of the Ising Hamiltonian, |GS⟩≈
|...↑↑↑...⟩. Under this approximation,
S^xx=S^yy such that the dynamical structure factor is invariant under the
canonical transformation performed above. Acting with the spin operator S^x(k)
creates a soliton pair,
S^x(k)|GS⟩≈ (1/2)∑_j e^ik j c/2|j,j+1⟩,
where k=2π l/c is the soliton pair center of mass momentum. By substituting
the eigenstates |λ_f⟩ with the solutions of the two-soliton
Schrödinger equation constructed above, we arrive at the overlaps
|⟨λ_f|S^x(k)|GS⟩|^2∼|Φ_01^(k)(0)+
Φ_10^(k)(1)|^2.
Therefore, the ε bound states, which we will show to have a large
weight on the configurations with nearest neighbor domain walls, give the
dominant contribution to the dynamical spin structure factor (<ref>).
The INS intensity as calculated above is shown in the third column of
Fig. <ref>. The calculation uses the full Hamiltonian with the
same parameters as in the ED calculations, except that the spin vector
expectation value ⟨𝐒⟩ =(0,0,1/2) is assumed fixed,
rather than using a self-consistent value. This is a good approximation since
even at 2.5 T, the self-consistent value as calculated by ED is
⟨𝐒⟩=(0,-0.150,0.473). The agreement between the observed
spectrum and the model is still quantitative — all features and trends are
captured — but not quite as strong as for the exact diagonalization
calculation. For instance, there is a small overall energy shift, most visible
by comparing Figs. <ref>R and S; in the latter, energies are
shifted to lower values. However all key features are well reproduced at all
measured fields.
To gain more insight into the structure and magnetic field dependence of this
INS signal, we examine the ε bound states in the next section, by
relying on a perturbative argument around the localized limit h_-=0.
§ THE LOCALIZED LIMIT
As derived in Sec. <ref>, the staggered off-diagonal exchange
λ_yz and the transverse field h_y in the leading order Hamiltonian
ℋ_1 lead to hopping terms of opposite sign for the solitons. A
particularly interesting situation arises when these terms are matched, such
that h_-=0, resulting in localized single solitons. This localized limit
serves as a convenient starting point for perturbative considerations, shedding
light on the structure and magnetic field dependence of the ε bound
states, as well as the three two-soliton continua. First, in
Sec. <ref>, we set h_-=0, and study the two-soliton spectrum, in
particular, the nature of the two-soliton bound states. We then examine the
effect of a small non-zero delocalizing term h_- in
Sec. <ref>. Predictions for the evolution of the INS spectra
with decreasing transverse field h_y, and comparisons of these to the
experimental data, are discussed in Sec. <ref>. For most of this
section, we focus on the leading order Hamiltonian ℋ_1, with a brief
comment about the mean field ℋ_3 at the end of the section.
§.§ Localized limit h_-=0
For simplicity, we start by setting λ_S=0 as well, and only keep the
staggered off-diagonal exchange λ_yz and the transverse field h_y.
We will discuss the effect of the nearest neighbor exchange λ_S later.
Under these simplifications, a left soliton can hop between the two sites of a
unit cell p, 2p↔ 2p+1, with rate h_-, whereas it hops
between neighboring unit cells p-1 and p, through sites 2p-1↔
2p, with rate h_+, see Fig. <ref>A. For h_-=0, we
obtain the following eigenstates with energies ω_±,
|p⟩_L^±≡ (1/√(2))(|2p-1⟩_L±|2p⟩_L ), ω_±=J/2± h_+,
symmetric and antisymmetric under the inversion exchanging the even and odd
sublattices, respectively. For a right soliton, the role of h_- and h_+ is
interchanged, leading to symmetric / antisymmetric eigenstates localized within
a single unit cell p,
|p⟩_R^±≡ (1/√(2))(|2p⟩_R±|2p+1⟩_R ), with ω_±=J/2± h_+.
Relying on these observations, we can construct the localized two-soliton
eigenstates by considering a left soliton confined to sites 2p-1 and 2p, and
a right soliton on 2p^' and 2p^'+1. For p^'>p, the solitons
do not interact, and we obtain the eigenstates
|p⟩_L^-⊗|p^'⟩_R^-, with energy
ω_–=J-2h_+,
(1/√(2))[|p⟩_L^+⊗|p^'⟩_R^-±|p⟩_L^-⊗|p^'⟩_R^+], with ω_+-=J,
|p⟩_L^+⊗|p^'⟩_R^+, with ω_++=J+2h_+.
The eigenvalue ω_+-=J is doubly degenerate, and the eigenstates were
chosen to be symmetric / antisymmetric under inversion. We now construct
delocalized eigenstates with a well defined center of mass momentum k as
follows
|n,k⟩^±=(1/√(N))∑_cells,p e^ik c(2p+n)/2|p⟩_L^±⊗|p+n⟩_R^±,
|n,k⟩^0_±=(1/√(2N))∑_cells,p e^ik c(2p+n)/2(|p⟩_L^+⊗|p+n⟩_R^- ±|p⟩_L^-⊗|p+n⟩_R^+),
with N denoting the number of unit cells, and n≥ 1. These eigenstates
correspond to the three two-soliton continua arising from the different pairing
of bonding / anti-bonding orbitals. In the localized limit considered here, we
obtain highly degenerate flat bands at energies J ± 2 h_+ and J, reflected
by the free index n standing for the relative coordinate between the left and
right solitons.
Placing the left soliton to sites 2p-1 and 2p, and the right soliton to
2p^' and 2p^'+1 with p=p^' gives rise to interaction
through hard core repulsion, see Fig. <ref>B. In this case,
the eigenstates can be obtained by diagonalizing a 3× 3 matrix acting on
the three allowed configurations, yielding
|ε_±,p⟩=(1/2)(|2p-1,2p⟩+|2p,2p+1⟩ ±√(2)|2p-1,2p+1⟩),
|ε_0,p⟩=(1/√(2))(|2p-1,2p⟩-|2p,2p+1⟩),
with energies ε_±=J±√(2)h_+ and ε_0=J. The
corresponding momentum eigenstates form non-degenerate flat bands
(Fig. <ref>A) given by
|ε_α,k⟩=(1/√(N))∑_cells,p e^ikpc|ε_α,p⟩, for α=±,0.
These states are the origin of the ε bound states found through the
numerical solution of the two-soliton Schrödinger equation in
Sec. <ref>.
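A quick numerical check of this 3×3 block follows, assuming the symmetric form implied by the eigenstates above (equal amplitude h_+ connecting the two-flip configuration to each single-flip configuration); the values of J and h_+ are illustrative.

```python
import numpy as np

J, h_plus = 2.0, 0.3    # illustrative values
# Basis: |2p-1,2p>, |2p,2p+1>, |2p-1,2p+1>.  With h_- = 0 the only allowed moves
# connect the two single-flip states to the two-flip state, each with amplitude h_+.
H = np.array([[J,      0.0,    h_plus],
              [0.0,    J,      h_plus],
              [h_plus, h_plus, J     ]])

print(np.round(np.linalg.eigvalsh(H), 6))                             # numerical spectrum
print(np.round([J - np.sqrt(2) * h_plus, J, J + np.sqrt(2) * h_plus], 6))  # J -/+ sqrt(2) h_+, J
```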
We now consider the effect of a weak symmetric exchange λ_S, while
keeping h_-=0. As discussed in Sec. <ref>, this term only affects
nearest neighbor solitons by shifting the center of mass coordinate. Therefore,
the three continua, (<ref>), with solitons residing in different
unit cells are not affected. In contrast, the symmetric exchange acts
non-trivially on the bound states (<ref>), inducing an energy shift
calculated perturbatively as
ε_α⟶ε_α(k)≈ε_
α+⟨ε_α,k | V_S |ε_α,k⟩,
yielding dispersive bands
ε_±(k)=J±√(2)h_+-(Jλ_S/4)(1+cos kc),
ε_0(k)=J+(Jλ_S/2)(1+cos kc).
The full spectrum of the localized limit h_-=0, with weak symmetric exchange
λ_S is illustrated in Fig. <ref>B, showing the
three highly degenerate flat continua, and the three dispersive ε
bound states, which become delocalized by λ_S. Note that in
CoNb2O6, λ_S is large enough that the dispersion causes the
ε_0 and ε_+ bands to cross, leading to band inversion.
This effect is discussed quantitatively in Appendix <ref> and
illustrated in Fig. <ref>C. The band inversion further
suppresses the dispersion of the top mode, as well as mixing the character of
the two bands.
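The following sketch evaluates these first-order dispersions for illustrative parameters and checks the band-inversion criterion Jλ_S > 2√(2)h_+/3 at k=0 (cf. Appendix <ref>); the couplings are not the fitted CoNb2O6 values.

```python
import numpy as np

def epsilon_bands(k_c, J, h_plus, lam_S):
    """First-order dispersions of the three epsilon bound states at h_- = 0."""
    eps_m = J - np.sqrt(2) * h_plus - 0.25 * J * lam_S * (1 + np.cos(k_c))
    eps_p = J + np.sqrt(2) * h_plus - 0.25 * J * lam_S * (1 + np.cos(k_c))
    eps_0 = J + 0.50 * J * lam_S * (1 + np.cos(k_c))
    return eps_m, eps_0, eps_p

# Illustrative parameters; inversion at k = 0 sets in once J*lam_S > 2*sqrt(2)*h_plus/3.
J, h_plus = 2.0, 0.2
for lam_S in (0.05, 0.30):
    _, e0, ep = epsilon_bands(0.0, J, h_plus, lam_S)
    tag = "inverted" if e0 > ep else "not inverted"
    print(f"lam_S = {lam_S:.2f}: eps_0(0) - eps_+(0) = {e0 - ep:+.3f}  ({tag})")
```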
§.§ Effects of weak delocalizing hopping h_-
As shown above, the localized limit h_-=0 provides remarkable insight into the
structure of the two soliton spectrum obtained near to this limit. We can gain a
more detailed understanding of the evolution of the spectrum with decreasing
transverse field by considering the effect of a weak delocalizing hopping term
h_- perturbatively. The first important effect of h_-≠ 0 is lifting the
high degeneracy of the three continua and broadening these bands. Secondly, a
finite h_- can mix the ε bound states constructed in the previous
section with the continua in k regions where they overlap in energy.
We first explore the first effect, by focusing on the continua, and applying
degenerate perturbation theory within each band separately. We note that for
weak h_- and λ_S, the bottom and top bands remain well separated from
the ε bound states. Assuming also λ_S≫ h_-, the middle
continuum overlaps with the middle bound state only in the vicinity of
k=π/c, see Fig. <ref>D. Therefore, our treatment of
neglecting the hybridization between the continua and the ε bound
states is justified in the limit of weak couplings h_+≫λ_S≫ h_-,
apart from the case of the middle band in the vicinity of k=π/c. While the
experimental parameters lie outside of this well controlled region, we will
demonstrate below that a first order perturbative expansion grants valuable
insight into the evolution of the INS spectra for the whole range of applied
transverse fields.
Denoting the hopping term by V_h_-, we find that the matrix elements between
the single soliton eigenstates of the localized limit are
_α^±⟨ p^'|V_h_-|p⟩_α^±=±(h_-/2)(δ_p^',p+1+δ_p^',p-1), α=L,R.
Up to first order in perturbation theory, the energy shifts of the three
continua can be evaluated by considering the matrix elements of V_h_-
between the two-soliton eigenstates of the localized limit, (<ref>),
within each band separately.
Relying on the relations (<ref>), we obtain for the top and bottom
bands
^±⟨ n^',k^'|V_h_- |n,k⟩^±=
±δ_k,k^'
h_-cos(kc/2)(δ_n,n^'+1+δ_n,n^'-1).
For a fixed center of mass momentum k, this equation corresponds to an
effective nearest neighbor hopping Hamiltonian for the relative coordinate n,
with hopping amplitude ± h_-cos(kc/2), subject to the hard core constraint
n>0. Therefore, for the bottom and top bands we find the spectrum,
ω_σσ = J+2σ(h_+ + h_-
cos(kc/2)cos(qc/2)), σ=±1,
with approximate unnormalized eigenstates
|q,k⟩^±∼∑_cell separation,
n>0sin(qnc/2) |n,k⟩^±.
Here, k is the total momentum of the soliton pair and q is the relative
momentum of the two solitons, and the factor sin(qnc/2) reflects hard core
repulsion n>0. Thus, the weak hopping h_- broadens the bands, most strongly
around k=0, but the high degeneracy still persists at k=π/c, see
Fig. <ref>D.
Turning to the middle continuum, we have to calculate four types of matrix
elements between the two-soliton states |n,k⟩_±^0. We obtain
_-^0⟨ n^',k^'|V_h_-|n,k⟩_+^0= _+^0⟨
n^',k^'|V_h_-|n,k⟩_-^0
=δ_k,k^' i
h_-sin(kc/2)(δ_n^',n+1-δ_n^
',n-1),
_-^0⟨ n^',k^'|V_h_-|n,k⟩_-^0= _+^0⟨
n^',k^'|V_h_-|n,k⟩_+^0 =0.
We can now calculate the broadening of this band by diagonalizing this matrix
within a given total momentum sector k. We find that the eigenstates in the
middle band remain at least twofold degenerate everywhere, yielding the spectrum
ω_+-=J-2 h_-
sin(kc/2)sin(qc/2),
with approximate unnormalized eigenstates
|q,k⟩^0_1∼∑_cell separation,
n>0( e^iqcn/2-e^-i(qc/2+π)n)|n,k⟩^0_ξ(n),
|q,k⟩^0_2∼∑_cell separation,
n>0( e^iqcn/2-e^-i(qc/2+π)n)|n,k⟩^0_ξ(n+1),
with ξ(n) = ± for n even/odd, and q again standing for the relative
momentum. Thus, the middle band remains highly degenerate around k=0, but it
is broadened away from this point, most strongly around k=π/c.
We note that these dispersions of the continua can be understood based on the
single-soliton spectrum as given in (<ref>).
In the limit of small h_-, (<ref>) becomes
ω_±=J/2±(h_+ + h_-cos kc).
The three continua constructed above correspond to the three different types of
soliton pairs. Denoting the individual solitons' momenta by k_1 and k_2, we
obtain the energies
ω_++= J+2h_+ + h_-(cos k_1 c+cos k_2 c)
= J+2h_+ + 2h_-cos(kc/2)cos(qc/2),
ω_–= J-2h_+ - h_-(cos k_1 c+cos k_2 c)
= J-2h_+ - 2h_-cos(kc/2)cos(qc/2),
ω_+-= J + h_-(cos k_1 c-cos k_2 c)
= J - 2h_-sin(kc/2)sin(qc/2),
with total wavevector k=k_1+k_2, and relative momentum q=k_1-k_2, in
accordance with the expressions derived above.
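For orientation, the following sketch tabulates the first-order edges of the three continua by letting cos(qc/2) and sin(qc/2) run over [-1, 1]; the couplings are illustrative.

```python
import numpy as np

def continuum_edges(k_c, J, h_plus, h_minus):
    """Lower/upper edges of the three two-soliton continua at first order in h_-,
    obtained by letting cos(qc/2) and sin(qc/2) range over [-1, 1]."""
    half_outer = 2.0 * abs(h_minus * np.cos(k_c / 2))   # top and bottom continua
    half_mid = 2.0 * abs(h_minus * np.sin(k_c / 2))     # middle continuum
    return {
        "bottom": (J - 2 * h_plus - half_outer, J - 2 * h_plus + half_outer),
        "middle": (J - half_mid, J + half_mid),
        "top":    (J + 2 * h_plus - half_outer, J + 2 * h_plus + half_outer),
    }

# Illustrative values: the outer continua are broadest at k = 0, the middle one at k = pi/c.
J, h_plus, h_minus = 2.0, 0.3, 0.1
for k_c in (0.0, np.pi):
    edges = continuum_edges(k_c, J, h_plus, h_minus)
    print(f"kc = {k_c:.2f}:", {name: (round(lo, 3), round(hi, 3)) for name, (lo, hi) in edges.items()})
```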
The second important effect of the hopping h_- is mixing the ε
bound states with the continua where they overlap in energy. In these regions,
the originally bound soliton pair can become delocalized, broadening out the INS
signal, as shown in Fig. <ref>E.
§.§ Comparison with INS data
We conclude this section by describing the INS intensity predicted by these
perturbative arguments. In the localized limit h_-=0, the continua
(<ref>) have no overlap with nearest neighbor soliton pairs
|j,j+1⟩ so do not contribute to the overlap (<ref>) and to
the resulting INS spectrum. The ε bound states (<ref>), on
the other hand, have the property that
⟨ε_±,k|S^x(k)|GS⟩ = √(N)(e^ikc/2+1)/4,
⟨ε_0,k|S^x(k)|GS⟩= √(N)(e^ikc/2-1)/(2√(2)),
following from comparing (<ref>) to (<ref>). Therefore, the
different bound states contribute differently to the INS spectrum
S^xx(Q,ω). This structure is inherited by the bound states away from
the limit h_-=0.
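A short numerical sketch of the resulting weights follows; it simply evaluates |⟨ε_α,k|S^x(k)|GS⟩|^2/N from the overlaps above (up to the overall scale) and reproduces the l-dependence described in the next paragraph.

```python
import numpy as np

def bound_state_weights(l):
    """Relative INS weights (per unit cell) of the epsilon bound states in the
    localized limit h_- = 0, from the overlaps above; l = k c / (2 pi)."""
    phase = np.exp(1j * np.pi * l)           # e^{i k c / 2}
    w_pm = np.abs(phase + 1.0) ** 2 / 16.0   # each of eps_+ and eps_-
    w_0 = np.abs(phase - 1.0) ** 2 / 8.0     # eps_0
    return w_pm, w_0

for l in (0.0, 0.5, 1.0):
    w_pm, w_0 = bound_state_weights(l)
    print(f"l = {l:.1f}: weight(eps_+/-) = {w_pm:.3f}, weight(eps_0) = {w_0:.3f}")
```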
In the absence of band inversion (Jλ_S<2√(2)h_+/3), the three bound
states can be labeled by ε_±,0, such that the bottom and top
modes ε_± yield strong signals in the vicinity of l=0 and are
suppressed near l=1 [note that l=kc/(2π)], whereas the middle band
ε_0 behaves in the opposite way, see
Fig. <ref>B. If λ_S is large enough for band
inversion to occur, the top and middle bands acquire new labels
ε_1,2, with ε_1 retaining the structure of
ε_+ around l=0.5 but inheriting the character of ε_0
around l=0 and l=1, and the reverse holding for ε_2. This
results in a transfer of intensity between the top two bound states in the
vicinity of l=0 and l=1. That is, the top mode is strong at l=1 and weak
at l=0, and this is opposite to the middle band
(Fig. <ref>C).
For small but finite h_-, the lowest ε bound state mode
ε_-, which is strong around l=0 in the limit h_-=0, is pushed
into the bottom continuum due to the strong broadening of the continuum with
h_-. The mixing between these states leads to a signal smeared out across a
larger range of energies. Similarly, the middle bound state mode mixes with the
middle continuum around l=0.5, smearing and eventually almost completely
washing out the signal, see Fig. <ref>E.
However, for λ_S large enough to lead to band inversion, the top mode
does not hybridize with the continuum around l=1, even for large h_-. This
is because the upper continuum consists of states that are even under exchange
of the even and odd sublattices, whereas the bound state ε_0 is odd,
shedding light on the remarkably sharp INS spectrum of the top state around
l=1, even far from the localized limit h_-=0. That is, the top mode
ε_1 is sharp at all fields in Fig. <ref> in the data
(first column) and in the calculation (third column), even though there are
regions where it overlaps with states originating from the top continuum, as
illustrated in Fig. <ref>, last column.
This right-most column of Fig. <ref> shows the INS intensity as
calculated in this section, using the same parameters as for the middle two
columns but with no longitudinal mean field (ℋ_3=0). Comparing
Fig. <ref>T and P with Q and M respectively, it is seen that the
above description is in remarkable qualitative agreement with the experimental
results in large fields 2.5 T and 1.5 T. The agreement is especially good at
2.5 T (compare Figs. <ref>S and T), which is close to the
localized limit. This agreement is strongly supportive of the model presented in
this section, especially given that the model is relatively simple and
completely analytically tractable. At lower fields, the agreement is expected to
be less good as the solitons are now more delocalized and so a perturbative
treatment around the localized limit is expected to be less quantitatively
accurate.
Nevertheless, several key trends are still reproduced: in particular, the top
mode ε_1 is captured at all fields, the middle mode ε_2
is captured down to 1 T, and the lowest mode ε_- is captured down
to 1.5 T.
The calculations in the right-most column of Fig. <ref> include
the effect of band inversion on the top two bound states, as well as the terms
in ℋ_2, as explained in Appendix <ref>. We note that
the full Hamiltonian also contains a z magnetic field, ℋ_3,
changing the nature of the continua. This term introduces a linear confinement,
splitting the continua into confinement bound states, which cannot be captured
in this calculation. Despite these important effects, the tightly bound
ε bound states remain relatively unaffected by the confinement, and
the qualitative predictions for the INS spectra presented in this section still
hold, see Appendix <ref>. We note that, at first order, a
finite longitudinal mean field (ℋ_3) would be expected to increase
the energies of all the ε-bound states, which would bring the
calculated dispersions in the right-most column of Fig. <ref>
into closer agreement with the experimental data (left-most column).
§ CONCLUSIONS
We investigated the spectrum of the Ising chain material CoNb2O6 as a
function of low to intermediate transverse field in the ordered phase using
inelastic neutron scattering experiments. We compared the measured spectrum to
predictions based on a recently refined Hamiltonian containing all relevant
sub-leading terms beyond the dominant Ising exchange and found strong
quantitative agreement. We then sought a physical picture of the excitations. We
found that by restricting the Hilbert space to the two soliton subspace at first
order in perturbation theory, very good agreement between the calculation and
experiment was still achieved. The resulting spectrum in general has three
continua and three bound states, of which only the bound states contribute
significant weight to the inelastic neutron scattering intensity. In order to
understand the character of the bound states, we considered the localized limit,
in which the soliton hopping term on alternate bonds is zero. This occurs when
the applied field matches the strength of the off-diagonal exchange. We found
that the bound states in this limit are of two solitons in adjacent unit cells,
stabilized by hardcore repulsion leading to a change in delocalization energy.
The bound states survive well away from the localized limit, suggesting that
this picture has a broader domain of validity than might initially be expected.
Using this physical picture, we have been able to gain both qualitative and
quantitative understanding of the low energy spectrum of CoNb2O6 in the low
transverse field ordered phase.
L.W. acknowledges support from a doctoral studentship funded by Lincoln College
and the University of Oxford. I.L. acknowledges support from the Gordon and
Betty Moore Foundation through Grant GBMF8690 to UCSB and from the National
Science Foundation under Grant No. NSF PHY-1748958. D.P. acknowledges support
from the Engineering and Physical Sciences Research Council grant number
GR/M47249/01. L.B. was supported by the NSF CMMT program under Grant No.
DMR-2116515, and by the Simons Collaboration on Ultra-Quantum Matter, which is a
grant from the Simons Foundation (651440). R.C. acknowledges support from the
European Research Council under the European Union’s Horizon 2020 research and
innovation programme Grant Agreement Number 788814 (EQFT). The neutron
scattering measurements at the ISIS Facility were supported by a beamtime
allocation from the Science and Technology Facilities Council.
§ SOLUTION OF THE TWO SOLITON SCHRÖDINGER EQUATION
In this Appendix we present more details on the derivation and numerical
solution of the two-soliton Schrödinger equation constructed in
Sec. <ref>. Using the action of 𝒫_2ℋ_1 and
ℋ_3 on the basis states, the Schrödinger equation,
⟨ j_L,
j_R|ℋ_1𝒫_2+ℋ_3|Ψ⟩=ωΨ(j_L,j_R),
yields
(1/2)'∑_Δ=± 1[(h_y+Δ(-1)^j_LJλ_yz) Ψ(j_L-Δ,j_R)
1[(h_y+Δ(-1)^j_LJλ_yz) Ψ(j_L-Δ,j_R)
+(h_y-Δ(-1)^j_RJλ_yz)Ψ(j_L,j_R-Δ)]
-Jλ_S/2δ_j_R-j_L,1∑_Δ=±
1Ψ(j_L-Δ,j_R-Δ)+h_z(j_R-j_L)Ψ(j_L,j_R) =(ω-2ϵ_0)
Ψ(j_L,j_R).
Here ϵ_0=J/2 is the energy cost of a single domain wall, and
'∑ stands for a constrained summation restricted to the
physical domain of Ψ(j_L^',j_R^'), j_L^'<j_R^'.
Introducing the center of mass momentum k, and rewriting this equation in
terms of Φ_σ_Lσ_R^(k)(n) according to (<ref>), with
n labeling the distance between two-site unit cells, leads to
h_+ e^-ikc/2Φ_10^(k)(n+1)+h_- Φ_10^(k)(n) + h_- e^-ikc/2Φ_01^(k)(n-1) + h_+ Φ_01^(k)(n)+2n h_z Φ_00^(k)(n)=
(ω-2ϵ_0) Φ_00^(k)(n), n≥ 1,
h_- Φ_00^(k)(n) + h_+ e^i kc/2Φ_00^(k)(n-1) + h_- e^-i
kc/2Φ_11^(k)(n-1)
+ h_+ Φ_11^(k)(n) + h_z(2n-1)
Φ_10^(k)(n)-Jλ_S δ_n,1cos(kc/2)Φ_01^(k)(n-1) =
(ω-2ϵ_0)Φ_10^(k)(n), n≥ 1,
h_+ e^-i kc/2Φ_11^(k)(n+1)+h_- Φ_11^(k)(n) + h_+
Φ_00^(k)(n)+ h_- e^i kc/2Φ_00^(k)(n+1)
+h_z(2n+1) Φ_01^(k)(n)- Jλ_S δ_n,0cos(kc/2) Φ_10^(k)(n+1) = (ω-2ϵ_0)
Φ_01^(k)(n), n≥ 0,
h_- Φ_01^(k)(n)+ h_+ e^i kc/2Φ_01^(k)(n-1) +
h_+ Φ_10^(k)(n) + h_- e^i kc/2Φ_10^(k)(n+1)+2n
h_zΦ_11^(k)(n) = (ω-2ϵ_0) Φ_11^(k)(n), n≥ 1,
with boundary conditions
Φ_00^(k)(0)=Φ_11^(k)(0)=Φ_10^(k)(0)=0. This set of
equations can be solved numerically by truncating them at a large maximal
distance between the solitons, n_ max or by using periodic boundary
conditions on a finite ring. By defining the vector
Φ^(k) =
[ Φ_01^(k)(0); Φ_00^(k)(1); Φ_11^(k)(1); Φ_10^(k)(1); Φ_01^(k)(1); Φ_00^(k)(2); Φ_11^(k)(2); ⋮ ],
we obtain a matrix equation, allowing us to determine the low energy spectrum.
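As a sketch of this procedure, the code below assembles the truncated matrix for ℋ_1+ℋ_3 in the basis ordering of Φ^(k) above and diagonalizes it; the couplings are illustrative rather than the Table <ref> values, the subleading ℋ_2 corrections of Appendix <ref> are omitted, and the INS weight is taken as |Φ_01^(k)(0)+Φ_10^(k)(1)|^2 as in the main text (up to phase conventions).

```python
import numpy as np

def two_soliton_spectrum(k_c, J, lam_yz, lam_S, h_y, h_z, n_max=40):
    """Diagonalize the truncated two-soliton problem for H_1 + H_3 at fixed
    centre-of-mass momentum k (k_c = k*c).  Returns the excitation energies
    omega and the overlaps |Phi_01(0) + Phi_10(1)|^2 controlling the INS weight."""
    h_p = 0.5 * (h_y + J * lam_yz)
    h_m = 0.5 * (h_y - J * lam_yz)
    ph = np.exp(-1j * k_c / 2)          # phase attached to hops that change n

    # Basis ordering as in the vector Phi^(k): (0,1,0), then (0,0,n), (1,1,n),
    # (1,0,n), (0,1,n) for n = 1 ... n_max.
    basis = [(0, 1, 0)]
    for n in range(1, n_max + 1):
        basis += [(0, 0, n), (1, 1, n), (1, 0, n), (0, 1, n)]
    index = {s: i for i, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)), dtype=complex)

    def add(s1, s2, amp):
        """Add a Hermitian pair of matrix elements if both states are in the basis."""
        if s1 in index and s2 in index:
            H[index[s1], index[s2]] += amp
            H[index[s2], index[s1]] += np.conj(amp)

    for (sL, sR, n) in basis:           # linear confining potential h_z * (j_R - j_L)
        H[index[(sL, sR, n)], index[(sL, sR, n)]] += h_z * (2 * n + sR - sL)

    for n in range(0, n_max + 1):       # off-diagonal couplings of the coupled equations
        add((0, 0, n), (1, 0, n + 1), h_p * ph)
        add((0, 0, n), (1, 0, n), h_m)
        add((0, 0, n), (0, 1, n - 1), h_m * ph)
        add((0, 0, n), (0, 1, n), h_p)
        add((1, 0, n), (1, 1, n - 1), h_m * ph)
        add((1, 0, n), (1, 1, n), h_p)
        add((0, 1, n), (1, 1, n + 1), h_p * ph)
        add((0, 1, n), (1, 1, n), h_m)
    add((1, 0, 1), (0, 1, 0), -J * lam_S * np.cos(k_c / 2))   # hard-core lambda_S term

    evals, evecs = np.linalg.eigh(H)
    omega = evals + J                   # add back 2*epsilon_0 = J
    weight = np.abs(evecs[index[(0, 1, 0)], :] + evecs[index[(1, 0, 1)], :]) ** 2
    return omega, weight

# Example: the few lowest states at l = 0.5 for illustrative couplings.
omega, weight = two_soliton_spectrum(k_c=np.pi, J=2.0, lam_yz=0.25,
                                     lam_S=0.25, h_y=0.3, h_z=0.02)
for w, intensity in list(zip(omega, weight))[:5]:
    print(f"omega = {w:.3f},  INS weight ~ {intensity:.3f}")
```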
§ EFFECT OF SUBLEADING TERMS ℋ_2 IN THE TWO-SOLITON
PICTURE
In this Appendix we briefly discuss the correction terms to the two-soliton
Schrödinger equation derived in Appendix <ref> arising from the
subleading couplings in ℋ_2. We examine the action of
ℋ_2 term by term.
Following the convention of Sec. <ref>, we rewrite ℋ_2
in a rotated basis, S^x_j→ -S^y_j and S^y_j→ S^x_j.
We first consider the anti-symmetric nearest neighbor coupling,
V_A=Jλ_A/2∑_sites,j( S_j^+S_j+1^+ +
S_j^-S_j+1^-),
raising or lowering two neighboring spins. Projected to the single soliton
subspace, this term moves the soliton by two sites,
𝒫_1
V_A|j⟩_L=Jλ_A2(|j-2⟩_L+|j+2⟩_L).
In the two-soliton region, we obtain a next to nearest neighbor hopping term for
the left and right solitons, whenever permitted by the hard core constraint
j_R>j_L. Applying the representation (<ref>), we get the following
contribution to the left hand side of the Schrödinger equation for
(ω-2ϵ_0)Φ_σ_Lσ_R^(k)(n),
Jλ_A cos(kc/2)[Φ_σ_Lσ_R^(k)(n+1) +(1-δ_n,0-δ_n,1(1-δ_σ_L,0δ_σ_R,1)) Φ_σ_Lσ_R^(k)(n-1)].
The perturbation
V_AF^xy=Jλ_AF^xy/2∑_sites,j(S_j^+S_j+2^-
+ S_j^-S_j+2^+)
flips a pair of next nearest neighbor spins in opposite directions. Acting on a
single soliton, V_ AF^xy|j⟩_L always leaves the single soliton
subspace. In the presence of two solitons, |j_L,j_R⟩, however, we get a
non-vanishing short range contribution for j_R≤ j_L+2,
𝒫_2 V_AF^xy|j_L,j_R⟩ = (Jλ_AF^xy/2) ∑_Δ=± 1(δ_j_R,j_L+1 |j_L+2Δ,j_R+2Δ⟩ + δ_j_R,j_L+2 |j_L+Δ,j_R+Δ⟩).
This term shifts the center of mass coordinate by ± 2 or ± 1 sites for a
spin down domain of length j_R-j_L=1 and j_R-j_L=2, respectively. In the
Schrödinger equation, it leads to the following four extra contributions on
the left hand side,
Jλ_ AF^xycos(kc)Φ_01^(k)(0) ⟵
(ω-2ϵ_0)Φ_01^(k)(0),
Jλ_ AF^xycos(kc)Φ_10^(k)(1) ⟵
(ω-2ϵ_0)Φ_10^(k)(1),
Jλ_AF^xy[(1+e^-ikc)/2]Φ_11^(k)(1) ⟵ (ω-2ϵ_0)Φ_00^(k)(1),
Jλ_AF^xy[(1+e^ikc)/2]Φ_00^(k)(1) ⟵ (ω-2ϵ_0)Φ_11^(k)(1).
Finally, the effect of the perturbation
V_AF=Jλ_AF∑_sites,jS_j^zS_j+2^z
is to lower the energy of all states relative to the fully aligned (ground)
state. This term is diagonal in the two-soliton basis |j_L,j_R⟩,
yielding an energy shift depending on the size of the spin down domain. If there
is only a single spin flip, j_R-j_L=1, only two antiferromagnetic bonds are
satisfied, whereas if there are two or more spin flips, j_R-j_L≥ 2, four
antiferromagnetic bonds are satisfied, i.e.,
V_AF|j,j+1⟩ =-Jλ_AF|j,j+1⟩
V_AF|j,j+2⟩ =-2Jλ_AF|j,j+2⟩,
where only the energy difference between the excited state and the ground state
has been kept.
These considerations lead to the energy shift
ω-2ϵ_0 ⟶ω-2ϵ_0 -Jλ_
AF, if 2n+σ_R-σ_L=1,
ω-2ϵ_0 ⟶ω-2ϵ_0 -2Jλ_
AF, if 2n+σ_R-σ_L>1,
on the left hand side of the Schrödinger equation for
Φ_σ_Lσ_R^(k)(n).
§ BOUND STATES IN THE TWO SOLITON SPECTRUM
In this Appendix, the two soliton spectrum in various regimes is briefly
discussed. The left column of Fig. <ref> shows the two soliton
spectrum as a function of field for the minimal Hamiltonian ℋ_1. In
this case, the spectrum consists of three continua and three bound states, whose
origins are discussed in Sec. <ref>, at every non-zero field. The
right hand column of Fig. <ref> shows the same calculation for the
full Hamiltonian ℋ_1+ℋ_2+ℋ_3, where
ℋ_3 is the confining mean field. In this case, the mean field splits
the continua into a series of confinement bound states, but the ε
bound states are left mostly intact, because they are tightly bound.
§ BOUND STATE INVERSION AND MATRIX ELEMENTS OF ℋ_2 IN THE
LOCALIZED LIMIT
In this Appendix, we consider the spectrum near the localized limit, h_-=0,
obtained when the transverse field satisfies h_y=Jλ_yz. When
3Jλ_S/(2√(2)) > h_+, as is the case for the experimentally relevant
parameters, the top two ε bound state modes cross each other, so we
must use degenerate perturbation theory within the subspace of these top two
modes to calculate the resulting spectrum.
The unperturbed bound states and their energies are given in (<ref>).
In the following, we consider the effects of various other terms in the
Hamiltonian in (<ref>) to (<ref>), starting with the second term in
ℋ_1.
We consider the matrix elements between the two highest energy modes
(|ε_+, k⟩ and |ε_0, k⟩) for the perturbation
V_S=-(Jλ_S/2)∑_sites,j( S_j^+S_j+1^- + S_j^-S_j+1^+).
This perturbation allows single spin flips to hop by one site.
The diagonal matrix elements are
⟨ε_+, k |V_S|ε_+, k'⟩ = -Jλ_S/4(1+cos kc) δ_k,k'
⟨ε_0, k |V_S|ε_0, k'⟩ = +Jλ_S/2(1+cos kc) δ_k,k' ,
and are consistent with the expressions given in Sec. <ref>.
Within the degenerate subspace, off-diagonal matrix elements are
⟨ε_0,k|V_S|ε_+,k'⟩=i[Jλ_S/(2√(2))]sin(kc) δ_k,k'
and Hermitian conjugate.
Eigenvalues and eigenvectors are then obtained by direct diagonalization, with
the dynamical correlations S^xx obtained from the eigenvectors as described
in Sec. <ref>. We term the resulting modes ε_1 and
ε_2.
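A minimal sketch of this 2×2 diagonalization follows, using the diagonal and off-diagonal matrix elements quoted above with illustrative parameters; the upper eigenvalue is identified with the top mode ε_1.

```python
import numpy as np

def top_two_modes(k_c, J, h_plus, lam_S):
    """Diagonalize the 2x2 block of |eps_+, k> and |eps_0, k> including the
    off-diagonal V_S element; returns the two eigenvalues in ascending order."""
    e_plus = J + np.sqrt(2) * h_plus - 0.25 * J * lam_S * (1 + np.cos(k_c))
    e_zero = J + 0.50 * J * lam_S * (1 + np.cos(k_c))
    v = 1j * J * lam_S / (2 * np.sqrt(2)) * np.sin(k_c)   # <eps_0|V_S|eps_+>
    H = np.array([[e_plus, np.conj(v)],
                  [v,      e_zero   ]])
    return np.linalg.eigvalsh(H)

# Illustrative parameters in the band-inverted regime J*lam_S > 2*sqrt(2)*h_plus/3.
J, h_plus, lam_S = 2.0, 0.2, 0.3
for l in (0.0, 0.25, 0.5):
    eps_2, eps_1 = top_two_modes(2 * np.pi * l, J, h_plus, lam_S)
    print(f"l = {l:.2f}: eps_2 = {eps_2:.3f}, eps_1 (top mode) = {eps_1:.3f}")
```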
We also derive the matrix elements of the next nearest neighbour terms in
ℋ_2. Consistent with the considerations above, we only keep the
diagonal matrix elements between bound states of the same type, as well as the
off-diagonal matrix elements between the middle and top modes ε_0
and ε_+, which show a strong mixing at the experimental parameters.
The effect of the perturbation V_AF, (<ref>),
is to lower the energy of all states relative to the fully aligned (ground)
state. However, as discussed in Appendix <ref>, this energy
shift is different for states with a single spin flip compared to states with at
least two spin flips, corresponding to two and four satisfied antiferromagnetic
bonds, respectively.
This leads to matrix elements
⟨ε_+,k|V_AF|ε_+,k'⟩=⟨ε_-,k|V_AF|ε_-,k'⟩= -3/2J
λ_AFδ_k,k'
⟨ε_0,k|V_AF|ε_0,k'⟩= -Jλ_AFδ_k,k'
⟨ε_0,k|V_AF|ε_+,k'⟩= 0.
This perturbation also shifts the energies of all the continua by
-2Jλ_AF.
The effect of the perturbation V_AF^xy, (<ref>),
is to hop single spin flips by two sites. This leads to matrix elements
⟨ε_+,k|V_AF^xy|ε_+,k'⟩=⟨ε_-,k|V_AF^xy|ε_-,k'⟩= J
λ_AF^xy/2cos(kc)δ_k,k'
⟨ε_0,k|V_AF^xy|ε_0,k'⟩= Jλ_AF
^xycos(kc)δ_k,k'
⟨ε_0,k|V_AF^xy|ε_+,k'⟩= 0.
To first order, this perturbation vanishes when acting on the continua. However,
it mixes the continua with the bound states.
The interchain mean field term, ℋ_3, is
V_z= -h_z∑_sites,jS_j^z
under the approximation that ⟨𝐒⟩=(0,0,1/2).
The effect of this term on the bound states is determined by the matrix elements
⟨ε_+,k|V_z|ε_+,k'⟩=⟨ε_-,k|V_z|ε_-,k'⟩= 3/2h_zδ_k,k'
⟨ε_0,k|V_z|ε_0,k'⟩= h_zδ_k,k'
⟨ε_0,k|V_z|ε_+,k'⟩= 0.
The effect of this term on the continua is to confine the soliton pairs into a
series of bound states; this effect cannot be captured within the current
picture.
Finally we note that the first term in ℋ_2, V_A, (<ref>),
vanishes when projected to the subspace of bound states, since this term causes
solitons to hop by two sites at a time. To understand the effect of this term on
the continua, we note that V_A, a hopping term to a neighboring unit cell,
shifts the single soliton dispersion relations as
ω_±⟶ω_± + Jλ_Acos(kc).
In contrast to the effect of the hopping h_-, V_A induces the same shift in
the energies of bonding / antibonding orbitals. As a result, this perturbation
leads to the same energy change for the three continua,
ω_σ,σ^' ⟶ω_σ,σ^'+J
λ_A[cos(k_1c)+cos(k_2c)]
=ω_σ,σ^'+Jλ_Acos(kc2)cos(qc2),
for σ,σ^'=±, with total and relative momenta given by
k=k_1+k_2 and q=k_1-k_2, respectively.
Alternatively, these spectra can be obtained by applying first order
perturbation theory around the localized limit, similarly to the analysis of the
hopping h_- presented in the main text. To this end, we first evaluate the
effect of V_A on the states |n,k⟩ constructed in
Sec. <ref>. We find
^±⟨ n^', k^'|V_A|n,k⟩^±= ^0_±⟨
n^', k^'|V_A|n,k⟩^0_±
=Jλ_Acos(kc/2)(δ_n^',n+1+
δ_n^',n-1)δ_k,k^',
corresponding to an effective nearest neighbor hopping Hamiltonian for the
relative coordinate n, with hopping amplitude Jλ_Acos(kc/2), subject
to the hard core constraint n>0.
For the top and bottom continua, the hopping amplitude due to V_h_- is also
real, so the |q,k⟩^± states constructed in
Sec. <ref> are also eigenstates of V_A and the change in the
energies is
^±⟨ q,k|V_A|q,k⟩^±/^±⟨ q,k|q,k⟩^± =
2Jλ_Acos(kc/2)cos(qc/2).
For the middle continuum, we consider the effect of V_h_-+V_A on the
plane wave
∑_n e^iqcn/2(|n,k⟩^0_++|n,k⟩^0_-).
We find that the perturbation corresponds to an effective nearest neighbour
hopping Hamiltonian for the relative coordinate n, with complex hopping
amplitude t=Jλ_Acos(kc/2)+i
h_-sin(kc/2)=t^'+it^''=|t|e^iφ.
This yields the dispersion
ω^A_+- = J+2|t|cos(qc/2+φ)
=J+2t^'cos(qc/2)-2t^''sin(qc/2)=
J-2h_-sin(kc/2)sin(qc/2)+2Jλ_A
cos(kc/2)cos(qc/2).
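For reference, the modulus and phase of the complex hopping amplitude follow directly from its definition (these expressions are implied by, but not written explicitly in, the text above): |t| = √( J^2λ_A^2cos^2(kc/2) + h_-^2sin^2(kc/2) ) and tanφ = h_-sin(kc/2)/(Jλ_Acos(kc/2)), so that for λ_A→0 one recovers φ=π/2 and |t|=h_-|sin(kc/2)|, consistent with the h_--only analysis in the main text.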
The eigenstates satisfying the hard core repulsion boundary condition at n=0
can be obtained by noting that the plane wave with relative momentum q is
degenerate with the plane wave with relative momentum -q-4φ/c. Mixing
these plane waves leads to the unnormalized eigenstates satisfying the hard core
constraint,
∑_cell separation,
n>0(e^iqcn/2-e^-i(qc/2+2φ)n)(|n,k⟩^0_++|n,k
⟩^0_-).
For the plane wave
∑_n e^iqcn/2(|n,k⟩^0_+-|n,k⟩^0_-),
the effect of V_h_-+V_A corresponds to an effective nearest neighbour
hopping Hamiltonian for the relative coordinate n, with complex hopping
amplitude t^*, such that the argument above applies with φ→
-φ. Thus, the effect of the perturbation V_A is to add
2Jλ_Acos(kc/2)cos(qc/2) to the energies of all continua, as
anticipated based on the single soliton dispersion relation. We also note that
the argument above yields the following two degenerate eigenstates in the
presence of V_h_- but without V_A, i.e. for φ=π/2,
∑_ n>0(e^icq_± n/2- e^-i(cq_±
/2+π)n)(|n,k⟩^0_+± |n,k⟩^0_-),
with q_+-q_-=π. The eigenstates constructed in the main text are the
symmetric / antisymmetric combinations of these eigenstates.
For λ_A<0 and h_->0, as found experimentally, the perturbation
V_A causes the top continuum to narrow, the bottom continuum to broaden, and the
middle continuum to broaden around what would otherwise be the nodes. The
plots in the right-most column of Fig. <ref> include the effects
of all terms in ℋ_1 and ℋ_2, but not ℋ_3
since it is not possible to include the effects of this last term on the
continua in this framework.
|
http://arxiv.org/abs/2306.10453v2
|
20230618015859
|
Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking
|
[
"Juanhui Li",
"Harry Shomer",
"Haitao Mao",
"Shenglai Zeng",
"Yao Ma",
"Neil Shah",
"Jiliang Tang",
"Dawei Yin"
] |
cs.LG
|
[
"cs.LG",
"cs.SI"
] |
Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking
Juanhui Li, Harry Shomer, Haitao Mao, Shenglai Zeng, Yao Ma, Neil Shah, Jiliang Tang, Dawei Yin
==============================================================
Link prediction attempts to predict whether an unseen edge exists based on only a portion of edges of a graph. A flurry of methods have been introduced in recent years that attempt to make use of graph neural networks (GNNs) for this task. Furthermore, new and diverse datasets have also been created to better evaluate the effectiveness of these new models. However, multiple pitfalls currently exist that hinder our ability to properly evaluate these new methods. These pitfalls mainly include: (1) Lower than actual performance on multiple baselines, (2) A lack of a unified data split and evaluation metric on some datasets, and (3) An unrealistic evaluation setting that uses easy negative samples.
To overcome these challenges, we first conduct a fair comparison across prominent methods and datasets, utilizing the same dataset and hyperparameter search settings. We then create a more practical evaluation setting based on a Heuristic Related Sampling Technique (HeaRT), which samples hard negative samples via multiple heuristics. The new evaluation setting helps promote new challenges and opportunities in link prediction by aligning the evaluation with real-world situations. Our implementation and data are available at <https://github.com/Juanhui28/HeaRT>.
§ INTRODUCTION
The task of link prediction is to determine the existence of an edge between two unconnected nodes in a graph. Existing link prediction algorithms attempt to estimate the proximity of different pairs of nodes in the graph, where node pairs with a higher proximity are more likely to interact <cit.>. Link prediction is applied in many different domains including social networks <cit.>, biological networks <cit.>, and recommender systems <cit.>.
Graph neural networks (GNNs) <cit.> have gained prominence in recent years with many new frameworks being proposed for a variety of different tasks. Corresponding to the rise in popularity of GNNs, there has been a number of studies that attempt to critically examine the effectiveness of different GNNs on various tasks. This can be seen for the task of node classification <cit.>, graph classification <cit.>, knowledge graph completion (KGC) <cit.>, and others <cit.>.
However, despite a number of new GNN-based methods being proposed <cit.> for link prediction, there is currently no work that attempts to carefully examine recent advances in link prediction methods. Upon examination, we find that there are several pitfalls in regard to model evaluation that impede our ability to properly evaluate current methods. This includes:
* Lower than Actual Performance. We observe that the current performance of multiple models is underreported. For some methods, such as standard GNNs, this is due to poor hyperparameter tuning. Once properly tuned, they can even achieve the best overall performance on some metrics (see SAGE <cit.> in Table <ref>). Furthermore, for other methods like Neo-GNN <cit.> we can achieve around an 8.5 point increase in Hits@50 on ogbl-collab relative to the originally reported performance. This results in Neo-GNN achieving the best overall performance on ogbl-collab in our study (see Table <ref>).
Such problems obscure the true performance of different models, making it difficult to draw reliable conclusions from the current results.
* Lack of Unified Settings on the Planetoid Datasets. For the Cora, Citeseer, and Pubmed datasets <cit.>, there exists no unified data split and evaluation metrics utilized. For the data split, some works <cit.> use a single fixed train/valid/test split with percentages 85/5/10%. More recent works <cit.> utilize 10 random splits of size 70/10/20%. In terms of the evaluation metrics, some studies <cit.> utilize ranking-based metrics such as MRR or Hits@K while others <cit.> report the area under the curve (AUC). This is despite multiple studies that argue that AUC is a poor metric for evaluating link prediction <cit.>. This lack of a unified setting hampers our ability to determine which methods perform best on these datasets.
* Unrealistic Evaluation Setting.
During the evaluation, we are given a set of true samples (i.e., positive samples) and a set of false samples (i.e., negative samples). We are tasked with learning a classifier f that assigns a higher probability to the positive samples than the negatives. The current evaluation setting uses the same set of randomly selected negative samples for each positive sample. We identify two potential problems with the current evaluation procedure. (1) It is not aligned with real-world settings. In a real-world scenario, we typically care about predicting links for a specific node. For example, in friend recommendations, we aim to recommend friends for a specific user u. To evaluate such models for u, we strive to rank node pairs including u. However, this does not hold in the current setting as u is not included in most of the negative samples.
(2) The current evaluation setting makes the task too easy. As such, it may not reflect the model performance in real-world applications. This is because the nodes in a randomly selected negative “node pair” are likely to be unrelated to each other. As shown in Figure <ref>, almost all negative samples in the test data have no common neighbors, a typically strong heuristic, making them trivial to classify.
To account for these issues, we propose to first conduct a fair and reproducible evaluation among current link prediction methods under the existing evaluation setting. We then design a new evaluation strategy that is more aligned with a real-world setting and detail our results. Our key contributions are summarized below:
* Reproducible and Fair Comparison. We conduct a fair comparison of different models across multiple common datasets. To ensure a fair comparison, we tune all models on the same set of hyperparameters. We further evaluate different models using multiple types of evaluation metrics. For the Planetoid datasets <cit.>, we further utilize a unified data split to facilitate a point of comparison between models. To the best of our knowledge, there are no recent efforts to comprehensively benchmark link prediction methods (several exist for KGC <cit.>). Furthermore, we open-source the implementation in our analysis to enable others in their analyses.
* New Evaluation Setting. We recognize that the current negative sampling strategy used in evaluation is unrealistic and easy. To counter these issues, we first utilize a more realistic setting of tailoring the negatives to each positive sample. This is achieved by restricting them to be corruptions of the positive sample (i.e., containing one of its two nodes). Given the prohibitive cost of utilizing all possible corruptions, we opt instead to only rank against K negatives for each positive sample. In order to choose the most relevant and difficult corruptions, we propose a Heuristic Related Sampling Technique (HeaRT),
which selects them based on a combination of multiple heuristics. This creates a more challenging task than the previous evaluation strategy and allows us to better assess the capabilities of current methods.
The rest of the paper is structured as follows. In Section <ref> we introduce the models, datasets, and settings used for conducting a fair comparison between methods. In Section <ref> we show the results of the fair comparison under the existing evaluation setting and discuss our main observations. Lastly, in Section <ref> we introduce our new evaluation setting. We then detail and discuss the performance of different methods using our new setting.
§ PRELIMINARIES
§.§ Link Prediction Methods
Link prediction aims to predict the likelihood of a connection between two nodes given the existing graph. Conventional methods <cit.> often exploit hand-craft graph structural properties (i.e., heuristics) between node pairs. GNNs attempt to learn the structural information to facilitate link prediction <cit.>. Given the strong performance of pairwise-based heuristics <cit.>, some recent works leverage both GNNs and pairwise information, demonstrating strong performance.
For our study, we consider both traditional and state-of-the-art GNN-based models. They can be roughly organized into four categories. 1) Heuristic methods: Common Neighbor (CN) <cit.>, Adamic Adar
(AA) <cit.>, Resource Allocation (RA) <cit.>, Shortest Path <cit.>, and Katz <cit.>. These methods define a score to indicate the link existence based on the graph structure. Among them, CN, AA, and RA are based on the common neighbors, while Shortest Path and Katz are based on the path information. 2) Embedding methods: Matrix factorization (MF) <cit.>, Multilayer Perceptron (MLP) and Node2Vec <cit.>. These methods are trained to learn low-dimensional node embeddings that are used to predict the likelihood of node pairs existing. 3) GNN methods: GCN <cit.>, GAT <cit.>, SAGE <cit.>, and GAE <cit.>. These methods attempt to integrate the multi-hop graph structure based on the message passing paradigm. 4) GNN + Pairwise Information methods: Standard GNN methods, while powerful, are not able to capture link-specific information <cit.>. As such, works have been proposed that augment GNN methods by including additional information to better capture the relation between the nodes in the link we are predicting.
SEAL <cit.>, BUDDY <cit.>, and NBFNet <cit.> leverage the subgraph features. Neo-GNN <cit.>, NCN <cit.>, and NCNC <cit.> are based on common neighbor information. Lastly, PEG <cit.> utilizes the positional encoding derived from the graph structure.
§.§ Datasets and Experimental Settings
In this section we summarize the datasets and evaluation and training settings. We note that the settings depend on the specific dataset.
More details are given in Appendix <ref>.
Datasets. We limit our experiments to homogeneous graphs, which are the most commonly used datasets for link prediction. This includes the small-scale datasets, i.e., Cora, Citeseer, Pubmed <cit.>, and large-scale datasets in the OGB benchmark <cit.>, i.e., ogbl-collab, ogbl-ddi, ogbl-ppa, and ogbl-citation2. We summarize the statistics and split ratio of each dataset in Appendix <ref>.
Metrics. For evaluation, we utilize both the area under the curve (AUC) and ranking-based metrics, i.e., mean reciprocal rank (MRR) and Hits@K. For Cora, Citeseer, and Pubmed we adopt K ∈{1, 3, 10, 100}. We note that K=100 is reported in some recent works <cit.>. However, due to the small number of negatives used during evaluation (e.g., ≈ 500 for Cora and Citeseer), K=100 is likely not informative. For the OGB datasets, we adopt K ∈{20, 50, 100} to stay consistent with the original study <cit.>.
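As a point of reference, the sketch below shows how these ranking metrics can be computed from raw scores (illustrative only; the function names are ours and this is not the evaluation code used in the paper):

import numpy as np

def rank_of_positive(pos_score, neg_scores):
    # 1-indexed rank of the positive edge among its negatives,
    # with ties split evenly.
    higher = np.sum(neg_scores > pos_score)
    ties = np.sum(neg_scores == pos_score)
    return int(higher + ties / 2) + 1

def mrr_and_hits(pos_scores, neg_scores_per_pos, ks=(1, 3, 10, 100)):
    # pos_scores: (P,) array; neg_scores_per_pos: (P, K) array.
    ranks = np.array([rank_of_positive(p, negs)
                      for p, negs in zip(pos_scores, neg_scores_per_pos)])
    mrr = float(np.mean(1.0 / ranks))
    hits = {k: float(np.mean(ranks <= k)) for k in ks}
    return mrr, hits

# Toy example: two positives, each ranked against the same five negatives.
mrr, hits = mrr_and_hits(np.array([0.9, 0.2]),
                         np.array([[0.1, 0.3, 0.5, 0.05, 0.6],
                                   [0.1, 0.3, 0.5, 0.05, 0.6]]),
                         ks=(1, 3))
print(mrr, hits)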
Hyperparameter Ranges. We conduct a hyperparameter search across a comprehensive range of values. For Cora, Citeseer, and Pubmed this includes: learning rate (0.01, 0.001), dropout (0.1, 0.3, 0.5), weight decay (1e-4, 1e-7, 0), number of model layers (1, 2, 3), number of prediction layers (1, 2, 3), and the embedding size (128, 256). Due to the large size of the OGB datasets, it's infeasible to tune over such a large range. Therefore, following the most commonly used settings among published hyperparameters, we fix the weight decay to 0, the number of model and prediction layers to be 3, and the embedding size to be 256. The best hyperparameters are chosen based on the validation performance.
We note that several exceptions exist to these ranges when they result in significant performance degradations (see Appendix <ref> for more details). We further follow the existing setting and only sample one negative sample per positive sample during training.
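For convenience, the search space just described can be summarized as a simple grid (the dictionary keys below are illustrative and do not correspond to a specific code base):

# Hyperparameter grid for Cora, Citeseer, and Pubmed.
PLANETOID_GRID = {
    "lr": [0.01, 0.001],
    "dropout": [0.1, 0.3, 0.5],
    "weight_decay": [1e-4, 1e-7, 0.0],
    "num_model_layers": [1, 2, 3],
    "num_prediction_layers": [1, 2, 3],
    "embedding_dim": [128, 256],
}

# Fixed choices for the OGB datasets (only the remaining
# hyperparameters are tuned).
OGB_FIXED = {"weight_decay": 0.0, "num_model_layers": 3,
             "num_prediction_layers": 3, "embedding_dim": 256}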
Existing Evaluation Settings. In the evaluation stage, the same set of randomly sampled negatives are used for all positive samples. We note that one exception is ogbl-citation2, where they randomly sample 1000 negative samples per positive sample. For Cora, Citeseer, and Pubmed the number of negative samples is equal to the number of positive samples. For the OGB datasets, we use the existing fixed set of randomly chosen negatives found in <cit.>.
Furthermore, for ogbl-collab we follow the existing protocol <cit.> and include the validation edges in the training graph during testing. This setting is adopted on ogbl-collab under both the existing and new evaluation setting.
§ FAIR COMPARISON UNDER THE EXISTING SETTING
In this section, we conduct a fair comparison among link prediction methods. This comparison is spurred by the multiple pitfalls noted in Section <ref>, which include lower-than-actual model performance, multiple data splits, and inconsistent evaluation metrics.
These pitfalls hinder our ability to fairly compare different methods.
To rectify this, we conduct a fair comparison adhering to the settings listed in section <ref>.
The results are split into two tables. The results for Cora, Citeseer, and Pubmed are shown in Table <ref> and OGB in Table <ref>. For simplicity, we only present the AUC and MRR for Cora, Citeseer, and Pubmed. For OGB datasets, we include AUC and the original ranking metric reported in <cit.> to allow a convenient comparison (Hits@20 for ogbl-ddi, Hits@50 for ogbl-collab, Hits@100 for ogbl-ppa, and MRR for ogbl-citation2). We use “>24h" to denote methods that require more than 24 hours for either training one epoch or evaluation. OOM indicates that the algorithm requires over 50Gb of GPU memory.
Additional results in terms of other metrics are presented in Appendix <ref>. We have several noteworthy observations concerning the methods, the datasets, the evaluation settings, and the overall results. We highlight the main observations below.
Observation 1: Better than Reported Performance. We find that for some models we are able to achieve superior performance compared to what is reported by recent studies. For instance, in our study Neo-GNN <cit.> achieves the best overall test performance on ogbl-collab with a Hits@50 of 66.13. In contrast, the reported performance in <cit.> is only 57.52, which would rank seventh under our current setting. This is because the original study <cit.> does not follow the standard setting of including validation edges in the graph during testing. This setting, as noted in Section <ref>, is used by all other methods on ogbl-collab. However it was omitted by <cit.>, resulting in lower reported performance.
Furthermore, with proper tuning, conventional baselines like GCN <cit.> and GAE <cit.> generally exhibit enhanced performance relative to what was originally reported across all datasets.
For example, we find that GAE can achieve the second best MRR on Citeseer and GCN the third best Hits@20 on ogbl-ddi.
A comparison of the reported results and ours are shown in Table <ref>. We note that we report AUC for Cora, Citeseer, Pubmed as it was used in the original study.
These observations suggest that the performance of various methods are better than what was reported in their initial publications. However, many studies <cit.> only report the original performance for comparison, which has the potential to lead to inaccurate conclusions.
Observation 2: Divergence from Reported Results on ogbl-ddi. We observe that our results in Table <ref> for ogbl-ddi differ from the reported results. Outside of GCN, which reports better performance, most other GNN-based methods report a lower-than-reported performance. For example, for BUDDY we only achieve a Hits@20 of 29.60 vs. the reported 78.51 (see Appendix <ref> for a comprehensive comparison among methods). We find that the reason for this difference depends on the method. BUDDY <cit.> reported [https://github.com/melifluos/subgraph-sketching] using 6 negatives per positive sample during training, leading to an increase in performance. Neo-GNN <cit.> first pretrains the GNN under the link prediction task, and then uses the pretrained model as the initialization for Neo-GNN [https://github.com/seongjunyun/Neo-GNNs]. For a fair comparison among methods, we only use 1 negative per positive sample in training and we don't apply the pretraining.
For other methods, we find that a weak relationship between the validation and test performance complicates the tuning process, making it difficult to find the optimal hyperparameters. Please see Appendix <ref> for a more in-depth study and discussion.
Observation 3: High Model Standard Deviation. The results in Tables <ref> and <ref> present the mean performance and standard deviation when training over 10 seeds. Generally, we find that for multiple datasets the standard deviation of the ranking metrics is often high for most models. For example, the standard deviation for MRR can be as high as 8.82, 8.96, or 7.75 for Cora, Citeseer, and Pubmed, respectively. Furthermore, on ogbl-ddi the standard deviation of Hits@20 reaches as high as 10.47 and 15.56.
A high variance indicates unstable model performance. This makes it difficult to compare results between methods as the true performance lies in a larger range. This further complicates replicating model performance, as even large differences with the reported results may still fall within variance (see observation 2).
Later in Section <ref> we find that our new evaluation can reduce the model variance for all datasets (see Table <ref>). This suggests that the high variance is related to the current evaluation procedure.
Observation 4: Inconsistency of AUC vs. Ranking-Based Metrics.
The AUC score is widely adopted to evaluate recent advanced link prediction methods <cit.>. However, from our results in Tables <ref> and <ref> we observe that there exists a disparity between AUC and ranking-based metrics.
In some cases, the AUC score can be high when the ranking metric is very low or even 0. For example, the Shortest Path heuristic records a Hits@K of 0 on ogbl-collab and ogbl-ppa. However, the AUC score on both datasets is >95%. Furthermore, even though RA records the third and fifth best performance on ogbl-ppa and ogbl-collab, respectively, it has a lower AUC score than Shortest Path on both. Previous works <cit.> argued that AUC is not a proper metric for link prediction. This is due to the inapplicability of AUC for highly imbalanced problems <cit.>.
§ NEW EVALUATION SETTING
In this section, we introduce a new setting for evaluating link prediction methods.
We first discuss the unrealistic nature of the current evaluation setting in Section <ref>. Based on this, we present our new evaluation setting in Section <ref>, which aims to align better with real-world scenarios. Lastly, in Section <ref>, we present and discuss the results based on our new evaluation setting.
§.§ Issues with the Existing Evaluation Setting
The existing evaluation procedure for link prediction is to rank a positive sample against a set of K randomly selected negative samples. The same set of K negatives are used for all positive samples (with the exception of ogbl-citation2 which utilizes 1000 per positive sample). We demonstrate that there are multiple issues with this setting, making it difficult to properly evaluate the effectiveness of current models.
Issue 1: Non-Personalized Negative Samples. The existing evaluation setting uses the same set of negative samples for all positive samples (outside of ogbl-citation2). This strategy, referred to as global negative sampling <cit.>, is not a commonly sought objective. Rather, we are often more interested in predicting links that will occur for a specific node. Take, for example, a social network that connects users who are friends. In this scenario, we may be interested in recommending new friends to a user u. This requires learning a classifier f that assigns a probability to a link existing. When evaluating this task, we want to rank links where u connects to an existing friend above those where they don't. For example, if u is friends with a but not b, we hope that f(u, a) > f(u, b). However, the existing evaluation setting doesn't explicitly test for this. Rather it compares a true sample (u, a) with a potentially unrelated negative sample, e.g., (c, d). This is not aligned with the real-world usage of link prediction on such graphs.
Issue 2: Easy Negative Samples. The existing evaluation setting randomly selects negative samples to use. However given the large size of most graphs (see Table <ref> in Appendix <ref>), randomly sampled negatives are likely to choose two nodes that bear no relationship to each other. Such node pairs are trivial to classify. We demonstrate this by plotting the distribution of common neighbors (CN), a strong heuristic, for all positive and negative test samples in Figure <ref>. Almost all the negative samples contain no CNs, making them easy to classify. We further show that the same problem afflicts even the smaller datasets in Figure <ref> in Appendix <ref>.
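As a rough illustration of how such a check can be performed (this is not the script used to produce Figure <ref>; the function name and toy graph are ours), one can measure the fraction of candidate negatives with zero common neighbors:

import networkx as nx
import numpy as np

def frac_zero_common_neighbors(G, neg_edges):
    # Fraction of candidate (negative) node pairs with no common
    # neighbors -- a rough proxy for how easy they are to reject.
    cn_counts = np.array([sum(1 for _ in nx.common_neighbors(G, u, v))
                          for u, v in neg_edges])
    return float(np.mean(cn_counts == 0))

# Toy example on a small graph with two non-edges as "negatives".
G = nx.karate_club_graph()
print(frac_zero_common_neighbors(G, [(0, 9), (5, 15)]))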
These observations suggest that a more realistic evaluation strategy is desired. At the core of this challenge is which negative samples to use during evaluation. We discuss our design for solving this in the next subsection.
§.§ Heuristic Related Sampling Technique (HeaRT)
In this subsection, we introduce a new strategy for evaluating link prediction methods. To address the concerns outlined in Section <ref>, we design a new method for sampling negatives during evaluation. Our strategy, HeaRT, solves these challenges by: (a) personalizing the negatives to each sample and (b) using heuristics to select hard negative samples. This allows for the negative samples to be directly related to each positive sample while also being non-trivial. We further discuss how to ensure that the negative samples are both personalized and non-trivial for a specific positive sample.
From our discussion in Section <ref>, we are motivated in personalizing the negatives to each positive sample. Since the positive samples in the current datasets are node pairs, we seek to personalize the negatives to both nodes in the positive sample. Extending our example in Section <ref>, this is analogous to restricting the negatives to contain one of the two users from the original friendship pair. As such, for a positive sample (u, a), the negative samples will belong to the set:
S(u, a) = {(u', a) | u' ∈𝒱}∪{(u, a') | a' ∈𝒱},
where 𝒱 is the set of nodes. This is similar to the setting used for knowledge graph completion (KGC) <cit.> which utilizes all such samples for evaluation. However, one drawback of evaluating each positive sample against the entire set of possible corruptions is the high computational cost. To mitigate this issue we consider only utilizing a small subset of S(u, a) during evaluation.
The key challenge is how to generate a subset of S(u, a). If we randomly sample from S(u, a), we risk only utilizing easy negative samples. This mirrors one of the issues of the existing evaluation setting (see Issue 2 in Section <ref>): by randomly selecting negatives, it unknowingly produces negative samples that are too easy. We address this by selecting the negative samples via a combination of multiple heuristics. Since heuristics typically correlate well with performance, we ensure that the negative samples will be non-trivial to classify. This is similar to the concept of candidate generation <cit.>, which only ranks a subset of candidates that are most likely to be true.
An overview of the generation process is given in Figure <ref>. For each positive sample, we generate K negative samples. To allow personalization to both nodes in the positive sample equally, we sample K/2 negatives with each node. For the heuristics, we consider RA <cit.>, PPR <cit.>, and feature similarity. A more detailed discussion on the negative sample generation is given in Appendix <ref>.
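As a rough sketch of the idea (not the authors' implementation, which combines RA with PPR and feature similarity and applies additional filtering; the function names and toy graph below are ours), hard corruptions of a single positive edge can be selected with the RA heuristic alone:

import networkx as nx
import numpy as np

def resource_allocation(G, u, v):
    # RA heuristic: sum of 1/deg(w) over common neighbors w of (u, v).
    return sum(1.0 / G.degree(w) for w in nx.common_neighbors(G, u, v))

def heart_style_negatives(G, pos_edge, k=500):
    # Pick k corruptions of pos_edge = (u, a): k/2 keeping u fixed and
    # k/2 keeping a fixed, retaining the candidates with the highest
    # heuristic score (the hardest ones).
    u, a = pos_edge
    negs = []
    for fixed in (u, a):
        cands = [w for w in G.nodes
                 if w not in (u, a) and not G.has_edge(fixed, w)]
        scores = np.array([resource_allocation(G, fixed, w) for w in cands])
        top = np.argsort(-scores)[: k // 2]
        negs += [(fixed, cands[i]) for i in top]
    return negs

# Toy example: 10 hard negatives for one positive edge.
G = nx.karate_club_graph()
print(heart_style_negatives(G, (0, 1), k=10))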
§.§ Results and Discussion
In this subsection we present our results when utilizing HeaRT. We follow the parameter ranges introduced in Section <ref>. For all datasets we utilize K=500 negative samples per positive sample during evaluation. Furthermore for ogbl-ppa we only use a small subset of the validation and test positive samples (100K each) for evaluation. This is because the large size of the validation and test sets (see Table <ref> in Appendix <ref>) makes HeaRT prohibitively expensive.
The results are shown in Table <ref> (Cora, Citeseer, Pubmed) and Table <ref> (OGB). For simplicity, we only include the MRR and Hits@10 for Cora, Citeseer, Pubmed, and the MRR and Hits@20 for OGB. Additional results for other metrics can be found in Appendix <ref>.
We highlight the main observations below.
Observation 1: Better Performance of Simple Models. We find that under HeaRT, “simple" baseline models (i.e., heuristic, embedding, and GNN methods) show a greater propensity to outperform their counterparts via ranking metrics than under the existing setting. Specifically, we focus on MRR in Table <ref>, <ref>, and <ref>, and the corresponding ranking-based metrics in Table <ref>.
Under the existing setting, such methods only rank in the top three for any dataset a total of 5 times. However, under HeaRT this occurs 10 times. Furthermore, under the existing setting only 1 “simple" method ranks best overall while under HeaRT there are 4. This suggests that recent advanced methods may have benefited from the easier negative samples in the existing setting.
Another interesting observation is that on ogbl-collab, heuristic methods are able to outperform more complicated models by a large margin. Specifically, we find that Katz is the best ranked method, Shortest Path the second, and RA the fourth. Furthermore, the MRR gap between the second ranked method (Shortest Path) and the third (BUDDY) is very large at 14.29 points.
This suggests that methods that utilize GNNs methods may not be optimal for certain graph topologies, necessitating further study.
Mean model standard deviation for the existing setting and HeaRT. We utilize Hits@20 for ogbl-ddi, Hits@50 for ogbl-collab, Hits@100 for ogbl-ppa, and MRR otherwise.
Dataset Existing HeaRT % Change
Cora 5.19 0.79 -85%
Citeseer 5.94 0.88 -85%
Pubmed 4.14 0.35 -92%
ogbl-collab 1.49 0.96 -36%
ogbl-ppa 2.13 0.36 -83%
ogbl-ddi 7.34 3.81 -48%
ogbl-citation2 1.39 0.59 -58%
Observation 2: Lower Model Standard Deviation. We observed earlier that, under the existing evaluation setting, the model variance across seeds was high (see observation 3 in Section <ref>). This complicates model comparison as the model performance is unreliable. Interestingly, we find that HeaRT is able to dramatically reduce the variance for all datasets. We demonstrate this by first calculating the mean standard deviation across all models on each individual dataset. This was done for both evaluation settings with the results compared. As demonstrated in Table <ref>, the mean standard deviation decreases for all datasets. This is especially true for Cora, Citeseer, and Pubmed, which each decrease by over 85%.
Such a large decrease in standard deviation is noteworthy as it allows for a more trustworthy and reliable comparison between methods.
We posit that this observation is caused by a stronger alignment between the positive and negative samples under our new evaluation setting. Under the existing evaluation setting, the same set of negative samples is used for all positive samples. One consequence of this is that a single positive sample may bear little to no relationship to the negative samples (see Section <ref> for more discussion). However, under our new evaluation setting, the negatives for a positive sample are a subset of the corruptions of that sample. This allows for a more natural comparison via ranking-based metrics as the samples are more related and can be more easily compared.
Observation 3: Lower Model Performance.
We observe that the majority of datasets exhibit a significantly reduced performance in comparison to the existing setting. For example, under the existing setting, models typically achieve an MRR of around 30, 50, and 30 on Cora, Citeseer, and Pubmed (Table <ref>), respectively. However, under HeaRT the MRR for those datasets is typically around 20, 25, and 10 (Table <ref>). Furthermore, under the existing setting many models consistently achieve a Hits@50 of around 60 on ogbl-collab (Table <ref>). Under HeaRT, the mean Hits@50 drops to 45 (Table <ref> in Appendix <ref>). For ogbl-citation2, the MRR of the best performing model falls from a shade under 90 on the existing setting to slightly over 20 on HeaRT. Lastly, we note that the performance on ogbl-ppa actually increases. This is because we only utilize a small subset of the total test set when evaluating on HeaRT, nullifying any comparison between the two settings.
These outcomes are observed despite HeaRT using far fewer negative samples than the original setting. This suggests that the negative samples generated by HeaRT are substantially more challenging than those used in the existing setting. This underscores the need to develop more advanced methodologies that can tackle harder negative samples like those in HeaRT.
§ CONCLUSION
In this work we have revealed several pitfalls that currently befall recent work in link prediction.
To overcome these pitfalls, we first establish a benchmark that facilitates a fair and consistent evaluation across a diverse set of models and datasets.
By doing so, we are able to make several illuminating observations about the performance and characteristics of various models. Furthermore, based on several limitations we observed in the existing evaluation procedure,
we introduce a more practical setting called HeaRT (Heuristic Related Sampling Technique). HeaRT incorporates a more real-world evaluation setting, resulting in a better comparison among methods. By introducing a more rigorous and realistic assessment, HeaRT could guide the field towards more effective models, thereby advancing the state of the art in link prediction.
unsrt
§ CHECKLIST
* For all authors...
* Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
* Did you describe the limitations of your work?
See Appendix <ref>.
* Did you discuss any potential negative societal impacts of your work?
See Appendix <ref>.
* Have you read the ethics review guidelines and ensured that your paper conforms to them?
* If you are including theoretical results...
* Did you state the full set of assumptions of all theoretical results?
* Did you include complete proofs of all theoretical results?
* If you ran experiments (e.g. for benchmarks)...
* Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
See <https://github.com/Juanhui28/HeaRT>.
* Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
Yes. See Section <ref> and Appendix <ref>.
* Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
* Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
See Appendix <ref>.
* If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
* If your work uses existing assets, did you cite the creators?
* Did you mention the license of the assets?
See Appendix <ref>.
* Did you include any new assets either in the supplemental material or as a URL?
See <https://github.com/Juanhui28/HeaRT>.
* Did you discuss whether and how consent was obtained from people whose data you're using/curating?
* Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
* If you used crowdsourcing or conducted research with human subjects...
* Did you include the full text of instructions given to participants and screenshots, if applicable?
* Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
* Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
|
http://arxiv.org/abs/2306.02805v1
|
20230605115747
|
Fractional Crank-Nicolson Galerkin finite element analysis for coupled time-fractional nonlocal parabolic problem
|
[
"Pari J. Kundaliya"
] |
math.NA
|
[
"math.NA",
"cs.NA"
] |
Fractional Crank-Nicolson Galerkin finite element analysis for coupled time-fractional nonlocal parabolic problem
[Department of Mathematics, Institute of Infrastructure, Technology, Research and Management, Ahmedabad, Gujarat, India, ([email protected])]
In this article we propose a scheme for solving the coupled time-fractional nonlocal diffusion problem. The scheme consists of the fractional Crank-Nicolson method combined with the Galerkin finite element method (FEM) and Newton's method. We derive a priori error estimates for the fully-discrete solutions in the L^2 and H^1_0 norms. Numerical results based on the usual finite element method are provided to confirm the theoretical estimates.
Keywords: Nonlocal problem, Uniform mesh, Fractional Crank-Nicolson method, Error estimate
AMS(MOS): 65M12, 65M60, 35R11
§ INTRODUCTION
Many real-world problems are modeled more accurately by fractional differential equations than by integer-order differential equations <cit.>. Fractional order partial differential equations (PDEs) have numerous applications in the fields of science and engineering, such as physics, chemistry, biology and finance <cit.>. In the literature, several numerical methods are available for the time discretization of time-fractional PDEs. Mainly, these methods can be classified as convolution quadrature (CQ) and L1 type schemes <cit.>. There are many real-life problems which have more than one unknown function. As a motivation, the authors in <cit.> considered a mathematical model related to two species, rabbits and foxes, on an island. In this model, the rate of change of the population of one species depends on the rate of change of the population of the other species. In this article, we consider the following coupled time-fractional nonlocal diffusion equation with unknowns u and v.
Find u and v such that
^CD^α_tu(x,t)-M_1(l(u),l(v))Δu(x,t) = f_1(u,v) Ω×(0,T],
^CD^α_tv(x,t)-M_2(l(u),l(v))Δv(x,t) = f_2(u,v) Ω×(0,T],
u(x,t) = v(x,t) = 0 ∂Ω×(0,T],
u(x,0) = v(x,0) = 0 Ω,
where Ω is a bounded domain in ℝ^d (d=1,2,3) with smooth boundary ∂Ω, f_1(u,v) and f_2(u,v) represent the forcing terms, l(u) = ∫_Ω u dx, l(v) = ∫_Ω v dx, and ^CD^α_t represents the α^th order Caputo fractional derivative <cit.>. The above problem (<ref>)-(<ref>) can be seen as a generalization of the integer-order parabolic problem given in <cit.> to fractional order. Problems of this kind have applications in biology, where u, v can represent the densities of two populations which interact through the nonlocal functions M_1 and M_2 <cit.>. In recent years, many researchers have paid attention to the study of nonlocal PDEs <cit.>. In <cit.>, the author solved a coupled nonlocal parabolic problem using the FEM. Also, for time discretization, the Backward Euler scheme <cit.> and the Crank-Nicolson scheme <cit.> are used. In <cit.>, a coupled time-fractional nonlinear diffusion system is solved using a Galerkin finite element scheme along with the fractional Crank-Nicolson method.
In <cit.>, the authors used the FEM with the L1 scheme on a uniform mesh to find the numerical solution of the time-fractional nonlocal PDE
^CD^α_tu(x,t)-∇( a (∫_Ω u dx) ∇ u(x,t) ) = f(x,t) Ω× (0,T],
u(x,t) = 0 ∂Ω× (0,T],
u(x,0) = u_0 Ω.
This scheme gives O(τ^2-α) convergence in time, where τ denotes the time step size. In this article, we use the fractional Crank-Nicolson method on a uniform mesh to discretize the Caputo fractional derivative, which gives O(τ^2) convergence in the temporal direction. This method was first proposed by Dimitrov <cit.> under the compatibility conditions u ∈ C^3[0,T] and u(0)=u_t(0)=u_tt(0)=0. Here we also assume the same conditions for the unknown functions u and v. To discretize the space variable, we use the finite element method with linear basis functions. To handle the nonlocal term and the nonlinearity, Newton's method is used.
Throughout the paper, C>0 denotes the generic constant independent of mesh parameters h and τ. Let (·,·) denotes the inner product and · denotes the norm on space L^2(Ω). For m ∈ℕ, H^m(Ω) represents the standard Sobolev space with the norm ·_m and H^1_0(Ω) := { w ∈ H^1(Ω) : w = 0 ∂Ω}.
The rest of the paper is organized as follows: In section 2, we write the weak formulation and the fully-discrete formulation for the given problem (<ref>). We also derive a numerical scheme to solve (<ref>). The a priori bound and the a priori error estimate for the fully-discrete solutions are derived in sections 3 and 4, respectively. Finally, the numerical experiments given in section 5 validate the theoretical findings.
For existence and uniqueness results as well as numerical analysis, we make the following hypotheses on the given problem data.
* H1: M_i:ℝ^2→ℝ is bounded with 0<m_1≤ M_i(x,y)≤ m_2, x, y ∈ℝ, i=1,2.
* H2: M_i:ℝ^2→ℝ is Lipschitz continuous with Lipschitz constants L'_i,K'_i>0.
|M_i(x_1,y_1)-M_i(x_2,y_2)|≤ L'_i|x_1-x_2|+K'_i|y_1-y_2|, x_1, x_2, y_1, y_2 ∈ℝ, i=1,2.
* H3: f_i:ℝ^2 →ℝ is Lipschitz continuous with Lipschitz constants L_i,K_i>0.
|f_i(x_1,y_1)-f_i(x_2,y_2)|≤ L_i|x_1-x_2|+K_i|y_1-y_2|, x_1, x_2, y_1, y_2 ∈ℝ, i=1,2.
§ FULLY-DISCRETE FORMULATION
In this section, first we write the weak formulation of (<ref>) and then discretize (<ref>) in both space and time variable. The weak formulation of problem (<ref>) is given below.
Find u(·,t), v(·,t) ∈ H^1_0(Ω) for each t ∈(0,T] such that
( ^CD^α_tu, w ) + M_1(l(u),l(v)) (∇ u, ∇ w) = (f_1(u,v), w ), ∀ w ∈ H^1_0(Ω).
( ^CD^α_tv, ω ) + M_2(l(u),l(v)) (∇ v, ∇ω) = (f_2(u,v), ω), ∀ω∈ H^1_0(Ω).
u(x,0) = v(x,0) = 0 Ω.
By using the Faedo-Galerkin method <cit.>, one can prove that, under the hypotheses H1 and H3, the problem (<ref>) has a unique solution {u, v} for 0<α<1.
Now, to discretize equations (<ref>)-(<ref>) in space, we use the finite element method. For that, let Ω_h be a quasi-uniform partition of Ω into disjoint intervals in ℝ^1 or triangles in ℝ^2 with mesh size h. Consider the M-dimensional subspace X_h of H^1_0(Ω) such that
X_h:={w∈ C^0(Ω̅): w_| T_k∈ P_1(T_k), ∀ T_k∈Ω_h w=0 ∂Ω}.
Now, we partition the time interval [0, T] into N sub-intervals using a uniform step size τ = T/N. Let τ_N := { t_n : t_n = n τ, n=0, 1,..., N} be a partition of [0, T]. For each n=1,2,…,N, we denote u(t_n) and v(t_n) by u^n and v^n, respectively. Let U^n ≈ u^n and V^n ≈ v^n. Also, we set U^n, α = (1-α/2) U^n + α/2 U^n-1, V^n, α = (1-α/2) V^n + α/2 V^n-1.
To approximate the Caputo fractional derivative, we use the fractional Crank-Nicolson method.
We know that for any function w, if w(0)=0 then ^CD^α_t_n w = ^RD^α_t_n w, where ^RD^α_t_n w is the α^th order Riemann-Liouville fractional derivative of w. Authors in <cit.> derived the following approximation to ^RD^α_t_n- α/2w which gives O(τ^2) convergence in time.
^CD^α_t_n-α/2 w = ^RD^α_t_n-α/2 w ≈ D^α_τ w^n := τ^-α ∑_j=0^n b_n-j ϕ^j, n=1,2,…,N,
where
b_k= (-1)^k Γ( α +1)/Γ(k+1) Γ( α-k+1).
<cit.> If w∈ C^3[0,T] and w(0)=0, w_t(0)=0, w_tt(0)=0, then the error ^CD^α_t_n-α/2 w - D^α_τ w^n satisfies
^CD^α_t_n-α/2 w - D^α_τ w^n≤ C τ^2.
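As an aside, the weights b_k and the discrete operator D^α_τ are easy to evaluate numerically; the sketch below (the function names are ours and it is not part of the code analyzed here) uses the recurrence b_0 = 1, b_k = b_{k-1}(k-1-α)/k, which follows directly from the definition of b_k:

import numpy as np
from math import gamma

def fcn_weights(alpha, N):
    # b_k = (-1)^k * binom(alpha, k), k = 0..N, via a stable recurrence.
    b = np.empty(N + 1)
    b[0] = 1.0
    for k in range(1, N + 1):
        b[k] = b[k - 1] * (k - 1 - alpha) / k
    return b

def discrete_caputo(w_hist, alpha, tau):
    # D^alpha_tau w^n = tau^(-alpha) * sum_{j=0}^{n} b_{n-j} w^j,
    # given the history w_hist = [w^0, ..., w^n] with w^0 = 0.
    n = len(w_hist) - 1
    b = fcn_weights(alpha, n)
    return tau ** (-alpha) * float(np.dot(b[::-1], w_hist))

# Quick check against w(t) = t^2, whose Caputo derivative is
# 2 t^(2-alpha) / Gamma(3-alpha), evaluated at t_{n-alpha/2}.
alpha, tau, n = 0.5, 1e-3, 1000
t = np.arange(n + 1) * tau
approx = discrete_caputo(t ** 2, alpha, tau)
exact = 2 * (t[-1] - alpha * tau / 2) ** (2 - alpha) / gamma(3 - alpha)
print(approx, exact)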
The fully-discrete scheme for (<ref>) is: For each n=1,2,…,N, find U^n and V^n∈ X_h such that
( ^CD^α_τU^n, w) + M_1 (l(U^n, α),l(V^n, α)) (∇ U^n, α, ∇ w ) = ( f_1 (U^n, α,V^n, α), w ), ∀ w ∈ X_h,
( ^CD^α_τV^n, ω) + M_2(l(U^n, α),l(V^n, α)) ( ∇ V^n, α, ∇ω) = ( f_2 (U^n, α,V^n, α), ω), ∀ω∈ X_h,
U^0 =0, V^0=0.
Using the definition of ^CD^α_τ, (<ref>) can be rewritten as
τ^- α b_0 (U^n, w ) + M_1 (l(U^n, α),l(V^n, α)) (∇ U^n, α, ∇ w ) = ( f_1 (U^n, α,V^n, α), w )
-τ^- α ∑_j=1^n-1b_n-j ( U^j, w ),
τ^- α b_0 ( V^n, ω ) + M_2 (l(U^n, α),l(V^n, α)) (∇ V^n, α, ∇ω ) =( f_2 (U^n, α, V^n, α), ω)
-τ^- α ∑_j=1^n-1b_n-j( V^j,ω).
Let {ψ_i(x) }_1≤ i≤ M be the M dimensional basis of X_h associated with nodes of Ω_h. Therefore, for U^n, V^n ∈ X_h, we can find some β_i^n, γ_i^n ∈ℝ such that
U^n = ∑_i=1^Mβ_i^n ψ_i, V^n = ∑_i=1^Mγ_i^n ψ_i .
Define β^n := [β_1^n, β_2^n,…,β_M^n]^' and γ^n := [γ_1^n, γ_2^n,…,γ_M^n]^'.
Now, substituting the values of U^n, V^n from (<ref>) into (<ref>), we obtain the nonlinear algebraic equations
G_i(β^n, γ^n)= G_i(U^n, V^n)=0, 1≤ i≤ M,
H_i(β^n, γ^n)= H_i(U^n, V^n)=0, 1≤ i≤ M,
where
G_i(U^n, V^n)=τ^- α b_0 ( U^n,ψ_i ) + τ^- α ∑_j=1^n-1 b_n-j ( U^j,ψ_i ) - (f_1 (U^n, α, V^n, α), ψ_i )
+ M_1 (l(U^n, α),l(V^n, α)) ( ∇ U^n, α, ∇ψ_i ),
H_i( U^n, V^n)=τ^- α b_0 ( V^n,ψ_i ) +τ^- α ∑_j=1^n-1 b_n-j ( V^j,ψ_i) - ( f_2(U^n, α,V^n, α), ψ_i)
+ M_2 (l(U^n, α),l(V^n, α)) ( ∇ V^n, α, ∇ψ_i ).
If we apply Newton's method to solve (<ref>), we get the Jacobian matrix J_1 as follows:
J_1 =[[ A B; C D ]] ,
where the elements of the matrices A, B, C and D take the form
(A)_ki = ∂ G_i/∂β_k^n = τ^- α b_0 ( ψ_k,ψ_i ) + (1-α/2) M_1 (l(U^n, α),l(V^n, α)) ( ∇ψ_k , ∇ψ_i )
+ (1-α/2) (∂ M_1 (l(U^n, α),l(V^n, α))/∂ l(U^n)) (∫_Ωψ_k dx ) ( ∇ U^n, α, ∇ψ_i )
- (1-α/2) ( ∂ f_1(U^n,V^n)/∂ U^nψ_k, ψ_i),
(B)_ki= ∂ G_i/∂γ_k^n = (1-α/2) (∂ M_1 (l(U^n, α),l(V^n, α))/∂ l(V^n)) ( ∫_Ωψ_k dx) ( ∇ U^n, α, ∇ψ_i )
- (1-α/2) ( ∂ f_1(U^n,V^n)/∂ V^nψ_k, ψ_i ),
(C)_pi=∂ H_i/∂β_p^n = (1-α/2) ( ∂ M_2 (l(U^n, α),l(V^n, α))/∂ l(U^n)) (∫_Ωψ_p dx ) ( ∇ V^n, α, ∇ψ_i )
- (1-α/2) ( ∂ f_2(U^n,V^n)/∂ U^nψ_p, ψ_i),
(D)_pi = ∂ H_i/∂γ_p^n = τ^- α b_0 ( ψ_p,ψ_i ) + M_2 (l(U^n, α),l(V^n, α)) ( ∇ψ_p, ∇ψ_i )
+ (1-α/2) (∂ M_2 (l(U^n, α),l(V^n, α))/∂ l(V^n)) ( ∫_Ωψ_p dx ) ( ∇ V^n, α, ∇ψ_i )
-(1-α/2) ( ∂ f_2(U^n,V^n)/∂ V^nψ_p, ψ_i ),
where 1≤ i, k, p≤ M. From equations (<ref>)-(<ref>), we can observe that none of the matrices A, B, C, D are sparse and therefore the Jacobian matrix J_1 is not sparse <cit.>. We follow the idea given in <cit.> to overcome the above issue of sparsity. This idea was first proposed in <cit.> to solve a nonlocal elliptic boundary value problem. The modified problem is defined as follows:
Find d_1,d_2 ∈ℝ and U^n,V^n∈ X_h such that
l(U^n, α)-d_1=0, l(V^n, α)-d_2=0,
τ^- α b_0 ( U^n, ψ_i ) + M_1 (d_1,d_2) ( ∇ U^n, α, ∇ψ_i ) - ( f_1 (U^n, α,V^n, α), ψ_i)
+ τ^- α ∑_j=0^n-1 b_n-j ( U^j, ψ_i ) = 0,
τ^- α b_0 ( V^n, ψ_i ) + M_2 (d_1,d_2) ( ∇ V^n, α, ∇ψ_i ) - ( f_2(U^n, α,V^n, α), ψ_i )
+ τ^- α ∑_j=0^n-1 b_n-j ( V^j, ψ_i) =0.
Note that if (d_1,d_2,U^n,V^n) is the solution of the problem (<ref>), then {U^n,V^n} is the solution of the problem (<ref>) and the converse is also true <cit.>.
Now, to solve equation (<ref>) by using Newton's method, we rewrite (<ref>) as follows:
G_i(U^n, V^n, d_1, d_2)=0, 1≤ i≤ M+1,
H_i(U^n, V^n, d_1, d_2)=0, 1≤ i≤ M+1,
where
G_i(U^n, V^n, d_1, d_2) = τ^- α b_0 ( U^n,ψ_i ) + τ^- α ∑_j=1^n-1 b_n-j ( U^j,ψ_i ) - (f_1 (U^n, α, V^n, α), ψ_i )
+ M_1 (d_1, d_2) ( ∇ U^n, α, ∇ψ_i ), 1 ≤ i ≤ M,
G_(M+1)(U^n, V^n, d_1, d_2 ) = l(U^n, α)-d_1=0,
H_i(U^n, V^n, d_1, d_2) = τ^- α b_0 ( V^n,ψ_i ) +τ^- α ∑_j=1^n-1 b_n-j ( V^j,ψ_i) - ( f_2(U^n, α,V^n, α), ψ_i)
+ M_2 (d_1, d_2) ( ∇ V^n, α, ∇ψ_i ), 1 ≤ i ≤ M,
H_(M+1)(U^n, V^n, d_1, d_2 ) = l(V^n, α)-d_2=0.
Now, applying Newton's method to the system of equations (<ref>), we get the following matrix equation:
J
[ β^n; γ^n; d_1; d_2 ]
=
[ A_1 B_1 C_1 D_1; A_2 B_2 C_2 D_2; A_3 B_3 C_3 D_3; A_4 B_4 C_4 D_4 ][ β^n; γ^n; d_1; d_2 ]
=
[ G̅; H̅; G_(M+1); H_(M+1) ],
where J denotes the Jacobian matrix, β^n = [ β^n_1, β^n_2,..., β^n_M]', γ^n = [ γ^n_1, γ^n_2,..., γ^n_M]', G̅ = [G_1, G_2,..., G_M ]', H̅ = [H_1, H_2,..., H_M ]' and entries of matrices A_r, B_r, C_r, D_r, (r = 1,2,3,4) are given below. For 1 ≤ i, k ≤ M,
(A_1)_ik = τ^- α b_0 ( ψ_k,ψ_i ) + (1- α/2) { M_1(d_1,d_2) ( ∇ψ_k , ∇ψ_i ) - ( ∂ f_1(U^n,V^n)/∂ U^nψ_k, ψ_i)},
(B_1)_ik = - (1- α/2) ( ∂ f_1(U^n,V^n)/∂ V^nψ_k, ψ_i ),
(C_1)_i1= ∂ M_1/∂ d_1 (d_1, d_2) (∇ U^n, α, ∇ψ_i), (D_1)_i1 = ∂ M_1/∂ d_2 (d_1, d_2) (∇ U^n, α, ∇ψ_i).
(A_2)_ik = - (1- α/2) ( ∂ f_2(U^n,V^n)/∂ U^nψ_k, ψ_i ),
(B_2)_ik = τ^- α b_0 ( ψ_k,ψ_i ) + (1- α/2) { M_2(d_1,d_2) ( ∇ψ_k , ∇ψ_i ) - ( ∂ f_2(U^n,V^n)/∂ V^nψ_k, ψ_i)},
(C_2)_i1= ∂ M_2/∂ d_1 (d_1, d_2) (∇ V^n, α, ∇ψ_i), (D_2)_i1 = ∂ M_2/∂ d_2 (d_1, d_2) (∇ V^n, α, ∇ψ_i).
(A_3)_1k = (1- α/2) ∫_Ωψ_k dx, (B_3)_1k = 0, (C_3)_11 = -1, (D_3)_11 = 0.
(A_4)_1k = 0, (B_4)_1k = (1- α/2) ∫_Ωψ_k dx, (C_4)_11 = 0, (D_4)_11 = -1.
From equations (<ref>)-(<ref>), it can be seen that A_1, B_1, A_2, B_2 are sparse matrices <cit.> and hence J is a sparse matrix.
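To illustrate why the bordered structure keeps the linear algebra cheap (a sketch with random placeholder blocks, not the actual FEM assembly; the function name is ours), the Jacobian J can be assembled as a sparse block matrix whose only dense parts are the thin border blocks:

import numpy as np
import scipy.sparse as sp

def bordered_jacobian(A1, B1, A2, B2, C1, D1, C2, D2, A3, B3, A4, B4):
    # Assemble the (2M+2) x (2M+2) Jacobian of the modified system:
    # the M x M blocks A1, B1, A2, B2 are sparse, while the border
    # columns C_r, D_r and rows A3, B3, A4, B4 have width/height one,
    # so J stays sparse overall. The scalar entries C_3 = -1, D_3 = 0,
    # C_4 = 0, D_4 = -1 follow the expressions given above.
    return sp.bmat([
        [A1, B1, C1, D1],
        [A2, B2, C2, D2],
        [A3, B3, np.array([[-1.0]]), np.array([[0.0]])],
        [A4, B4, np.array([[0.0]]), np.array([[-1.0]])],
    ], format="csr")

# Toy illustration with random sparse M x M blocks.
M = 100
blk = lambda: sp.random(M, M, density=0.05, format="csr")
col = lambda: np.random.rand(M, 1)
row = lambda: np.random.rand(1, M)
J = bordered_jacobian(blk(), blk(), blk(), blk(),
                      col(), col(), col(), col(),
                      row(), row(), row(), row())
print(J.shape, J.nnz)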
§ A PRIORI BOUND
In this section, we provide an a priori bound for the fully-discrete scheme (<ref>). For this, we need the following lemma.
<cit.> For any function ϕ(·, t) defined on τ_N, one has
1/2 ^CD^α_τϕ^n ^2 ≤ ( ^CD^α_τϕ^n, ϕ^n, α),
where ϕ^n, α := (1- α/2) ϕ^n + α/2ϕ^n-1, for n = 1,2,…,N.
For derivation of a priori bound and error estimate, we also use following Discrete fractional Grönwall type inequality.
<cit.> Suppose the nonnegative sequences {ω^n , ϕ^n | n=0,1,2,…} satisfy
^CD^α_τω^n ≤λ_1 ω^n + λ_2 ω^n-1 + ϕ^n, n≥ 1,
where λ_1 and λ_2 are nonnegative constants. Then there exists a positive constant τ^⋆ such that when τ≤τ^⋆,
ω^n ≤ 2(ω^0 + t_n^α/Γ(1+α)max_0 ≤ j ≤ nϕ^j) E_α(2λ t_n^α), 1≤ n≤ N,
where E_α(z) is the Mittag-Leffler function and λ=λ_1+λ_2/2-2^(1-α).
Let (U^n,V^n) (for 1 ≤ n ≤ N) be the solution of (<ref>). Then there exists a positive constant τ^* (independent of h) such that when τ≤τ^*, U^n, V^n satisfy
U^n+V^n≤ C,
∇ U^n+∇ V^n≤ C,
For ∀ w∈ X_h, from (<ref>), we have
( ^CD^α_τU^n, w )+ M_1(l(U^n, α),l(V^n, α)) (∇ U^n, α, ∇ w) = ( f_1(U^n, α,V^n, α), w ).
Choosing w=U^n, α and then using the Cauchy-Schwarz inequality along with the inequality ab ≤1/2 a^2 + 1/2 b^2, we can obtain
( ^CD^α_τU^n, U^n, α) + M_1 (l(U^n, α),l(V^n, α)) ∇ U^n, α^2≤1/2(f_1(U^n, α,V^n, α)^2+ U^n, α^2).
Since f_1 is Lipschitz continuous, we have
|f_1(U^n, α, V^n, α) - f_1(0, 0)| ≤f_1(U^n, α, V^n, α)-f_1(0, 0)
≤ L_1 U^n, α + K_1 V^n, α.
Therefore,
f_1(U^n, α, V^n, α) ≤ f_1(0,0) + L_1 U^n, α + K_1 V^n, α ≤ C (1+U^n, α+V^n, α).
By using the bound of M_1 and from equation (<ref>), we can write equation (<ref>) as
( ^CD^α_τU^n, U^n, α)+ m_1 ∇ U^n, α^2
≤ C ((1+ U^n, α+ V^n, α)^2+ U^n, α^2 ),
( ^CD^α_τU^n, U^n, α) ≤ C ( 1+ U^n, α^2+ V^n, α^2 ).
An application of Lemma <ref> in (<ref>), gives
^CD^α_τU^n^2≤ C(1+ U^n, α^2+ V^n, α^2).
Similarly, we can get the estimate for V^n as follows:
^CD^α_τV^n^2≤ C(1+ V^n, α^2+ U^n, α^2).
Adding (<ref>) and (<ref>), we get
^CD^α_τ(U^n^2+V^n^2)≤ C(1+ U^n, α^2+ V^n, α^2).
Using Lemma <ref> in (<ref>), we can arrive at
U^n^2+V^n^2≤ C.
For a,b≥ 0, using 1/2(a+b)^2≤ a^2+b^2 in (<ref>), we obtain
U^n+V^n≤ C.
Now, we choose w= ^CD^α_τU^n and then use Cauchy-Schwarz inequality in (<ref>) to obtain
^CD^α_τ U^n^2 + M_1 (l(U^n, α),l(V^n, α)) ( ∇ U^n, α, ∇ ^CD^α_τU^n) ≤ f_1(U^n, α,V^n, α) ^CD^α_τU^n.
We divide both sides of (<ref>) by M_1 (l(U^n, α),l(V^n, α)) and then use bound of M_1 to get
1/m_2 ^CD^α_τU^n^2+(∇ U^n, α,∇ ^CD^α_τU^n) ≤1/m_1( f_1(U^n, α,V^n, α) ^CD^α_τ U^n).
An application of inequality ab ≤ϵ/2 a^2 + 1/2 ϵ b^2 ( ϵ = m_2) gives
1/m_2 ^CD^α_τU^n^2 + (∇ U^n, α, ∇ ^CD^α_τU^n) ≤m_2/2 m_1^2 f_1(U^n, α,V^n, α)^2 + 1/2 m_2 ^CD^α_τ U^n ^2.
Using Lemma <ref> and (<ref>) with Poincaré inequality in (<ref>), we can arrive at
^CD^α_τ∇ U^n^2
≤ C(1+∇ U^n, α^2+∇ V^n, α^2).
Similarly, we get an estimate for V^n as
^CD^α_τ∇ V^n^2 ≤ C (1+∇ V^n, α^2+∇ U^n, α^2).
On adding (<ref>) and (<ref>), we get
^CD^α_τ (∇ U^n^2+∇ V^n^2) ≤ C (1+∇ U^n, α^2+∇ V^n, α^2).
An application of Lemma <ref> to (<ref>) gives required result
∇ U^n+∇ V^n≤ C.
This completes the proof.
Let U^0, U^1,…, U^n-1 and V^0, V^1,…, V^n-1 are given. Then there exists a positive constant τ^* (independent of h) such that when τ≤τ^*, the problem (<ref>) has a unique solution U^n, V^n ∈ X_h, for all 1≤ n≤ N.
The proof follows along similar lines as in <cit.>.
§ A PRIORI ERROR ESTIMATE
In order to derive the convergence estimate, we use the Ritz projection operator <cit.> given by
(∇ϕ,∇ w)=( ∇ R_h ϕ,∇ w),∀ϕ∈ H_0^1(Ω), w ∈ X_h.
In the following theorem, we state an approximation property of the operator R_h which will be useful in the derivation of the a priori error estimate.
<cit.> There exists C>0, independent of h such that
ϕ-R_hϕ_L^2(Ω) + h ∇ (ϕ-R_hϕ)_L^2(Ω) ≤ Ch^2 Δϕ_L^2(Ω), ∀ϕ∈ H^2(Ω) ∩ H^1_0(Ω).
Using the intermediate projection R_h, we split the error into two parts as
u(x,t_n)-U^n=u^n-U^n= (u^n-R_hu^n)+(R_hu^n-U^n)=ζ_1^n+χ_1^n,
v(x,t_n)-V^n=v^n-V^n= (v^n-R_hv^n)+(R_hv^n-V^n)=ζ_2^n+χ_2^n.
Let (U^n,V^n) be the solution of fully-discrete scheme (<ref>). Then under the sufficient regularity assumptions on the solution u, v∈ C^3([0,T];L^2(Ω))∩ C^2([0,T];H^2(Ω) ∩ H^1_0(Ω)), ( ∂^i u/∂ t^i)_t=0 = ( ∂^i v/∂ t^i)_t=0 = 0, for i=0,1,2 of (<ref>)-(<ref>), there exists a positive constant τ^* (independent of h) such that when τ≤τ^*, the following estimates hold.
u^n-U^n+v^n-V^n≤ C (h^2 + τ^2 ),
∇ u^n - ∇ U^n + ∇ v^n - ∇ V^n≤ C (h + τ^2 ),
where n= 1,2,…,N.
For any w ∈ X_h, we have
( ^CD^α_τχ_1^n, w ) + M_1 (l(U^n, α), l(V^n, α)) (∇χ_1^n, α, ∇ w)
= ( ^CD^α_τ R_hu^n, w ) + M_1 (l(U^n, α), l(V^n, α)) (∇ R_hu^n, α, ∇ w) - ( ^CD^α_τ U^n, w )
- M_1 (l(U^n, α), l(V^n, α)) (∇ U^n, α, ∇ w).
= ( ^CD^α_τ R_hu^n, w ) + M_1 (l(U^n, α), l(V^n, α)) (∇ u^n, α, ∇ w) - ( f_1(U^n, α, V^n, α), w )
+ ( f_1(u^n-α/2, v^n-α/2), w ) - ( ^CD^α_t_n-α/2u, w ) - M_1 (l(u^n-α/2), l(v^n-α/2)) (∇ u^n-α/2, ∇ w)
+ M_1 (l(u^n-α/2), l(v^n-α/2)) (∇ u^n, α, ∇ w) - M_1 (l(u^n-α/2), l(v^n-α/2)) (∇ u^n, α, ∇ w)
= ( ^CD^α_τ R_hu^n - ^CD^α_t_n-α/2u , w ) + M_1 (l(u^n-α/2), l(v^n-α/2)) (∇ u^n, α - ∇ u^n-α/2, ∇ w)
+ { M_1 (l(U^n, α), l(V^n, α)) - M_1 (l(u^n-α/2), l(v^n-α/2)) } (∇ u^n, α, ∇ w)
+ ( f_1(u^n-α/2, v^n-α/2) - f_1(U^n, α, V^n, α), w ).
Setting w=χ_1^n, α in (<ref>) to get
( ^CD^α_τχ_1^n, χ_1^n, α) + M_1 (l(U^n, α), l(V^n, α)) (∇χ_1^n, α, ∇χ_1^n, α)
= ( ^CD^α_τ R_hu^n - ^CD^α_t_n-α/2u , χ_1^n, α) + M_1 (l(u^n-α/2), l(v^n-α/2)) (∇ u^n, α - ∇ u^n-α/2, ∇χ_1^n, α)
+ { M_1 (l(U^n, α), l(V^n, α)) - M_1 (l(u^n-α/2), l(v^n-α/2)) } (∇ u^n, α, ∇χ_1^n, α)
+ ( f_1(u^n-α/2, v^n-α/2) - f_1(U^n, α, V^n, α), χ_1^n, α).
Also, from the assumptions on the solutions u and v, we can find R_u, R_v >0 such that
u^n, α_ H^2(Ω)≤ R_u, v^n, α_ H^2(Ω)≤ R_v.
Using the bound of M_1, (<ref>) and Cauchy-Schwarz inequality in (<ref>), we get
( ^CD^α_τχ_1^n, χ_1^n, α) + m_1 ∇χ_1^n, α^2
≤ ^CD^α_τ R_hu^n - ^CD^α_t_n-α/2u χ_1^n, α + m_2 ∇ u^n, α - ∇ u^n-α/2 ∇χ_1^n, α
+ R_u | M_1 (l(U^n, α), l(V^n, α)) - M_1 (l(u^n-α/2), l(v^n-α/2)) | ∇χ_1^n, α
+ f_1(u^n-α/2, v^n-α/2) - f_1(U^n, α, V^n, α) χ_1^n, α.
Applying Poincaré inequality and the inequality ab ≤ϵ/2 a^2 + 1/2 ϵ b^2 ( ϵ = 4/m_1) to (<ref>), we can obtain
( ^CD^α_τχ_1^n, χ_1^n, α) + m_1 ∇χ_1^n, α^2
≤2/m_1 ^CD^α_τ R_hu^n - ^CD^α_t_n-α/2u ^2 + 2/m_1 f_1(u^n-α/2, v^n-α/2) - f_1(U^n, α, V^n, α) ^2
+ 2 R_u^2/m_1 | M_1 (l(U^n, α), l(V^n, α)) - M_1 (l(u^n-α/2), l(v^n-α/2)) |^2
+ 2 m_2^2/m_1 ∇ u^n, α - ∇ u^n-α/2^2 + m_1/2 ∇χ_1^n, α^2.
Lipschitz continuity of functions M_1 and f_1 gives
| M_1 (l(U^n, α), l(V^n, α)) - M_1 (l(u^n-α/2), l(v^n-α/2)) |
≤ L'_1 |l(U^n, α) - l(u^n-α/2) | + K'_1 |l(V^n, α) - l(v^n-α/2) |
≤ L'_1 C U^n, α - u^n-α/2 + K'_1 C V^n, α - v^n-α/2
≤ C {ζ_1^n, α + χ_1^n, α + ζ_2^n, α + χ_2^n, α + u^n, α - u^n-α/2 + v^n, α - v^n-α/2}
and
f_1(u^n-α/2, v^n-α/2) - f_1(U^n, α, V^n, α)
≤ L_1 C U^n, α - u^n-α/2 + K_1 C V^n, α - v^n-α/2
≤ C {ζ_1^n, α + χ_1^n, α + ζ_2^n, α + χ_2^n, α + u^n, α - u^n-α/2 + v^n, α - v^n-α/2}.
Thus, from equations (<ref>)-(<ref>) and Lemma <ref>, we have
^CD^α_τχ_1^n ^2 ≤ C { ^CD^α_τ R_hu^n - ^CD^α_t_n-α/2u ^2 + ∇ u^n, α - ∇ u^n-α/2^2 + ζ_1^n, α^2
+ χ_1^n, α^2 + ζ_2^n, α^2 + χ_2^n, α^2 + u^n, α - u^n-α/2^2 + v^n, α - v^n-α/2^2 },
where C is dependent on m_1, m_2, R_u, L'_1, K'_1, L_1, K_1.
Now, it follows from Taylor's theorem that
u^n, α - u^n-α/2 ≤ α/2( 1- α/2) τ∫_t_n-1^t_n u_tt (η) dη ≤ C τ^2.
Similarly,
v^n, α - v^n-α/2 ≤ C τ^2, ∇ u^n, α - ∇ u^n-α/2 ≤ C τ^2.
From Lemma <ref> and (<ref>), we have
^CD^α_τ R_hu^n - ^CD^α_t_n-α/2u ≤ ^CD^α_τ R_hu^n - ^CD^α_t_n-α/2R_hu + ^CD^α_t_n-α/2R_hu - ^CD^α_t_n-α/2u
≤ C ( τ^2 + h^2 ).
Using (<ref>) and (<ref>)-(<ref>) in (<ref>), we get
^CD^α_τχ_1^n ^2 ≤ C (χ_1^n, α^2 + χ_2^n, α^2 + ( h^2 + τ^2 )^2 ).
Similarly, we can get an estimate for χ_2^n, α as follows:
^CD^α_τχ_2^n ^2 ≤ C (χ_1^n, α^2 + χ_2^n, α^2 + ( h^2 + τ^2 )^2 ).
Therefore, from (<ref>) and (<ref>)
^CD^α_τ( χ_1^n ^2 + χ_2^n ^2 ) ≤ C ( 1 - α/2)^2 (χ_1^n ^2 + χ_2^n ^2 ) + C α^2/4(χ_1^n-1^2 + χ_2^n-1^2 )
+ C ( h^2 + τ^2 )^2.
An application of Lemma <ref> ( λ_1 = C ( 1 - α/2)^2, λ_2 = C α^2/4, ϕ^n = C ( h^2 + τ^2 )^2) gives
χ_1^n ^2 + χ_2^n ^2 ≤ C ( h^2 + τ^2 )^2,
where C is dependent on λ_1, λ_2, α, T.
Therefore,
χ_1^n + χ_2^n ≤ C ( h^2 + τ^2 ).
Finally, applying the triangle inequality, we can get
u^n-U^n + v^n-V^n ≤ ζ_1^n + χ_1^n + ζ_2^n + χ_2^n
≤ C ( h^2 + τ^2 ).
In order to derive an error estimate in H^1_0(Ω) norm, we rewrite (<ref>) as
( ^CD^α_τχ_1^n, w ) + M_1 (l(U^n, α), l(V^n, α)) (∇χ_1^n, α, ∇ w)
= ( ^CD^α_τ R_hu^n - ^CD^α_t_n-α/2u , w ) + M_1 (l(u^n-α/2), l(v^n-α/2)) (Δ u^n, α - Δ u^n-α/2, w)
+ { M_1 (l(U^n, α), l(V^n, α)) - M_1 (l(u^n-α/2), l(v^n-α/2)) } (Δ u^n, α, w)
+ ( f_1(u^n-α/2, v^n-α/2) - f_1(U^n, α, V^n, α), w ).
We choose w = ^CD^α_τχ_1^n in (<ref>) to get
( ^CD^α_τχ_1^n, ^CD^α_τχ_1^n ) + M_1 (l(U^n, α), l(V^n, α)) (∇χ_1^n, α, ∇ ^CD^α_τχ_1^n)
= ( ^CD^α_τ R_hu^n - ^CD^α_t_n-α/2u , ^CD^α_τχ_1^n ) + ( f_1(u^n-α/2, v^n-α/2) - f_1(U^n, α, V^n, α), ^CD^α_τχ_1^n )
+ { M_1 (l(U^n, α), l(V^n, α)) - M_1 (l(u^n-α/2), l(v^n-α/2)) }(Δ u^n, α, ^CD^α_τχ_1^n)
+ M_1 (l(u^n-α/2), l(v^n-α/2)) (Δ u^n, α - Δ u^n-α/2, ^CD^α_τχ_1^n).
Now, dividing (<ref>) by M_1 (l(U^n, α), l(V^n, α)) and then using bound of M_1, Cauchy-Schwarz inequality, we can arrive at
1/m_2 ^CD^α_τχ_1^n ^2 + (∇χ_1^n, α, ∇ ^CD^α_τχ_1^n)
≤1/m_1 ^CD^α_τ R_hu^n - ^CD^α_t_n-α/2u ^CD^α_τχ_1^n + m_2/m_1 Δ u^n, α - Δ u^n-α/2 ^CD^α_τχ_1^n
+ 1/m_1 | M_1 (l(U^n, α), l(V^n, α)) - M_1 (l(u^n-α/2), l(v^n-α/2)) | Δ u^n, α ^CD^α_τχ_1^n
+ 1/m_1 f_1(u^n-α/2, v^n-α/2) - f_1(U^n, α, V^n, α) ^CD^α_τχ_1^n .
Further for a,b>0, using the inequality ab≤a^2/2ϵ+ ϵ b^2/2, with ϵ =4 m_2 and from (<ref>), we have
1/m_2 ^CD^α_τχ_1^n ^2 + (∇χ_1^n, α, ∇ ^CD^α_τχ_1^n)
≤2 m_2/m_1^2 ^CD^α_τ R_hu^n - ^CD^α_t_n-α/2u ^2 + 2 m_2^2/m_1^2 Δ u^n, α - Δ u^n-α/2^2
+ 2 R_u^2 m_2/m_1^2 | M_1 (l(U^n, α), l(V^n, α)) - M_1 (l(u^n-α/2), l(v^n-α/2)) |^2
+ 2 m_2/m_1^2 f_1(u^n-α/2, v^n-α/2) - f_1(U^n, α, V^n, α) ^2 + 1/2 m_2 ^CD^α_τχ_1^n ^2.
Using (<ref>) and (<ref>) along with Poincaré inequality in (<ref>), we get
(∇χ_1^n, α, ∇ ^CD^α_τχ_1^n) ≤ C { ^CD^α_τ R_hu^n - ^CD^α_t_n-α/2u ^2 + Δ u^n, α - Δ u^n-α/2^2
+ ∇ζ_1^n, α^2 + ∇χ_1^n, α^2 + ∇ζ_2^n, α^2 + ∇χ_2^n, α^2
+ u^n, α - u^n-α/2^2 + v^n, α - v^n-α/2^2 },
where C is dependent on m_1, m_2, L'_1, K'_1, L_1, K_1, R_u.
Also, from Taylor's theorem, we have
Δ u^n, α - Δ u^n-α/2 ≤ C τ^2.
Using Lemma <ref>, (<ref>), (<ref>)-(<ref>) and (<ref>) in (<ref>), we can obtain
^CD^α_τ∇χ_1^n ^2 ≤ C (∇χ_1^n, α^2 + ∇χ_2^n, α^2 + ( h + τ^2 )^2 ).
Similarly, one can obtain an estimate for ∇χ_2^n, α as follows:
^CD^α_τ∇χ_2^n ^2 ≤ C (∇χ_1^n, α^2 + ∇χ_2^n, α^2 + ( h + τ^2 )^2 ).
Adding (<ref>) and (<ref>), we get
^CD^α_τ( ∇χ_1^n ^2 + ∇χ_2^n ^2 ) ≤ C ( h + τ^2 )^2 + C ( 1 - α/2)^2 (∇χ_1^n ^2 + ∇χ_2^n ^2 )
+ C α^2/4(∇χ_1^n-1^2 + ∇χ_2^n-1^2 ).
Applying Lemma <ref> ( λ_1 = C ( 1 - α/2)^2, λ_2 = C α^2/4, ϕ^n = C ( h + τ^2 )^2), we can get
∇χ_1^n ^2 + ∇χ_2^n ^2 ≤ C ( h + τ^2 )^2,
where C is dependent on λ_1, λ_2, α, T.
Finally, we use the Triangle inequality together with the estimates (<ref>), (<ref>) to obtain
∇ u^n - ∇ U^n + ∇ v^n - ∇ V^n ≤ ∇ζ_1^n + ∇χ_1^n + ∇ζ_2^n + ∇χ_2^n
≤ C ( h + τ^2 ).
This completes the proof.
In the present work, the convergence of the proposed scheme is proved without considering the weak singularity of the solution. We leave the convergence analysis of the weakly singular case as future work.
§ NUMERICAL EXPERIMENTS
In this section, we perform some numerical experiments by considering two different problems with known exact solutions. In both problems, we take the final time T=1 and the tolerance ϵ=10^-7 for stopping Newton's iteration. We denote the number of sub-intervals in time by N and let (M_s+1) be the number of node points in each spatial direction. In order to obtain the order of convergence in the spatial direction in the L^2(Ω) and H^1_0(Ω) norms, we take N= M_s for different values of M_s. Similarly, to calculate the convergence rate in the temporal direction in the L^2(Ω) norm, we take M_s=N for different values of N.
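For reference, the convergence rates reported in the tables below can be recovered from the errors on successive meshes in the standard way; the helper sketched here uses placeholder error values, not those of our tables, and assumes h ∼ 1/M_s and τ ∼ 1/N.

```python
import math

def observed_order(errors, mesh_params):
    """Observed order of convergence: rate_i = log(e_{i-1}/e_i) / log(m_i/m_{i-1})."""
    return [math.log(errors[i - 1] / errors[i]) / math.log(mesh_params[i] / mesh_params[i - 1])
            for i in range(1, len(errors))]

# Placeholder L2 errors that shrink by ~4 when M_s doubles, i.e. roughly second order:
print(observed_order([1.0e-2, 2.5e-3, 6.3e-4], [8, 16, 32]))  # ~[2.0, 2.0]
```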
For our first example, in (<ref>), we consider the spatial domain Ω = (0, 1), M_1(z,w)=3+sin z + cos w, M_2(z,w)=5+cos z + sin w. The functions f_1 and f_2 are chosen such that the analytical solutions of equation (<ref>) are u(x,t)= t^2+ αsin 2 π x and v(x,t)= t^3- αsinπ x.
Error and convergence rate in the spatial direction in L^2(Ω) and H^1_0(Ω) norms are given in Tables <ref> and <ref>, respectively. Furthermore, Table <ref> shows errors and convergence rates in the temporal direction in L^2(Ω) norm.
In this example, we take Ω = (0, 1) × (0,1), M_1(z,w)=3+sin z + cos w, M_2(z,w)=5+cos z + sin w. We choose f_1 and f_2 such that the analytical solutions of equation (<ref>) are u(x,y,t)= t^3 sin 2 π x sin 2 π y and v(x,y,t)= t^4 sinπ x sinπ y.
Error and convergence rate in the spatial direction in L^2(Ω) and H^1_0(Ω) norms are given in Tables <ref> and <ref>, respectively. Table <ref> shows errors and convergence rates in the temporal direction in L^2(Ω) norm. For α = 0.5, the graphs of exact and numerical solutions are shown in Figures <ref> and <ref>.
Declarations:
Conflict of interest: The author declares no competing interests.
20
[me] S. Chaudhary, Finite element analysis of nonlocal coupled parabolic problem using Newton's method, Comput. Math. Appl. 75-3 (2018), 981-1003.
[sk] S. Chaudhary, V. Srivastava, V. V. K. Srinivas Kumar, B. Srinivasan, Finite element approximation of nonlocal parabolic problem, Numer. Methods Partial Differ. Eq., 33 (2017) 786-313.
[CNS] S. Chaudhary, Crank-Nicolson-Galerkin finite element scheme for nonlocal coupled parabolic problem using the Newton method, Math. Meth. Appl. Sci., 41-2 (2018), 724-749.
[sp1] S. Chaudhary, P. J. Kundaliya, L1 scheme on graded mesh for subdiffusion equation with nonlocal diffusion term, Math. Comput. Simul., 195 (2022), 119-137.
[CNCD] D. Kumar, S. Chaudhary, V. V. K. Srinivas Kumar, Galerkin finite element schemes with fractional Crank-Nicolson method for the coupled time-fractional nonlinear diffusion system, Comput. Appl. Math., 38, Article number: 123 (2019).
[Podlubny] I. Podlubny, Fractional differential equations, Academic press, (1999).
[Anatoly] A. A. Kilbas, H. M. Srivastava, J. J. Trujillo, Theory and applications of fractional differential equations, Elsevier, (2006).
[Bruce] B. J. West, Fractional calculus in bioengineering, J. Stat. Phys., 126 (2007):1285-1286.
[r5new] Y. Luchko, M. Rivero, J. Trujillo, M. Pilar Velasco, Fractional models, non-locality, and complex systems, Comput. Math. Appl., 59-3 (2010), 1048-1056.
[ch] M. Chipot, B. Lovat, On the asymptotic behaviour of some nonlocal problems, Positivity 3 (1999), 65-81.
[r6new] S. B. Menezes, Remarks on weak solutions for a nonlocal parabolic problem, Int. J. Math. Math. Sci., 2006 (2006), 1-10.
[r1] K. Diethelm, The Analysis of Fractional Differential Equations: An Application-Oriented Using Differential Operators of Caputo Type, Lecture Notes in Mathematics, Springer, (2010).
[Alm] R. M. P. Almeida, S. N. Antontsev, J. C. M. Duque, J. Ferreira, A reaction-diffusion model for the non-local coupled system: existence, uniqueness, long-time behaviour and localization properties of solutions, IMA J. Appl. Math., 81-2 (2016), 344-364.
[JCMD] J. C. M. Duque, R. M. P. Almeida, S. N. Antontsev, J. Ferreira, The Euler-Galerkin finite element method for a nonlocal coupled system of reaction-diffusion type, J. Comput. Appl. Math., 296 (2016), 116-126.
[C.A.] C. A. Raposo, M. Sepúlveda, O. V. Villagrán, D. C. Pereira, M. L. Santos, Solution and asymptotic behavior for a nonlocal coupled system of reaction-diffusion, Acta Appl. Math., 102 (2008), 37-56.
[LI] L. Li, L. Jin, S. Fang, Existence and uniqueness of the solution to a coupled fractional diffusion system, Adv. Differ. Equ., 2015, Article number: 370 (2015).
[cnm] B. Jin, B. Li, and Z. Zhou, An analysis of the Crank-Nicolson method for subdiffusion, IMA J. Numer. Anal., 38-1 (2017), 518-541.
[yd] Y. Dimitrov, Numerical approximations for fractional differential equations, J. fractional calc. & appl., 5-22 (2014), 1-45.
[scs] Guang Hua Gao, Hai Wei Sun, Zhi Zhong Sun, Stability and convergence of finite difference schemes for a class of time-fractional sub-diffusion equations based on certain superconvergence, J. Comput. Phys., 280 (2015), 510-528.
[CND] D. Kumar, S. Chaudhary, V. V. K. Srinivas Kumar, Fractional Crank-Nicolson-Galerkin finite element scheme for the time-fractional nonlinear diffusion equation, Numer. Methods Partial Differ. Eq., 35 (2019), 2056-2075.
[r3] T. Gudi, Finite element method for a nonlocal problem of Kirchhoff type, SIAM J. Numer. Anal., 50-2 (2012), 657-668.
[mn3] J. Manimaran, L. Shangerganesh, Error estimates for Galerkin finite element approximations of time-fractional nonlocal diffusion equation, Int. J. Comput. Math., 98-7 (2020), 1365-1384.
[vth] V. Thomée, Galerkin Finite Element Methods for Parabolic Problems, Second revised and expanded ed., Springer, Berlin, (2006).
[hr12] C. Huang, M. Stynes, Optimal spatial H^1-norm analysis of a finite element method for a time-fractional diffusion equation, J. Comput. Appl. Math., 367 (2020), 112435.
[r02] M. Stynes, E. O'Riordan, J. Gracia, Error analysis of a finite difference method on graded meshes for a time-fractional diffusion equation, SIAM J. Numer. Anal., 55 (2017), 1057-1079.
[AAl2] A. Alikhanov, A new difference scheme for the time fractional diffusion equation, J. Comput. Phys., 280 (2015) 424-438.
[r11] N. Kopteva, Error analysis of the L1 method on graded and uniform meshes for a fractional derivative problem in two and three dimensions, Math. Comp., 8 (2019) 2135-2155.
[r8] B. Jin, B. Li, Z. Zhou, Numerical analysis of nonlinear subdiffusion equations, SIAM J. Numer. Anal., 56(1) (2018) 1-23.
[r5a] J. Ren, H. Liao, J. Zhang, Z. Zhang, Sharp H1-norm error estimates of two time-stepping schemes for reaction-subdiffusion problems, J. Comput. Appl. Math., 389 (2021), 113352.
[mk] M. Al-Maskari, S. Karaa, The time-fractional Cahn-Hilliard equation: analysis and approximation, IMA J. Numer. Anal., 2021. Doi:10.1093/imanum/drab025.
[cl1] C. Lubich, Discretized fractional calculus, SIAM J. Math. Anal., 17 (1986), 704-719.
[cl2] C. Lubich, Convolution quadrature and discretized operational calculus. I, Numer. Math., 52 (1988), 129-145.
|
http://arxiv.org/abs/2306.02180v1
|
20230603192654
|
The Absolute Age of M92
|
[
"Jiaqi",
"Ying",
"Brian Chaboyer",
"Emily M. Boudreaux",
"Catherine Slaughter",
"Michael Boylan-Kolchin",
"Daniel Weisz"
] |
astro-ph.SR
|
[
"astro-ph.SR",
"astro-ph.CO",
"astro-ph.GA"
] |
Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755, USA
Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755, USA
Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755, USA
Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755, USA
Leiden Observatory, Leiden University, NL-2300 RA Leiden, the Netherlands
Department of Astronomy, The University of Texas at Austin, TX 78712, USA
Department of Astronomy, University of California Berkeley, CA 94720, USA
The absolute age of a simple stellar population is of fundamental interest for a wide range of applications but is difficult to measure in practice, as it requires an understanding of the uncertainties in a variety of stellar evolution processes as well as the uncertainty in the distance, reddening and composition. As a result, most studies focus only on the relative age by assuming that stellar evolution calculations are accurate and using age determination techniques that are relatively independent of distance and reddening. Here, we construct 20,000 sets of theoretical isochrones through Monte Carlo simulation using the Dartmouth Stellar Evolution Program to measure the absolute age of the globular cluster M92. For each model, we vary a range of input physics used in the stellar evolution models, including opacities, nuclear reaction rates, diffusion coefficients, atmospheric boundary conditions, helium abundance, and treatment of convection. We also explore variations in the distance and reddening as well as its overall metallicity and α enhancement. We generate simulated Hess diagrams around the main-sequence turn-off region from each set of isochrones and use a Voronoi binning method to fit the diagrams to HST ACS data. We find the age of M92 to be 13.80 ± 0.75 Gyr. The 5.4% error in the absolute age is dominated by the uncertainty in the distance to M92 (∼ 80% of the error budget); of the remaining parameters, only the total metallicity, α element abundance, and treatment of helium diffusion contribute significantly to the total error.
§ INTRODUCTION
Globular clusters (GCs) are stable, tightly bound clusters of stars. They are often modeled as simple stellar populations, as stars in a GC are believed to have the same origin and similar composition and age. As a result, GCs are an important observational basis for understanding composite stellar populations both inside and outside of the Milky Way, as they can be used as building blocks for stellar population synthesis <cit.>.
GCs are also the oldest objects in the Galaxy whose age may be accurately determined. Using JWST data, <cit.> found globular clusters formed at z > 9, only ∼ 0.5 Gyr after the big bang. Therefore, most GCs are relics of high-redshift star formation and contain the fossil imprint of the earliest phases of galaxy formation. Moreover, the bimodality of GCs (each galaxy generally has a metal-poor and a metal-rich sub-population) permits the investigation of two distinct phases of galaxy assembly (stellar halo and bulge) far beyond the Local Group <cit.>. M92 is one of the oldest and most metal-poor galactic GCs <cit.>. The age of M92, therefore, provides a limit to the age of the universe <cit.> and insights into star formation in the early universe.
Due to its richness, relative proximity and low reddening, M92 often serves as a benchmark for studies of low-metallicity stellar systems. For example, <cit.> studied the stellar populations of 6 ultra-faint dwarf galaxies (UFDs) using a combination of high-precision photometric data and concluded that the stars of the smallest galaxies in the Universe formed before reionization, based on the unusual similarity between the stellar populations of all 6 UFDs and that of M92. As another example of the benchmark nature of M92, it was among the first objects observed with JWST as part of the early release science program 1334 <cit.>.
To first order, stars in a GC can be assumed to form at the same time with the same composition; as a result, theoretical isochrone age fitting is the most widely used method to determine the age of GCs. Theoretical isochrones can be generated by finding the common phases of stellar evolution shared by stellar evolution models of different masses <cit.>. A variety of methods have been applied to determine the best-fit isochrones for observational data and, therefore, constrain the ages of GCs.
Several examples exist for M92. <cit.> used the luminosity of the main sequence (MS) turn-off, combined with the color difference between the turn-off and the base of the giant branch, to find the age of M92 to be 12.3 ± 0.9 Gyr. Using the luminosity of the main-sequence turn-off, but a different distance estimate, <cit.> found the age of M92 to be 14.8 ± 2.5 Gyr. Utilizing the shape of the main sequence turn-off region as an age indicator, along with a new estimate of the distance modulus (DM) and reddening, <cit.> estimated the age of M92 to be 13.5 ± 1.0 Gyr. <cit.> combined data from three different photometric systems, the Sloan Digital Sky Survey (SDSS), Johnson-Kron-Cousins, and Advanced Camera for Surveys (ACS), and, using the morphology and number counts of stars in the main-sequence turn-off, red giant branch and horizontal branch, found the age of M92 to be 11 ± 1.5 Gyr. <cit.> derived ridge lines in the Color Magnitude Diagram (CMD) from the ACS data to perform relative MS fitting between clusters and found the age of M92 to be 13.18 ± 0.51 Gyr. In general, the uncertainty in age determinations for GCs typically takes into account the uncertainties in the observed properties of a globular cluster (distance, reddening, and composition), but does not include the uncertainty associated with the theoretical stellar models and isochrones which are used to determine the age.
Modern stellar evolution codes can generate theoretical stellar models quickly for a wide range of initial conditions. Theoretical isochrones are sensitive to the input parameters used to generate these stellar models. Most previous studies using theoretical isochrones are limited in that they do not take into consideration the wide range of uncertainty in constructing stellar models. In this paper, we utilize a Monte Carlo (MC) approach to generate uncertainties in theoretical isochrones. We generate isochrones by varying various input physics in the stellar evolution models. For a given stellar model, we will prescribe its mass and heavy element composition and then vary the opacities, nuclear reaction rates, microscopic diffusion coefficients, atmospheric boundary conditions, helium abundance, and treatment of convection. All those parameters (shown in Table <ref>) are varied during the MC simulation based upon their known uncertainties. The resultant isochrones provide a good estimation of the uncertainty associated with modern stellar evolution calculations.
A variety of methods have been used to compare theoretical isochrones to observational data in order to determine the age of a stellar population. Typically, certain age-sensitive aspects of the observed color-magnitude diagram are compared to stellar models/isochrones. The main sequence turn-off region is particularly sensitive to age and is therefore often used to determine the ages of GCs <cit.>. However, the morphology of the horizontal branch has also been used in measuring GC ages <cit.>. <cit.> used both the main sequence turn-off region and the horizontal branch to determine the age of a few globular clusters, including M92. These previous studies have focused on comparing the shape/morphology of the observational data to theoretical isochrones in order to determine their age.
In this paper, we present a new isochrone age fitting method which uses Voronoi binning and fits the number density of stars in the main sequence turn-off region to determine ages. By using the density of stars in the color-magnitude diagram (usually referred to as a Hess diagram) to determine the age of a cluster, we utilize all of the observational information to constrain the age. This may lead to smaller uncertainties compared to previous work. This paper is structured as follows. In <ref>, we introduce the observational data; <ref> covers the process of isochrone construction; <ref> presents the details of our isochrone age fitting method and our best age measurement; and <ref> includes a discussion of the sources of error and covariance.
§ OBSERVATIONAL DATA
§.§ Calibration Stars
There are two single, metal-poor main sequence stars that have accurate HST ACS photometry (in the same filters as the M92 ACS data) and a composition virtually identical to that of M92: HIP 46120 and HIP 106924. The HST photometry is presented in <cit.> and the high resolution spectroscopic abundances in <cit.>. These two stars have accurate Gaia EDR3 parallaxes of 14.776± 0.014mas and 15.019± 0.012mas, respectively <cit.>.
The EDR3 parallaxes are known to suffer from systematic zero-point errors, and a calibration of this error has been given by <cit.>. This correction is -0.012mas for HIP 46120 and -0.020mas for HIP 106924. However, the zero-point correction is not that well calibrated for bright stars like these two <cit.>, and there is evidence that it may be an over-correction for bright stars <cit.>, so we elected to add half of the zero-point correction to the quoted EDR3 parallaxes. The uncertainty in the parallaxes was taken to be the value of the zero-point correction added in quadrature with the parallax uncertainty given in EDR3.
Combining the parallaxes with the HST ACS photometry, we measure M_F606W = 5.7867± 0.0026mag for HIP 46120 and M_F606W = 6.0406± 0.0037mag for HIP 106924. These stars have zero reddening <cit.> and observed colors of (F606W-F814W)= 0.566± 0.002 and (F606W-F814W)=0.601± 0.005. These accurate colors and absolute magnitudes will be used to test the isochrones in <ref>.
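As an illustration of the conversion used here, the absolute magnitudes follow from the apparent F606W magnitudes and the (half-)corrected parallaxes; the short sketch below encodes that arithmetic with our own variable names and is not taken from the paper's code.

```python
import numpy as np

def absolute_magnitude(m_app, parallax_mas, zp_mas, sigma_parallax_mas):
    """Absolute magnitude from a parallax in mas: M = m + 5*log10(p) - 10.

    Following the text, only half of the (negative) EDR3 zero-point correction
    zp_mas is applied to the parallax, and the full correction is added in
    quadrature to the quoted parallax uncertainty.  Photometric errors are not
    propagated here.
    """
    p = parallax_mas + 0.5 * zp_mas                    # e.g. zp_mas = -0.012 for HIP 46120
    sigma_p = np.hypot(sigma_parallax_mas, abs(zp_mas))
    M = m_app + 5.0 * np.log10(p) - 10.0
    sigma_M = 5.0 / np.log(10.0) * sigma_p / p         # parallax contribution only
    return M, sigma_M
```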
§.§ M92
To estimate the age of M92, we use calibrated data for M92 from the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) globular cluster survey treasury program <cit.>. The survey obtained photometry with S/N >10 for main sequence stars with masses > 0.2 M_⊙ using the ACS Wide Field Channel. Artificial star tests provide an accurate estimate of the photometric uncertainties and completeness as a function of magnitude and cluster position <cit.>. Since this paper focuses on determining the age of M92, we fit the isochrones to a subset of stars around the main sequence turn-off, whose position is most sensitive to variations in age and relatively insensitive to the present day mass function. These stars have 15.925 < F606W < 19.925, which is ± 2magnitudes of the point on the subgiant branch which is 0.05mag redder than the main sequence turn-off (MSTO). Additionally, we remove blue straggler stars and outliers by selecting stars that are within 0.08mag in F606W of the median ridgeline in a magnitude-magnitude diagram of F814W and F606W. With these cuts, our observational sample contains 18,077 stars.
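The ridge-line cut just described can be implemented along the following lines; the bin width used for the running median and the edge handling are our own illustrative choices, not values taken from the paper.

```python
import numpy as np

def ridgeline_filter(f606w, f814w, clip=0.08, bin_width=0.05):
    """Keep stars within `clip` mag in F606W of a median ridge line, where the
    ridge line is a running median of F606W in narrow F814W bins."""
    f606w = np.asarray(f606w, dtype=float)
    f814w = np.asarray(f814w, dtype=float)
    edges = np.arange(f814w.min(), f814w.max() + bin_width, bin_width)
    bin_index = np.digitize(f814w, edges)
    ridge = np.empty_like(f606w)
    for b in np.unique(bin_index):
        in_bin = bin_index == b
        ridge[in_bin] = np.median(f606w[in_bin])
    return np.abs(f606w - ridge) < clip
```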
We note that previous studies (e.g., ) demonstrate that M92, like other old globular clusters, hosts multiple stellar populations. These multiple stellar populations typically have somewhat different lighter element abundances, the origin of which is not currently known. These multiple populations are observed in UV filters such as F275W and F336W from the HST. However, these populations are indistinct from each other in the F606W and F814W data used in this paper. As a result, these multiple populations will not be considered in this study.
§ ISOCHRONE CONSTRUCTION AND TESTING
We use the Dartmouth Stellar Evolution Program (DSEP) <cit.> to generate stellar models and isochrones and generally use literature estimates when adopting uncertainties for each parameter (see Table <ref>). One area where we consider a wider range of uncertainties is in the treatment of convection: even though nearly all models use a solar-calibrated mixing length, a variety of studies have demonstrated that this may not be the most appropriate value for other stars. <cit.> studied metal-poor stars including M92 and discovered that the solar-calibrated value of the mixing length parameter α_ was ineffective at reproducing their observed properties. As a result, we adopt a wider input range for the mixing length parameter α_ to cover the range of empirically calibrated values. Another source of uncertainty associated with the treatment of convection in stellar models is the amount of convective overshoot, which may occur at the formal edge of a convection zone (which is defined by the buoyancy force being zero). Various studies have calibrated the amount of convective overshoot by comparing to observations (e.g., ). In general, these studies have found a fairly small value of 0.0 to 0.2 pressure scale heights; we therefore adopt this range for convective overshoot in our analysis.
We generate 20,000 sets of input parameters by drawing Monte Carlo samples of the parameters shown in Table <ref> from their associated probability distribution functions. Each set of input parameters is used to evolve 21 stellar models with masses from 0.65 M_⊙ to 1.5 M_⊙ with an increment of 0.05 M_⊙ and 12 lower-mass stellar models with masses from 0.3 M_⊙ to 0.63 M_⊙ with an increment of 0.03 M_⊙. The lower-mass models use FreeEOS-2.2.1 <cit.>, while the higher-mass models use an analytical equation of state which includes the Debye-Hückel correction <cit.>. These stellar models are used to generate 41 theoretical isochrones from 8Gyr to 16Gyr with an increment of 200Myr. The 41 isochrones of different ages corresponding to the same set of MC input parameters are considered a single MC isochrone set. Each isochrone is constructed with a dense grid of 400 equal evolutionary points in order to ensure that the output isochrones have a high density of points to avoid any interpolation errors when constructing simulated color-magnitude diagrams (sCMDs)[The Monte Carlo isochrones created for this project are available at <https://doi.org/10.5281/zenodo.7758605>. The file is stored in HDF5 format, and a sample python program is provided which gives details on extracting isochrones from the HDF5 file.]. In summary, we generated 20,000 isochrone sets. Each isochrone set consists of 41 isochrones of different ages, for a total of 20,000 × 41 = 820,000 individual isochrones. Figure <ref> shows the distribution of all 20,000 13-Gyr isochrones generated for this project. The extensive range covered by the single-age isochrones in the color-magnitude plane affirms our hypothesis that varying the MC input parameters can significantly influence the resulting isochrones. Hence, it is imperative to take into account the uncertainty in these parameters when providing an accurate determination of the age of M92.
§.§ Testing the Isochrones
As HIP 46120 and HIP 106924 have known absolute magnitudes and colors, and nearly identical compositions to M92, these stars provide an empirical baseline for an isochrone goodness-of-fit metric. Specifically, we perform a χ^2 goodness-of-fit test between the two calibrating stars and each age of each MC isochrone set.
The lowest χ^2 value for a given MC isochrone set is then used to compute a weighting function for that entire set of MC isochrones (i.e., for a given MC isochrone set, we assume the best-fitting isochrone gives an indication of the age of the calibrating star, which may be different from the age of M92). Essentially, how well any given MC isochrone set fits to the observed calibration data will determine the weight that a MC isochrone from that set is given when fitting to M92. We use the inverse of the probability that a given MC isochrone is inconsistent with the calibration data as the weighting function.
We define the χ^2 metric for each isochrone as the quadrature sum of the differences between that isochrone's and each calibrating star's F606W magnitude and F606W-F814W color. In order to account for the uncertainty in the calibrating stars' photometry, the differences used are normalized by the magnitude and color uncertainties. χ^2 is found for each age in each of the 20,000 sets of isochrones. The age with the minimum χ^2 is then selected for each set of isochrones. From these minimized χ^2 values, we directly compute the weighting function.
The two calibrating stars provide 2 degrees of freedom, n, for the χ^2 distribution; in the case where n=2, the cumulative distribution function (CDF) for a χ^2 distribution is given by
CDF = 1 - e^-x/2 .
The weighting function used, p, for any given isochrone is then
p = 1-CDF .
Figure <ref> shows the cumulative distribution of minimized χ^2 values, and Figure <ref> shows the cumulative distribution of the weight p for the 20,000 sets of MC isochrones. Approximately 20% of the isochrone sets have χ^2 < 2 and so provide a good fit to the calibration stars. Only about 35% of the isochrone sets have a CDF < 0.95, which corresponds to a weighting function p > 0.05.
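In code, this weighting amounts to evaluating the survival function of a two-degree-of-freedom χ^2 distribution at each set's minimum χ^2; the snippet below is a minimal sketch with our own naming.

```python
import numpy as np

def calibration_weight(chi2_min):
    """Weight p = 1 - CDF for a chi-square distribution with 2 degrees of freedom,
    i.e. p = exp(-chi2_min / 2), evaluated at the best-fitting age of each MC set."""
    return np.exp(-0.5 * np.asarray(chi2_min, dtype=float))

# A set whose best age gives chi2_min = 2 receives weight exp(-1) ~ 0.37,
# while chi2_min = 6 is down-weighted to ~ 0.05.
```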
§.§ Simulated Color-Magnitude Diagram
Each MC set of theoretical isochrones is used to create a set of simulated color-magnitude diagrams (sCMDs) of M92, which will be compared with the observational CMD of <cit.>. A sCMD is constructed by randomly creating a four-million-point sample for each isochrone in the following steps:
* A random distance from the center of the cluster is selected from the observed distribution <cit.>.
* A random mass is selected using the present day mass function determined by <cit.>, who found a power law mass function with a slope of -1.02 using the same ACS M92 data. The magnitudes (F606W and F814W) of this simulated star are then determined from the isochrone.
* The simulated star is randomly assigned to be a member of a binary system, using the observed binary mass fraction of 0.02 <cit.>.
* If a star is a member of a binary system, then a secondary star is created, assuming a flat secondary mass distribution with mass ratio q=0.5 to 1.0. The magnitudes of this secondary star are determined from the isochrone and added to the magnitudes of the primary to arrive at the magnitudes of the binary star system, which is considered to be a single star in the photometry.
* It is determined if the star would be recovered in the photometric reduction, using the photometric completeness function from <cit.> for the M92 ACS data. This photometric completeness function depends on the magnitudes of the star, and its distance from the center of the cluster.
* If the star is found to be observable in the previous step, then photometric errors will be randomly selected from their observed distribution (which is a function of magnitude and distance from the cluster center). The observed distribution of photometric errors is determined from the artificial star tests of <cit.>.
* The photometric error in F606W and F814W are added to the magnitude of the simulated star, and these magnitudes are used in creating the sCMD.
Once the sCMD is created, the same color and magnitude filters which were applied to the observed M92 data are applied to the sCMD. After this filtering, the sCMD consists of about two million simulated data points for each theoretical isochrone.
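A schematic of one draw of the procedure above is given below. The callables passed in stand for the observed radial distribution, the PDMF sampler, the interpolated isochrone, the ACS completeness map and the artificial-star error distributions; they are placeholders for illustration, not the pipeline actually used.

```python
import numpy as np

def simulate_star(isochrone_mags, radius_sampler, mass_sampler, completeness,
                  error_sampler, binary_fraction=0.02, rng=None):
    """Draw one simulated star (or None if it would not be recovered)."""
    rng = rng if rng is not None else np.random.default_rng()
    r = radius_sampler()                               # step 1: distance from cluster centre
    m_primary = mass_sampler()                         # step 2: mass from the PDMF
    f606, f814 = isochrone_mags(m_primary)
    if rng.random() < binary_fraction:                 # steps 3-4: unresolved binary companion
        q = rng.uniform(0.5, 1.0)
        s606, s814 = isochrone_mags(q * m_primary)
        f606 = -2.5 * np.log10(10**(-0.4 * f606) + 10**(-0.4 * s606))
        f814 = -2.5 * np.log10(10**(-0.4 * f814) + 10**(-0.4 * s814))
    if rng.random() > completeness(f606, f814, r):     # step 5: completeness test
        return None                                    # star not recovered
    e606, e814 = error_sampler(f606, f814, r)          # step 6: photometric errors
    return f606 + e606, f814 + e814                    # step 7: simulated observed magnitudes
```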
We note that after we completed our age determination for M92, we discovered that <cit.> found a strong correlation between the distance from the center of the GC and the present-day mass function (PDMF) slope. <cit.> found PDMF slopes in the inner region of M92 which ranged from -1 to +2, while our sCMD assumed -1.02 <cit.>. Since we are only fitting stars which are around the main-sequence turn-off, which all have similar masses, the exact PDMF slope should have little impact on our results. To test this, we created sCMDs with PDMF slopes ranging from -2.02 to 1.02 and found that the change in PDMF slope indeed had a negligible impact on the estimated age of M92.
§ ISOCHRONE FITTING
To estimate the age of M92, each sCMD is compared to the observational CMD and the fit probability is calculated. In order to compare CMDs, we divide the 2D CMD into multiple subsections and estimate the goodness of fit using a χ^2 method:
χ^2 = ∑_i( O_i - E_i )^2/E_i,
where E_i is the number of data points in a subset of the CMD in the observational data and O_i is the number of sampled data points in the same subset from the sCMD. Since the number of stars in the sCMD is ∼ 100times larger than the number of stars in the observed CMD, the uncertainty in the number counts for simulated stars is negligible in comparison to the uncertainty in the number counts for the observed stars. The age determination is done in a series of steps, as discussed below.
§.§ Voronoi Binning
To compare the sCMD with the observational CMD, a method to partition the 2D CMD was required. The most intuitive method would be to divide the CMD using a uniform grid. However, the distribution of stars in the CMD is highly non-uniform. As a result, if the bins of the 2D CMD were defined by an evenly spaced grid, there would be a wide distribution in the number of expected data points per bin, with some bins being empty (either in the real or the simulated data), so that equation (<ref>) could not be used for those bins. A better approach, namely a non-uniform partition of the 2D CMD resulting in a roughly equal number of points per bin, is required.
To achieve this requirement, we use the adaptive Voronoi binning method of <cit.>. The algorithm sets up initial bins based on a Voronoi tessellation formed from the simulated CMD. It iteratively combines nearby bins, thus raising the signal-to-noise ratio until it reaches the target. It satisfies three requirements:
* Topological requirement: there will be no data points which are not in a bin, and no bins overlap,
* Morphological requirement: the bin shape will be as “compact” (or “round”) as possible so that two pixels from two corners across the CMD will not be put into one bin,
* Uniformity requirement: the resulting bins will have a number of data points similar to the target. Therefore, all bins can be considered as having equal statistical significance.
This Voronoi binning is extremely computationally demanding, and to save CPU time, 180,000 points from each sCMD are randomly selected to generate the Voronoi bins. Each set of Voronoi bins contains 800 bins with an average of 225 points per bin. Because the Voronoi binning method of <cit.> was designed to deal with images and requires the pixel size to be the same on both axes, we rescaled the F606W-F814W color before doing the Voronoi binning. Because the linear transformation does not change the topology of the data, the original data can be easily recovered for further analysis. Different combinations of the number of data points used and the number of Voronoi bins were tested, and the combination used in this paper balances computational time and accuracy. After the Voronoi bins are determined, the entire sCMD is binned and contributes to the expectation in equation (<ref>).
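Given the Voronoi generators computed from the simulated points, binning both catalogues and evaluating equation (<ref>) reduces to a nearest-generator lookup; the sketch below uses a k-d tree for that step and assumes the color axis has already been rescaled as described. Rescaling the simulated counts to the observed total is our assumption, since the text does not spell out this normalisation.

```python
import numpy as np
from scipy.spatial import cKDTree

def binned_chi2(obs_cmd, sim_cmd, generators):
    """chi^2 comparison of observed and simulated CMDs on a fixed set of Voronoi bins.

    obs_cmd, sim_cmd : (N, 2) arrays of (rescaled color, magnitude);
    generators       : (n_bins, 2) array of Voronoi bin generators from the sCMD.
    """
    tree = cKDTree(generators)
    n_bins = len(generators)
    _, obs_bin = tree.query(obs_cmd)                              # nearest-generator assignment
    _, sim_bin = tree.query(sim_cmd)
    E = np.bincount(obs_bin, minlength=n_bins).astype(float)      # observed counts per bin
    O = np.bincount(sim_bin, minlength=n_bins).astype(float)      # simulated counts per bin
    O *= E.sum() / O.sum()                                        # rescale sCMD to observed total
    mask = E > 0                                                  # guard against empty observed bins
    return np.sum((O[mask] - E[mask]) ** 2 / E[mask])
```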
Figure <ref> shows an example of the Voronoi binning <cit.>. The CMD is divided into 800 subsets of different sizes. Most bins are located near the isochrone (red line), and few bins lie outside, where the density of simulated data points (blue dots) is low.
For each set of Voronoi bins, the observational CMD is shifted by a range of distance modulus and reddening, with the ranges chosen to encompass the observed uncertainties in these quantities. The distance to M92 is estimated using main sequence fitting with the two calibration stars, which have good HST photometry and parallaxes from Gaia EDR3 <cit.>. Assuming a reddening of E_F606W-F814W = 0.02, a distance modulus of (m-M)_F606W= 14.80 ± 0.02 is found. Assuming a reddening of E_F606W-F814W = 0.01, the main sequence fitting yields a distance modulus of (m-M)_F606W= 14.75 ± 0.02. The evidence favors the higher reddening value. An independent distance estimate to M92 by <cit.> is (m-M)_V= 14.62 and E_B-V = 0.023, based upon fitting ground-based data to their isochrones (which assumes no uncertainty in their isochrones). <cit.> found a distance modulus of (m-M)_V= 14.82 and E_B-V = 0.025 from fitting a different set of isochrones to a different ground-based dataset. <cit.> used a variety of methods (including EDR3 parallaxes of cluster stars, and main sequence fitting) to estimate the distance to M92 to be 8.48 ± 0.17kpc, which corresponds to (m-M)_o= 14.64 ± 0.04. Assuming a reddening of E_B-V = 0.01, this corresponds to (m-M)_F606W= 14.67. Based upon the above, the distance moduli used in this paper range from 14.62 to 14.82 with an increment of 0.01, and the reddening values range from 0.0 to 0.05 with an increment of 0.01. For each combination of distance modulus and reddening, a χ^2 value was calculated using equation (<ref>) for each of the 41 ages in each MC isochrone set. The minimum χ^2 was then selected as the best estimate of the age, distance modulus and reddening for that particular MC isochrone set.
§.§ Empirical χ^2 Distribution
<cit.>
demonstrated that, with large data sets, p-value-based hypothesis testing no longer provides scientifically reliable results. Therefore, with the M92 data, it is inappropriate to estimate the goodness of fit using the standard χ^2 fit probability function. To interpret the χ^2 values calculated in section <ref>, a statistical method to determine the empirical χ^2 distribution is required. To do so, we re-sample the observational data using the photometric errors and completeness from the artificial star tests <cit.>. From the observed data, 10,000 CMDs, each with about two million data points, are generated. Using the same method described in section <ref>, a set of Voronoi bins is determined for each re-sampled CMD and a χ^2 value is calculated using equation (<ref>). As a result, an empirical χ^2 distribution is determined and is used for comparison with the theoretical values.
Figure <ref> shows the empirical χ^2 distribution and the χ^2 values determined when comparing the MC isochrones to the observations. Of the 20,000 sets of theoretical isochrones created in this study, 1,100 isochrones are within 3 σ of the mean of the empirical distribution. The other 18,900 MC isochrones yielded a very poor fit to the observed data. Figure <ref> is a zoomed-in version of Figure <ref> and shows the 1,100 isochrones that are within 3 σ of the mean of the empirical distribution. Most of these 1,100 isochrones also provided a good fit to the calibration star data. However, 66 of the isochrones provided relatively poor fits to the calibration stars, leading to a calibration star weighting value of less than 0.10. The shape of the empirical χ^2 distribution can be fit with a normal distribution with χ^2 = 3712 ± 39 and spans a very narrow region in χ^2. Figure <ref> shows that the mean of the empirical χ^2 distribution can be several orders of magnitude smaller than the χ^2 of isochrones which poorly fit the data, which demonstrates the sensitivity of the isochrones to the MC parameters and the selectivity of our age determination technique. Figure <ref>(a) and Fig. <ref>(b) show examples of the sCMDs which are generated from those 1,100 theoretical isochrones. Figure <ref>(a) has a χ^2 value which is within 1 σ of the mean of the empirical distribution and is considered a “high probability fit" for M92, with a higher weight in the final age estimation. Figure <ref>(b) has a χ^2 value which is almost 3 σ greater than the mean of the empirical distribution and is considered a “low probability fit" for M92. Although it is taken into consideration in the age estimation, it has a much lower weight.
Figure <ref>(c) and Fig. <ref>(d) show the corresponding χ^2 values for each of the Voronoi bins of the two sCMDs shown in Fig. <ref>(a) and Fig. <ref>(b), respectively. Figure <ref>(c) and Fig. <ref>(d) show a difference mostly in the main sequence and MSTO region, which favors the sCMD shown in Fig. <ref>(a).
§.§ Age Estimation
Figure <ref> shows a clear bias towards χ^2 values higher than the mean of the empirical χ^2 distribution. As a result, χ^2 values smaller than the mean of the empirical χ^2 distribution were considered to have a “fit probability" of 1, while the “fit probability" of χ^2 values higher than the mean was defined by the empirical χ^2 distribution, whose cumulative distribution function is shown in Fig. <ref>.
The “fit probability" from the empirical χ^2 distribution was multiplied by the probability found in section <ref> to result in a final weight for each of the 1,100 isochrones. The age distribution is shown in Figure <ref>. The weighted average of the age of the 1,100 isochrones is equal to 13.80 Gyr and the weighted standard deviation is 0.75 Gyr. Thus, we measure the absolute age of M92 to be 13.80 ± 0.75 Gyr. At 95% confidence, we find the age to be in the range 12.4-15.4 Gyr.
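Putting the two weights together, the final statistics can be computed as in the sketch below, where the empirical χ^2 distribution is summarised by the fitted normal (3712 ± 39) rather than by the full re-sampled distribution; that simplification, and the variable names, are ours.

```python
import numpy as np
from scipy.stats import norm

def weighted_age(ages, chi2_fit, calib_weight, emp_mean=3712.0, emp_sigma=39.0):
    """Weighted mean and standard deviation of the best-fit ages of the surviving MC sets."""
    ages = np.asarray(ages, dtype=float)
    chi2_fit = np.asarray(chi2_fit, dtype=float)
    fit_prob = np.where(chi2_fit <= emp_mean, 1.0,
                        norm.sf(chi2_fit, loc=emp_mean, scale=emp_sigma))
    w = fit_prob * np.asarray(calib_weight, dtype=float)
    mean = np.average(ages, weights=w)
    std = np.sqrt(np.average((ages - mean) ** 2, weights=w))
    return mean, std
```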
§ DISCUSSION
§.§ Distance Modulus and Reddening
As described in section <ref>, we tested distance moduli ranging from 14.62 to 14.82 (with an increment of 0.01) and reddening values ranging from 0.0 to 0.05 (with an increment of 0.01) for each isochrone. The best-fitting age corresponding to each distance modulus and reddening is shown in Figure <ref>. This figure clearly (and unsurprisingly) indicates that lower best-fit ages favor higher distance moduli. The Pearson correlation coefficient between distance modulus and best-fit age is -0.780, indicating a strong negative correlation between the two. This result is expected: a higher distance modulus shifts the theoretical isochrone in the sCMD in the direction opposite to a shift due to a lower isochrone age, so the sCMD remains close to the true distribution of stars observed in M92. Figure <ref> also shows a strong preference toward lower reddening values.
The weighted average of the distance modulus is μ = 14.72 ± 0.04 mag (D=8.79 ± 0.16 kpc), which is similar to <cit.>, but lower than <cit.> and slightly higher than <cit.>, who found μ = 14.66 ± 0.04. Our result has a strong preference towards a low reddening: there are no well-fitting isochrones with E ( B - V ) = 0.04 or 0.05. Since the distribution of reddening is non-symmetric, we select the central 68 % of the distribution and find that the reddening of M92 is in the range E (B - V) = 0.005 ∼ 0.025 mag, with the distribution skewed to smaller reddening values. This is within the range of previous results, as shown in Table <ref>.
Most studies of GC age-dating <cit.> rely mainly on main sequence turn-off (MSTO) stars to determine the age of a cluster. However, we include a wider range of stars, with F606W magnitudes from 15.925 to 19.925. As a result, our study includes not only MSTO stars but also a subset of stars lying on the main sequence (MS) and giant branch (GB). Although MS and GB stars are not very sensitive to age, we include them in this study to constrain the distance modulus and reddening of M92.
To test this idea, we applied the method described in section <ref> and section <ref> to the M92 data using only MSTO stars. Due to computational limitations, we generated and fitted 2,000 sets of sCMDs (1,000 of which had been found to be good fits in our previous analysis, and 1,000 which were poor fits). By exclusively using MSTO stars, we determine the age of M92 to be 13.88 ± 0.81 Gyr, with distance modulus μ = 14.68 ± 0.05 and reddening E (B - V) = 0.005 ∼ 0.045. These results, which exhibit a slightly higher age and larger uncertainty compared to those in Table <ref>, follow expectations: removing MS and GB stars provides more freedom in selecting the distance modulus and reddening, thus partially offsetting the impact of the change in isochrone age. In this case, we suggest that the wider range of best-fit reddening values is the cause of the higher uncertainty in the estimated age, and that the slightly lower distance modulus value is likely due to the strong negative correlation between it and the age.
§.§ Monte Carlo Parameters
To determine if the observational data is best fit by a limited range in our MC parameters, we compare the distribution of input MC parameters (see Table <ref>) to the distribution of the MC parameters in the set of 1,100 best-fit isochrones. Most parameters have similar distributions, while differences are found for a few parameters. For example, the distribution of the mixing length is shown in Fig. <ref>. While a uniform distribution from 1.0 to 2.5 was used as input, the best-fitting isochrones show a very strong preference for mixing length values between 1.5 and 2.0.
The value of the solar-calibrated mixing length depends strongly on the surface boundary conditions which are used in constructing a stellar model. The correlation between the mixing length and the surface boundary condition for the 1,100 sets of best-fitting isochrones is shown in Fig. <ref>. DSEP determines the conditions at the surface of the star using model atmospheres. The three options used in the MC were PHOENIX model atmospheres (based upon a sophisticated radiative transfer code; see ), which have a solar-calibrated mixing length of 1.7, the simple Eddington gray model atmosphere, which has a solar-calibrated mixing length α_ = 1.7, and the empirical solar Krishna-Swamy atmosphere, which has a solar-calibrated mixing length α_ = 2.0. Both the gray and PHOENIX model atmospheres prefer lower mixing length values while the Krishna-Swamy model atmosphere prefers higher mixing length values when fit to M92. The resulting double-peak feature is shown in Fig. <ref>.
To determine which MC parameters are most important in determining the uncertainty in the age estimate for M92, the error budget for each parameter was calculated. For all the MC parameters in Table <ref>, along with the distance modulus and reddening, we performed a maximum likelihood estimation using a weighted least squares regression method, including the calculation of the covariance matrix. The estimate of the error contributed by each MC parameter is shown in Figure <ref>. Parameters that contribute less than 0.05% are combined as “others"; their contribution could be the result of their correlation with other parameters. The distance modulus is the dominant source of error.
Since the distance modulus parameter contributes most to the error and might dominate the other parameters, the values of the distance modulus and reddening were fixed, and the maximum likelihood estimation was repeated without those two parameters. The result shows a similar contribution from each MC parameter, in agreement with Figure <ref>. The 4 MC parameters which contribute the most to the error budget were selected, and their correlation with the estimated age is shown in Figure <ref>. There is no significant correlation between [Fe/H] and age. This is because M92 has a relatively well determined [Fe/H] value, and since it is very metal-poor, the uncertainty in the log-scaled [Fe/H] corresponds to a small change in the mass fraction of heavy elements (Z). There is a negative correlation between [α/Fe] and age, as the best-fit isochrones prefer a lower α abundance, which leads to a higher estimated age. The helium diffusion coefficient has an MC distribution that is weighted toward a somewhat smaller value than was found in <cit.> and displays an anti-correlation with age.
§ CONCLUSION
We determine the age of M92 using a statistical approach with Monte Carlo simulations which takes into account the uncertainties in the theoretical stellar evolution models and isochrones along with the observed uncertainties in the distance modulus, reddening and composition of M92. We created 20,000 sets of Monte Carlo input parameters with 20 variables, which were used to generate 20,000 sets of theoretical isochrones over an abundance range of -2.40 ≤ [Fe/H] ≤ -2.20 dex. We use DSEP to construct a set of isochrones from 8 Gyr to 16 Gyr with a 0.2 Gyr increment for each set of input parameters. Each isochrone is calibrated using HIP 46120 and HIP 106924, two single, main-sequence stars with accurate colors and absolute magnitudes from HST ACS photometry and Gaia EDR3 parallaxes.
Each calibrated isochrone is used to generate a sCMD with 4,000,000 data points. Using the Voronoi binning method, 800 bins are generated for each sCMD. The HST ACS data for M92 are fit by each set of Voronoi bins with a shift in distance modulus ranging from 14.62 to 14.82 with an increment of 0.01, and reddening ranging from 0.0 to 0.05 with an increment of 0.01. A χ^2 goodness-of-fit parameter was calculated (see eqn. <ref>) and compared to the empirical χ^2 distribution generated using the HST ACS data combined with the artificial star tests.
We find that 1,100 isochrones from the 20,000 sets of isochrones constructed were within 3 σ of the mean of the empirical distribution. The age of M92 is determined by the mean age of the 1,100 isochrones, weighted by the result from single star calibrations and χ^2 comparison. We find the age of M92 to be 13.80 ± 0.75 Gyr, an error of 5.4%. The dominant contributor to this uncertainty is the distance modulus, with the metallicity, α enhancement, and treatment of helium diffusion being the other sources of non-negligible error. The fact that the distance to M92, and not stellar physics, dominates the uncertainty points to the importance of precise and accurate distance measurements for further improvements in absolute age measurements. In future papers, we will present absolute age measurements for additional metal-poor GCs and their implications for our understanding of stellar physics and cosmology.
§ ACKNOWLEDGMENTS
We thank the anonymous referee for a careful review of the paper and helpful comments that improved the presentation of the paper. This material is based upon work supported by the National Science Foundation under Award No. 2007174, by NASA through AR 17043 from the Space Telescope Science Institute (STScI), which is operated by AURA, Inc., under NASA contract NAS5-26555, and from The William H. Neukom Institute for Computational Science at Dartmouth College. MBK acknowledges support from NSF CAREER award AST-1752913, NSF grants AST-1910346 and AST-2108962, NASA grant 80NSSC22K0827, and HST-AR-15809, HST-GO-15658, HST-GO-15901, HST-GO-15902, HST-AR-16159, HST-GO-16226, HST-GO-16686, HST-AR-17028, and HST-AR-17043 from STScI.
Dartmouth Stellar Evolution Program <cit.>; Topcat <cit.>
|
http://arxiv.org/abs/2306.03133v2
|
20230605180003
|
Krylov complexity in a natural basis for the Schrödinger algebra
|
[
"Dimitrios Patramanis",
"Watse Sybesma"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.str-el",
"hep-th"
] |
|
http://arxiv.org/abs/2306.02071v1
|
20230603102250
|
DU-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation
|
[
"Felipe Garrido-Lucero",
"Benjamin Heymann",
"Maxime Vono",
"Patrick Loiseau",
"Vianney Perchet"
] |
cs.AI
|
[
"cs.AI",
"cs.GT",
"stat.CO",
"stat.ML"
] |
DU-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation
=================================================================
Many machine learning problems require performing dataset valuation, i.e., quantifying the incremental gain, with respect to some relevant pre-defined utility, of aggregating an individual dataset with others.
As seminal examples, dataset valuation has been leveraged in collaborative and federated learning to create incentives for data sharing across several data owners.
The Shapley value has recently been proposed as a principled tool to achieve this goal due to its formal axiomatic justification.
Since its computation often requires exponential time, standard approximation strategies based on Monte Carlo integration have been considered.
Such generic approximation methods, however, remain expensive in some cases.
In this paper, we exploit the knowledge about the structure of the dataset valuation problem to devise more efficient Shapley value estimators.
We propose a novel approximation of the Shapley value, referred to as discrete uniform Shapley (DU-Shapley), which is expressed as an expectation under a discrete uniform distribution with support of reasonable size.
We justify the relevancy of the proposed framework via asymptotic and non-asymptotic theoretical guarantees and show that DU-Shapley tends towards the Shapley value when the number of data owners is large.
The benefits of the proposed framework are finally illustrated on several dataset valuation benchmarks. DU-Shapley outperforms other Shapley value approximations, even when the number of data owners is small.
§ INTRODUCTION
One of the main challenges for training machine-learning (ML) models with enough generalisation capabilities is to access a sufficiently large set of labeled training data.
These data often exist but are commonly spread across many parties impairing their usage in a direct and simple way.
A seminal example lies in the advertising industry, where consented data about browsing and shopping habits of individual users are distributed and owned by several websites including advertisers' and publishers' ones, each of them holding a set of observations with either similar or complementary features.
By collaborating with each other and pooling their individual data, these websites could learn better ML models for their applications than by only leveraging their local data.
However, such collaboration naturally raises questions regarding the additional (positive or negative) values each party would obtain by participating in this joint machine-learning effort.
In order to compute or estimate compensating rewards that incentivise parties to share data, a first stage commonly considered in the literature is to perform so-called dataset valuation <cit.>.
Dataset valuation aims at quantifying the marginal contribution of a specific dataset to a given ML task with respect to (w.r.t.) datasets brought by other data owners.
Motivated by natural properties expected for equitable data valuation, value notions from cooperative game theory <cit.> have been leveraged to achieve this objective, including the core <cit.>, the Shapley value <cit.> or the Banzhaf value <cit.>.
Among the latter, the Shapley value has received considerable attention and is arguably the most widely studied data valuation scheme, since it is the unique value notion that satisfies a set of four important axioms <cit.>.
For instance, under the specific setting where all parties only possess one datum, several works leveraged the Shapley value or close variants to measure the average change in a trained ML model's performance when a particular datum is removed e.g., <cit.>, <cit.>, <cit.> or <cit.>.
To cope with the computational intractability of the Shapley value, the typical technique consists in using Monte Carlo integration, possibly enhanced with variance reduction techniques e.g., antithetic sampling <cit.>.
However, such generic approximation strategies might still remain expensive in some cases.
It is notably the case in ML applications involving complex models, since each sample requires re-training the model, hence drastically limiting the number of Monte Carlo samples that can be drawn.
Instead of relying on generic Monte Carlo approximation schemes of the Shapley value, we aim at finding more efficient estimators by leveraging the actual underlying structure of the dataset valuation problem.
More precisely, we propose to explicitly exploit the dependence of the utility function, used to define the Shapley value, on the number of data points of a given dataset.
This leads to a new Shapley value approximation, referred to as discrete uniform Shapley (DU-Shapley), which stands for an expectation under a discrete uniform distribution whose support size corresponds to the number of data owners.
Compared to computing the exact Shapley value of a given dataset, DU-Shapley yields an exponential reduction in the number of utility evaluations.
Interestingly, DU-Shapley admits the appealing property of converging almost surely towards the exact Shapley value when the number of data owners increases; a setting where generic Monte Carlo strategies fail to provide good estimates under a limited budget.
On the other hand, for a fixed number of data owners and under mild assumptions on the utility function, we show that the error between DU-Shapley and the Shapley value is bounded and depends explicitly on key quantities of the dataset valuation problem, namely (i) the average number of data points in parties' datasets and the associated variance, and (ii) constants associated with the growth of the utility function w.r.t. the dataset size.
Contributions. We summarise our main contributions as follows:
* We propose DU-Shapley, an efficient proxy of the Shapley value to perform dataset valuation. This is the first dataset valuation approach leveraging the specific structure of the utility function.
* We provide both asymptotic and non-asymptotic guarantees for DU-Shapley, showing notably that it converges almost surely towards the Shapley value in the specific regime where the number of parties is large.
* We instantiate and justify all our statements and assumptions using a running example, standing for a linear regression problem commonly studied in the literature <cit.>.
In particular, we obtain closed-form expressions regarding the dependence of the utility function w.r.t. the dataset size.
* We assess the benefits of the proposed methodology using numerical experiments on both toy and real-world dataset valuation problems. In particular, we show that DU-Shapley outperforms generic Monte Carlo approximations of the Shapley value and their variants.
Related Works.
The Shapley value has been recently applied to several ML problems including variable selection <cit.>, feature importance <cit.>, model interpretation <cit.> or to provide data sharing incentives in collaborative ML problems <cit.>.
Regarding datum or dataset valuation, several approaches have been considered in the literature.
The main lines of work include Shapley-based methods <cit.>, leave-one-out-based methods using influence functions <cit.>, the use of other solutions concepts from cooperative game theory by relaxing the efficiency axiom verified by the Shapley value <cit.>, and volume-based methods <cit.>.
Conventions and Notations.
For n ≥ 1, we define [n]:={1,…,n}.
The d-multidimensional Gaussian probability distribution with mean μ∈ℝ^d and covariance matrix Σ∈ℝ^d × d is denoted by N(μ,Σ).
The Uniform distribution over a set 𝒜 is referred to as U(𝒜).
§ PRELIMINARIES
This section presents the dataset valuation problem we aim at solving, along with preliminaries including the definition of the Shapley value.
§.§ Problem Formulation
We consider a collaborative machine learning setting involving a set ℐ of I=|ℐ| ∈ℕ^* data owners (also referred to as players in the sequel) who are willing to cooperate in order to solve a common machine learning problem.
These I players are assumed to possess individual datasets {D_i}_i ∈ℐ such that, for any i ∈ℐ, D_i = {(x_i^(j),y_i^(j))}_j ∈ [n_i] where x_i^(j) stands for a feature vector, y_i^(j) is a label and n_i = |D_i| refers to the number of data points in D_i.
We are interested in quantifying the incremental contribution that a given player i ∈ℐ brings by sharing its dataset D_i with other players towards solving the ML task at stake.
To meet this objective, we assume the availability of a trusted central entity that aims at collecting and aggregating data from these players to quantify the value of their individual datasets.
To measure the individual value of a dataset, many valuation metrics, such as the Shapley value, compute the marginal contribution of a given player i to an existing coalition of players 𝒮⊆ℐ, defined by u(𝒮∪{i}) - u(𝒮), where u: 2^I →ℝ stands for a utility function.
For any coalition 𝒮⊆ℐ of players, u(𝒮) quantifies how well players in 𝒮 could solve the considered ML task.
As an example, for a classification problem, u(𝒮) is typically chosen as the prediction accuracy, calculated on a hold-out testing dataset and associated to the best model the parties in 𝒮 could train by aggregating their training data.
Without loss of generality, we assume in the sequel that u(∅) = 0.
Running Example: Linear Regression. In order to ease understanding and illustrate our statements, we describe the following running example, that corresponds to a simple linear regression problem. This example was used for instance in <cit.> to characterise players' incentives regarding data sharing under the federated learning paradigm.
For any i ∈ℐ, we consider the following generative linear model regarding the dataset D_i:
Y_i = X_iθ + η_i, η_i ∼N(0_n_i,ε_i^2I_n_i),
x_i^(j)∼ p_X, for any j ∈ [n_i], and ε_i ∼ p_ε,
where θ∈^d is a ground-truth parameter, X_i ∈^n_i × d is defined by X_i = ([x_i^(1)]^⊤,…,[x_i^(n_i)]^⊤)^⊤ and Y_i ∈^n_i is defined by Y_i = (y_i^(1),…,y_i^(n_i))^⊤.
For the sake of simplicity, both the one-dimensional p_ε distribution and the d-dimensional distribution p_X are common to the I players.
Under the linear regression framework defined in (<ref>), the utility function of a set 𝒮⊆ℐ of players is defined by the negative expected mean square error over a hold-out dataset, that is
u(𝒮) = -𝔼[(x^⊤θ̂_𝒮 - x^⊤θ)^2],
where the expectation is taken over the distribution p_X^test of a hold-out testing datum x ∈^d and θ̂_𝒮, defined by θ̂_𝒮 = ( X_𝒮^⊤ X_𝒮)^-1 X_𝒮^⊤ Y_𝒮, stands for the maximum likelihood estimator.
The notations X_𝒮 and Y_𝒮 refer to the concatenation of {X_i}_i ∈𝒮 and {Y_i}_i ∈𝒮, respectively.
Similarly to <cit.>, we chose to define the utility as a negative prediction error since there is no equivalent notion of accuracy for regression tasks.
§.§ Shapley Value
Definition. The Shapley value <cit.> is a classical solution concept in cooperative game theory to allocate the total gains generated by a
coalition of players.
Given a utility function u, the Shapley value of a player i is defined as the average marginal
contribution of her dataset D_i to all possible subsets of D_-i = {D_j}_j ∈ℐ∖{i} built by aggregating datasets of other players.
Formally, the Shapley value φ_i of player i writes
φ_i(u) = 1/|Π(ℐ)|∑_π∈Π(ℐ) [u(𝒫_i^π∪{i}) - u(𝒫_i^π)],
where Π(ℐ) refers to the set of permutations over ℐ and 𝒫_i^π denotes the set of predecessors of player i ∈ℐ in permutation π∈Π(ℐ).
The Shapley value of player i can be equivalently expressed as
φ_i(u) = 1/I∑_𝒮⊆ℐ∖{i} 1/binom(I-1,|𝒮|) [u(𝒮∪{i}) - u(𝒮)].
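To make the subset formulation concrete, here is a minimal Python sketch of the exact computation by brute-force enumeration of all coalitions; it is only tractable for a handful of players, and the utility encoding (a function of a frozenset of player indices, with u(∅)=0) is an illustrative assumption rather than the authors' implementation.

```python
# Exact Shapley values by enumerating every subset of the other players;
# the number of terms grows as 2^(I-1), so this is only viable for small I.
from itertools import combinations
from math import comb

def exact_shapley(utility, n_players):
    """utility: maps a frozenset of player indices to a float, with utility(frozenset()) == 0."""
    shapley = []
    for i in range(n_players):
        others = [j for j in range(n_players) if j != i]
        phi = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                phi += (utility(s | {i}) - utility(s)) / comb(n_players - 1, k)
        shapley.append(phi / n_players)
    return shapley

# Toy check: for an additive utility, the Shapley value of i is its own weight.
weights = [1.0, 2.0, 3.0]
print(exact_shapley(lambda s: sum(weights[j] for j in s), 3))  # ~ [1.0, 2.0, 3.0]
```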
The Shapley value has been commonly used for data valuation and more generally in cooperative game theory as it uniquely satisfies the following set of desirable properties.
* Efficiency. ∑_i=1^I φ_i(u) = u(ℐ). The sum of Shapley values for each data owner is the value of the grand coalition ℐ.
* Symmetry. If, for any 𝒮⊆ℐ∖{i_1,i_2}, u(𝒮∪{i_1}) = u(𝒮∪{i_2}), then φ_i_1(u) = φ_i_2(u). If two players have the same marginal effect on each coalition, their Shapley values coincide.
* Dummy. If, for any 𝒮⊆ℐ∖{i}, u(𝒮∪{i}) = u(𝒮), then φ_i(u) = 0. The player whose marginal impact is always zero has a Shapley value of zero.
* Linearity. φ_i(u_1 + u_2) = φ_i(u_1) + φ_i(u_2). The Shapley values of sums of games are the sum of the Shapley values of the respective games.
Monte Carlo Approximation.
Evaluation of the Shapley value is known to be computationally expensive in general <cit.>.
As such, many works <cit.> proposed to approximate it via Monte Carlo by sampling with replacement T terms from the sum of either (<ref>) or (<ref>).
Regarding (<ref>), this boils down to considering the estimator
φ̂_i(u) = 1/T∑_t=1^T [u(𝒫_i^π_t∪{i}) - u(𝒫_i^π_t)],
where π_t ∼U(Π(ℐ)).
By using Hoeffding’s bound <cit.>, one can show that a lower bound on the
number of permutations T such that ℙ(‖φ_i(u) - φ̂_i(u)‖_2 ≤ε) ≥ 1 - δ is given by T_perm(ε,δ) = (2 r_u^2 I/ε^2) log(2I/δ), where ε, δ∈ (0,1) and r_u = max_𝒮_1, 𝒮_2 ⊆ℐ{|u(𝒮_1) - u(𝒮_2)|} is the range of the utility function u.
Note that T_perm(ε,δ) = O(I log(I)), which is much lower than the 2^I terms involved in the sum of the Shapley value in (<ref>) in the regime where the number of players I is large.
Such a Monte Carlo approximation has for instance been used in the seminal paper introducing Shapley-based datum valuation in ML <cit.>.
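For comparison, a minimal sketch of the permutation-sampling estimator (<ref>); the utility encoding follows the sketch above, and the toy check at the end is purely illustrative.

```python
# Monte Carlo estimate of the Shapley value of player i from T random permutations:
# average the marginal contribution of i to the set of its predecessors.
import random

def mc_shapley(utility, n_players, i, T, seed=0):
    rng = random.Random(seed)
    players = list(range(n_players))
    total = 0.0
    for _ in range(T):
        perm = players[:]
        rng.shuffle(perm)
        predecessors = frozenset(perm[:perm.index(i)])
        total += utility(predecessors | {i}) - utility(predecessors)
    return total / T

weights = [1.0, 2.0, 3.0]
u = lambda s: sum(weights[j] for j in s)
print(mc_shapley(u, 3, i=2, T=2000))  # close to 3.0
```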
§ PROPOSED APPROACH
In this section, we revisit the generic definition of the Shapley value in (<ref>)-(<ref>) by exploiting the dependence of the utility function on the number of data points brought by a set of players.
This leads to the introduction of a novel proxy of the Shapley value, referred to as DU-Shapley. All proofs are postponed to the supplementary material.
§.§ A Novel Perspective of the Shapley Value for Dataset Valuation
In Section <ref>, we have considered a generic but commonly used definition for the Shapley value using a utility function u defined over a set of players.
However, recall that players are cooperating by sharing their datasets {D_i}_i ∈ℐ.
The latter contribute to the performance of the considered ML model via their cardinalities {n_i}_i ∈ℐ and their quality (e.g., defined by the discrepancy between the distribution of D_i and that of the hold-out testing dataset).
For the sake of clarity of exposition, we shall consider in the remainder that the qualities of the I datasets are identical, so that the only lever of performance for the ML model is the number of data points brought by each player.
This statement is illustrated below using the running example described in Section <ref>.
Running Example. Under mild assumptions on p_X and p_X^test, the following result shows that a set of players 𝒮 only contributes to the utility u(𝒮) via its aggregated number of data points n_𝒮 = ∑_i ∈𝒮 n_i.
Define C = 𝔼_x ∼ p_X^test[x x^⊤] and assume that p_X = N(0_d, Σ) where Σ∈ℝ^d × d is a positive definite matrix.
Then, whenever n_𝒮 > d + 1,
u(𝒮) = σ^2_ε/(d+1 - n_𝒮) · Tr(C·Σ^-1),
where u is defined in (<ref>) and σ_ε^2 = 𝔼_ε∼ p_ε[ε^2 ].
For the specific choice p_X^test = p_X then, whenever n_𝒮 > d + 1, we have,
u(𝒮) = dσ^2_ε/(d+1-n_𝒮).
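As a sanity check of this closed form, the following sketch simulates the OLS prediction error for the special case p_X^test = p_X = N(0_d, I_d) and compares it with dσ_ε^2/(n_𝒮 - d - 1), i.e. the magnitude of u(𝒮); the sample sizes and seed are arbitrary illustration choices.

```python
# Empirical check of E[(x^T theta_hat - x^T theta)^2] = d sigma_eps^2 / (n - d - 1)
# for OLS with Gaussian features, i.e. the magnitude of u(S) with n_S = n.
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma_eps = 10, 60, 1.0
theta = rng.normal(size=d)

errors = []
for _ in range(400):                          # independent training sets
    X = rng.normal(size=(n, d))
    y = X @ theta + sigma_eps * rng.normal(size=n)
    theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    x_test = rng.normal(size=(2000, d))       # hold-out test points, p_X^test = p_X
    errors.append(np.mean((x_test @ (theta_hat - theta)) ** 2))

print("empirical  :", np.mean(errors))
print("closed form:", d * sigma_eps**2 / (n - d - 1))
```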
Such a result motivates a refined version of the utility function u taking into account the explicit dependence on the aggregated number of data points associated to a set of players.
As such, we introduce a function w: ℝ_+ →ℝ such that, for any 𝒮⊆ℐ,
u(𝒮) = w(n_𝒮).
Under this re-parametrisation, the Shapley value defined in (<ref>) can be equivalently written as
φ_i(u)
= 1/I∑_k=0^I-1∑_𝒮⊆ℐ∖{i}: |𝒮| = k 1/binom(I-1,k) [w(n_𝒮 + n_i) - w(n_𝒮)]
= 𝔼_K ∼U({0,…,I-1})𝔼_𝒮∼U([2^ℐ∖{i}_K])[w(n_𝒮 + n_i) - w(n_𝒮)],
where the first equality is obtained by re-arranging all the possible sets of players by their cardinality k ∈{0,…,I-1}, and the second one by introducing the notation 2^ℐ∖{i}_K to refer to all subsets of ℐ∖{i} of cardinality K, with K ∼ U({0,...,I-1}).
§.§ Discrete Uniform Shapley Value
Equation (<ref>) explicitly reveals a key random variable, namely n_𝒮, obtained by first drawing uniformly a set cardinality k, then drawing a subset 𝒮 of k players uniformly from ℐ∖{i}, and finally setting n_𝒮 = ∑_j ∈𝒮 n_j.
Interestingly, we show in the next result that this random variable tends towards a uniform distribution as the number of players grows.
Let n = {n_i}_i∈ [I] be a sequence of positive integer numbers such that the following limits exist,
lim_I→∞1/I∑_j=1^In_j= μ and lim_I→∞1/I∑_j=1^I (n_j -μ)^2=σ^2,
where μ, σ > 0.
For any i ∈ℐ, let K ∼U({0,…,I-1}) and 𝒮_K^(i)∼U([2^ℐ∖{i}_K]); and define n_𝒮_K^(i) = ∑_j ∈𝒮_K^(i) n_j to be the random variable corresponding to the total number of data points brought by the random set 𝒮_K^(i) of K players.
Then, for any i ∈ℐ, it holds,
n_𝒮_K^(i)/∑_j ∈ℐ∖{i}n_j converges in distribution to U([0,1]), almost surely.
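A quick simulation sketch of this convergence (complementary to the figure referenced below), dropping the distinguished player i for simplicity: draw K uniformly, sample a size-K subset of dataset sizes without replacement, and inspect the quantiles of the normalised sum. All numerical choices are illustrative.

```python
# Empirical illustration that n_S / sum_j n_j is approximately U([0,1]) when I is large:
# draw K ~ U({0,...,I-1}), then a uniformly random subset of size K (without replacement).
import numpy as np

rng = np.random.default_rng(0)
I = 200
sizes = rng.integers(10, 1000, size=I).astype(float)   # heterogeneous dataset sizes
total = sizes.sum()

samples = np.empty(20_000)
for m in range(samples.size):
    k = rng.integers(0, I)                              # K in {0, ..., I-1}
    subset = rng.choice(I, size=k, replace=False)
    samples[m] = sizes[subset].sum() / total

# For a U([0,1]) limit, the empirical deciles should be close to 0.1, 0.2, ..., 0.9.
print(np.round(np.quantile(samples, np.linspace(0.1, 0.9, 9)), 3))
```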
Such a result, illustrated in Figure <ref>, motivates approximately treating n_𝒮 in (<ref>) as a uniform random variable supported on {0,…,∑_j ∈ℐ∖{i}n_j}.
Albeit smaller than the number of subsets of ℐ, this new support may still become intractably large when the aggregated number of data points of all players is large.
Building upon the fact that n_𝒮 has approximately a uniform distribution over this support, we are naturally led to approximate the Shapley value in (<ref>) by (i) computing the quantities w(n_𝒮 + n_i) - w(n_𝒮) with n_𝒮 taken on a regularly-spaced grid between 0 and ∑_j ∈ℐ∖{i}n_j and, then, (ii) averaging them.
A natural candidate for the number of discretisation bins is the total number of players when excluding the i-th one, i.e. I-1.
Indeed, whenever all agents have the same number of data points, i.e., n_j = n, for any j ∈ℐ, the random variable n_𝒮 is uniformly distributed on {0,n, 2n,...,(I-1)n}.
This leads to the proposal of a novel approximation for the Shapley value defined hereafter.
For any i ∈ℐ, the discrete uniform Shapley value (DU-Shapley) of the i-th player, denoted by ψ_i, is defined by
ψ_i(u) = 1/I∑_k=0^I-1[w(kμ_-i + n_i) - w(kμ_-i)],
where μ_-i = 1/I-1∑_j ∈ℐ∖{i}n_j.
Compared to the Shapley value defined in (<ref>), which involves 2^I terms to compute, note that DU-Shapley only involves I terms, and hence an exponential reduction in the number of utility evaluations.
Of course, these computational savings come at the cost of some bias.
The latter is quantified precisely in the next section.
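A minimal sketch of (<ref>): given the vector of dataset sizes and a size-based utility w, the DU-Shapley value of each player requires only I evaluations of w. The function names are ours, and the saturating utility used in the toy call is a generic illustration, not the running example.

```python
# DU-Shapley proxy: psi_i = (1/I) sum_{k=0}^{I-1} [w(k*mu_-i + n_i) - w(k*mu_-i)],
# with mu_-i the average dataset size over the other players.
import numpy as np

def du_shapley(w, sizes):
    """w: vectorised utility of the aggregated number of points; sizes: n_1,...,n_I."""
    sizes = np.asarray(sizes, dtype=float)
    I = len(sizes)
    psi = np.empty(I)
    k = np.arange(I)                                   # k = 0, ..., I-1
    for i in range(I):
        mu_minus_i = (sizes.sum() - sizes[i]) / (I - 1)
        psi[i] = np.mean(w(k * mu_minus_i + sizes[i]) - w(k * mu_minus_i))
    return psi

w = lambda n: n / (n + 100.0)        # generic increasing, saturating utility (illustration only)
print(du_shapley(w, [50, 100, 200, 400, 800]))
```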
§.§ Non-Asymptotic Theoretical Guarantees
In order to provide statistical guarantees on the described procedure, we need to make some structural assumptions. Precisely, we are going to consider utility functions satisfying the following:
Let w: ℝ_+ →ℝ be such that,
* the function w is increasing;
* the function w is twice continuously differentiable and its second derivative w^(2) satisfies lim_n →∞ n^2|w^(2)(n)| < ∞.
The first assumption is not restrictive, as we expect that the more data, the more precise the ML prediction will be.
The second one aims at controlling the growth behavior of the utility function w; it is also not that restrictive, as it is for instance automatically satisfied if w is bounded (and w^(2) monotone), thanks to the mean value theorem.
Under, these assumptions, we have the following non-asymptotic result:
Assume H<ref>.
Then, there exists a constant ρ > 0 such that, for any i ∈ℐ, the approximation error of DU-Shapley for player i ∈ℐ is upper bounded by
|φ_i - ψ_i | ≤ρ |w(n_ℐ∖{i})|/((I-1) μ_-i^2) · (9σ_-i^2 (1+ln(I-1)) + 2R_-i^2 n^max_-i),
where φ_i and ψ_i are defined in (<ref>) and (<ref>), respectively; and where μ_-i = n_ℐ∖{i}/(I-1), σ^2_-i = (1/(I-1))∑_j∈ℐ∖{i} (n_j-μ_-i)^2, R_-i := max_j ∈ℐ∖{i} |n_j - μ_-i|, and n^max_-i := max_j ∈ℐ∖{i} n_j.
It is worth mentioning that the upper bound in (<ref>) depends on natural quantities related to the dataset valuation problem described in Section <ref>.
Indeed, it depends on the first two moments μ_-i and σ_-i of the datasets' size distribution.
More precisely, the error increases when there are some outlier players with a very small or large dataset size.
This behavior is expected since, in this particular setting, the random variable n_𝒮 defined in Section <ref> differs from a uniform random variable.
In addition, we can interestingly observe that the error vanishes in two specific regimes, namely (i) when the number of players I tends towards infinity as showcased in Theorem <ref>, and (ii) when all players have the same number of data points.
Running Example. We emphasise that Assumption H<ref> is verified by our running example.
Recall from (<ref>) that w(n) = dσ_ε^2 / (d+ 1 - n).
Then, w is indeed increasing, twice differentiable with second derivative given by w^(2)(n) = 2dσ_ε^2 / (d+ 1 - n)^3, which satisfies H<ref>-<ref>.
It is worth mentioning that computing the DU-Shapley value of a player i ∈ℐ requires I function evaluations. As pointed out in Section <ref>, the Monte Carlo approximation incurs an approximation error of ε(T) (with probability 1-δ), with T being the number of sampled permutations, equal to ε(T) = (2 r_w^2 I/T) log(2I/δ). In particular, fixing the sampling budget to I, i.e., equal to the number of terms needed to compute DU-Shapley, the Monte Carlo approximation error becomes
ε(I) = 2w(n_ℐ)^2log(2I/δ),
where we have supposed that 0 = w(n_∅) ≤ w(n_𝒮) ≤ w(n_ℐ) for any 𝒮⊆ℐ, so that r_w = w(n_ℐ). Notice that ε(I) is increasing with I, while the bias of DU-Shapley in Theorem <ref> decreases as the number of players grows.
Figure <ref> compares (<ref>) with DU-Shapley's bias (<ref>) as the number of players increases.
By considering the same computational budget for both approaches, we can observe that the Monte Carlo approximation of the Shapley value is associated with a larger approximation error than DU-Shapley when the number of players becomes sufficiently large.
This result confirms that DU-Shapley is indeed relevant for dataset valuation in this regime.
§ NUMERICAL EXPERIMENTS
In this section, we illustrate the benefits of our methodology on several dataset valuation benchmarks associated to both synthetic and real data.
More precisely, we aim at assessing numerically how well the proposed methodology approximates the Shapley value by considering two experiments.
The first one is associated to synthetic toy datasets.
On the other hand, the second experiment considers classical real-world datasets associated to both classification and regression problems, which have been considered in <cit.>.
For tractability reasons, it is unfortunately not possible to compute the exact Shapley values when I is large, so we can only measure approximation errors for small values of I, say smaller than 20. This case is actually the least favorable for DU-Shapley, whose approximation improves with the number of players. The purpose of this section is to illustrate that, even in those “worst-case” instances for DU-Shapley, the latter performs as well as (or even better than) state-of-the-art techniques that would not benefit from a larger number of players (as expected in motivating applications).
§.§ Synthetic Data
We consider a toy dataset valuation problem associated to the linear regression problem of our running example.
The corresponding utility function is defined in (<ref>), and we set d=10 and σ_ε = 1. In order to benchmark the performance of DU-Shapley, we consider four competing approaches relying on Monte Carlo (MC) approximation strategies <cit.>. The first one is the standard MC approximation defined in (<ref>).
The second one is a variance-reduced version of the latter that uses antithetic sampling.
The third one builds upon the multilinear extension of <cit.>, which represents the Shapley value as two nested expectations.
Finally, the fourth approach relies on efficient permutation sampling techniques on the hypersphere to draw the permutations in (<ref>) in a dependent way <cit.>.
All these approaches are detailed in the supplementary material.
To assess the performance of the aforementioned Shapley value estimators, we used the mean square error (MSE) averaged over all players.
For DU-Shapley, we used (<ref>); for each MC-based estimator, we performed 25 estimations to compute the MSE, and repeated this procedure 10 times to obtain confidence intervals for the MSE.
Figure <ref> depicts associated results for various dataset size distributions and numbers of players I ∈{5,10,20}; we could not go beyond I=20 because the computation of the exact Shapley value needed for the MSE became intractable.
The top row corresponds to a setting where the dataset size is sampled from a uniform distribution U({10,…,10^3}).
We can see that, even for the small numbers of players considered, DU-Shapley provides competitive Shapley value approximations compared to MC-based approaches for an equivalent budget, i.e. for a number of value function evaluations equal to I (illustrated by the vertical black line).
On the other hand, the bottom row corresponds to a setting where the dataset size of player i ∈ [I] is 2^i, leading to a scenario where the maximum discrepancy between datasets' sizes becomes large.
As highlighted by Theorem <ref>, this scenario is not favorable to DU-Shapley.
To address this issue, we propose an enhanced version of the proposed methodology, referred to as DU-Shapley++ and defined by
ψ_i^++(w) = 1/I(w(n_i) + w(n_ℐ) - w(n_ℐ∖{i})) + 1/I∑_k=1^I-2[w(kμ_-i + n_i) - w(kμ_-i)],
which corresponds to treating the extreme cases separately and approximating the rest of the terms with an expectation associated to a discrete uniform random variable; this technique can be seen as finding a different bias/variance tradeoff in the estimation of the Shapley value.
An illustration of why considering extreme cases might benefit our approximation is given in Figure <ref> (see left figure).
Again, we note that DU-Shapley++ competes with MC-based approaches at an equal budget, even in the unfavorable approximation regime where the number of players is small.
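For completeness, a sketch of (<ref>); it differs from the DU-Shapley sketch given earlier only in the exact treatment of the empty and full coalitions, at the same number of utility evaluations. The utility and the dataset sizes in the toy call are illustrative only.

```python
# DU-Shapley++: treat the empty and full coalitions exactly and keep the
# discrete-uniform approximation for the intermediate cardinalities k = 1,...,I-2.
import numpy as np

def du_shapley_pp(w, sizes):
    sizes = np.asarray(sizes, dtype=float)
    I, n_total = len(sizes), sizes.sum()
    k = np.arange(1, I - 1)                            # k = 1, ..., I-2
    psi = np.empty(I)
    for i in range(I):
        mu_minus_i = (n_total - sizes[i]) / (I - 1)
        exact_ends = w(sizes[i]) + w(n_total) - w(n_total - sizes[i])
        middle = np.sum(w(k * mu_minus_i + sizes[i]) - w(k * mu_minus_i))
        psi[i] = (exact_ends + middle) / I
    return psi

w = lambda n: n / (n + 100.0)            # same illustrative utility as before
sizes = [2**j for j in range(1, 8)]      # strongly heterogeneous dataset sizes
print(du_shapley_pp(w, sizes))
```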
§.§ Real-World Data
In this second experiment, we consider real-world datasets, also considered in <cit.>, and whose details are provided in Table <ref>.
To tackle these problems, we are considering logistic regression models and gradient-boosted decision trees (GBDT).
For classification tasks, the utility function has been taken as the expected accuracy of the trained logistic regression model over a hold-out testing set corresponding to 10% of the size of the training dataset.
For regression tasks, the utility function corresponds to the averaged MSE over a hold-out testing set corresponding to 10% of the training dataset.
For each dataset, we considered two worst-case scenarios for benchmarking the proposed methodology, namely I=10 players and I=20 players.
Starting from the initial dataset in Table <ref>, we assign a subset of random size to each player.
As in Section <ref>, we consider several competitors leveraging MC approximation strategies.
Since standard data valuation approaches in the ML literature, such as <cit.>, only consider simple MC sampling based on permutation sampling, we compare ourselves with such permutation-sampling MC estimators, which correspond to standard baselines.
To assess the benefits of the proposed methodology, namely DU-Shapley and DU-Shapley++, we compute, as in Section <ref>, the averaged MSE across all players between the true Shapley value and each estimator.
In contrast to Section <ref>, where we considered a utility function in closed form that did not require re-training an ML model, evaluating the two utility functions considered in this experiment requires re-training each ML model.
Regarding the computation of the Shapley value in (<ref>), it is clearly not feasible to train a ML model for a large number of epochs.
As such, we chose to restrict ourselves to 20 steps of stochastic gradient descent for logistic regression and 20 boosting iterations for GBDTs.
For MC-based approaches, we only considered I MC samples to compare those approximations with the proposed methodology on a fair basis, i.e. associated to the same computational budget.
Table <ref> depicts the results. Again, we clearly see that, even in the worst-case scenario where the number of players is small, DU-Shapley already competes favorably with MC-based approximations.
Datasets considered in Section <ref>.
Dataset Size d Task
adult <cit.> 48,842 107 classif.
breast-cancer <cit.> 699 30 classif.
bank <cit.> 45,211 16 classif.
cal-housing <cit.> 20,640 8 regression
make-regression <cit.> 1,000 10 regression
year <cit.> 515,345 90 regression
§ CONCLUSION
We proposed a general dataset valuation methodology based on the Shapley value, by exploiting the underlying structure of the utility function.
The proposed framework, referred to as DU-Shapley and DU-Shapley++, allows for efficient dataset valuation, especially in the usual setting where many data owners are willing to collaborate by sharing their data.
In addition, we have shown that DU-Shapley has favorable convergence properties via both asymptotic and non-asymptotic results, and we have justified and illustrated those claims on a standard running example.
Interestingly, our numerical experiments showcase that the proposed methodology also competes with other state-of-the-art Shapley value approximations when the number of data owners is small; a regime where the bias of our approximation does not vanish.
Finally, some limitations associated to the proposed methodology pave the way for more advanced dataset valuation techniques.
As an example, we could extend DU-Shapley to heterogeneous data scenarios where the quantity, quality, and types of data points brought by players all play a role in the utility function.
This envisioned setting would imply, in particular, considering multi-dimensional utility functions taking into account the size of the datasets and key parameters associated with their local distributions.
As a seminal example, this generalisation of our methodology would make it possible to provide data sharing incentives under the federated learning paradigm, where the local data distributions of players could widely differ.
SUPPLEMENTARY MATERIAL
–
DU-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation
Notations and conventions.
We denote by ℬ(ℝ^d) the Borel σ-field of ℝ^d, by 𝕄(ℝ^d) the set of all Borel measurable functions f on ℝ^d, and by ‖·‖ the Euclidean norm on ℝ^d.
For the sake of simplicity, with little abuse, we shall use the same notations for
a probability distribution and its associated probability density function.
For n ≥ 1, we refer to the set of integers between 1 and n with the notation [n].
The d-multidimensional Gaussian probability distribution with mean μ∈ℝ^d and covariance matrix Σ∈ℝ^d × d is denoted by N(μ,Σ).
Equations of the form (1) (resp. (S1)) refer to equations in the main paper (resp. in the supplement).
§ ADDITIONAL DETAILS REGARDING THE RUNNING EXAMPLE
In this section, we provide additional details regarding the running example which has been considered in the main paper, see Section <ref>.
Note that we are considering a more general setting than in the main paper, which obviously encompasses the specific instance associated to the running example.
§.§ Problem formulation
Context. For the sake of completeness, we first briefly recall the problem we are considering.
We are investigating a framework involving I ∈ℕ^* players respectively owning a local dataset D_i= {x_i^(j),y_i^(j)}_j=1^n_i of size n_i = |D_i|, where for any j ∈ [n_i], x_i^(j)∈𝖷⊆ℝ^d and y_i^(j)∈𝖸⊆ℝ.
For any i ∈ [I], we consider the following generative linear model regarding the dataset D_i owner by the i-th player:
Y_i = X_iθ + η_i, η_i ∼N(0_n_i,ε_i^2I_n_i),
x_i^(j)∼ p_X, ∀ j ∈ [n_i] ,
ε_i ∼ p_ε,
where X_i ∈^n_i × d is defined by X_i = ([x_i^(1)]^⊤,…,[x_i^(n_i)]^⊤)^⊤ and Y_i ∈^n_i is defined by Y_i = (y_i^(1),…,y_i^(n_i))^⊤.
For the sake of simplicity, note that we do not consider further sources of heterogeneity across players such as feature spaces with heterogeneous semantic and dimension (e.g. one player having image features and another one text features).
Loss function. Based on the data brought by each player, we are interested in learning a linear prediction function g_θ: x ↦ x^⊤θ, where θ∈Θ⊆^d stands for a weight vector. This boils down to finding an estimator θ̂ of θ based on {D_i}_i ∈ [I].
Without loss of generality, we assume that θ̂ is found by minimising a weighted sum of empirical risk functions given by
F(θ) = λ h(θ) + ∑_i=1^I α_i f_i(θ),
where, λ≥ 0, h:Θ→ is a regularisation term and for any i ∈ [I], α_i ∈ [0,1] are weights such that ∑_i ∈ [I]α_i = 1 and f_i: Θ→ only depends on D_i.
We define
σ^2_ε = ∫_ℝ_+ε^2 p_ε(ε),
X = [(α_1/n_1)^1/2 X_1^⊤,…,(α_I/n_I)^1/2 X_I^⊤]^⊤,
Y = [(α_1/n_1)^1/2 Y_1^⊤,…,(α_I/n_I)^1/2 Y_I^⊤]^⊤.
The following proposition provides a closed-form expression of the minimum of F in (<ref>) denoted by θ_1:I^⋆ when the functions {f_i}_i ∈ [I] are chosen to be quadratic.
For any i ∈ [I] and θ∈ℝ^d, let f_i(θ) = (1/n_i) ‖Y_i - X_i θ‖^2 and h(θ) = (1/2)‖θ‖^2.
Then, the global minimiser of F defined in (<ref>) writes
θ_1:I^⋆ = λI_d + X^⊤ X^-1 X^⊤ Y, for λ > 0,
θ_1:I^⋆ = X^† Y, for λ = 0,
where {Y_i,X_i}_i ∈ [I] are defined in (<ref>), X in (<ref>), Y in (<ref>), and X^† refers to the Moore-Penrose inverse of the matrix X.
As a sum of differentiable functions, F in (<ref>) is differentiable and its gradient writes for any θ∈^d,
∇ F(θ) = λθ + ∑_i=1^I α_i/n_i X_i^⊤ (X_i θ - Y_i) .
The proof is concluded using the fact that F is strongly convex and by using the first-order guarantee ∇ F(θ_1:I^⋆) = 0_d.
Let n = ∑_i ∈ [I] n_i and consider Y ∈ℝ^n and X ∈ℝ^n × d the vertical concatenations of the I datasets owned by each agent, respectively.
Then, defining F(θ) = (1/n)‖Y - Xθ‖^2 is equivalent to setting, for any i ∈ [I], α_i = n_i/n and f_i(θ) = (1/n_i)‖Y_i - X_i θ‖^2.
§.§ Technical lemmata
We have the following result.
Let λ≥ 0, π be a probability distribution defined on (^d,ℬ(^d)), and C = ∫_ℝ^dx x^⊤π(dx).
In addition, let θ̂ = θ_1:I^⋆ defined in (<ref>) and define X_λ = λI_d + ∑_i=1^I α_i/n_i X_i^⊤ X_i with X_i defined in (<ref>).
If λ > 0, then X_λ is invertible and X_λ^-1 exists with probability 1.
On the other hand, if λ = 0, we additionally assume that X_λ is positive definite with probability 1.
Under these assumptions, consider
E_1:I = ∫(I_d - ∑_i=1^I (α_i/n_i) X_i^⊤ X_i X_λ^-1)θθ^⊤(I_d - X_λ^-1∑_i=1^I (α_i/n_i) X_i^⊤ X_i)⊗_i=1^I ⊗_j=1^n_i p_X(dx_i^(j))
+σ^2_ε∫X_λ^-1(∑_i=1^I(α_i^2/n_i^2) X_i^⊤ X_i)X_λ^-1⊗_i=1^I ⊗_j=1^n_i p_X(dx_i^(j)),
where σ^2_ε is defined in (<ref>).
Then, for any i ∈ [I] and x ∼π, we have
𝔼[(x^⊤θ̂ - x^⊤θ)^2] = Tr(C · E_1:I),
where the expectation is taken over the randomness of x and θ̂.
Let λ≥ 0, i ∈ [I] and x ∼π.
Using (<ref>) and (<ref>), notice that
x^⊤θ - x^⊤θ_1:I^⋆ = x^⊤I_d - λI_d + ∑_i=1^I α_i/n_i X_i^⊤ X_i^-1∑_i=1^I α_i/n_i X_i^⊤ X_i θ
- x^⊤λI_d + ∑_i=1^I α_i/n_i X_i^⊤ X_i^-1∑_i=1^I α_i/n_i X_i^⊤η_i .
By using the notation
X_λ= λI_d + ∑_i=1^I α_i/n_i X_i^⊤ X_i,
the previous element is a scalar, therefore,
(x^⊤θ - x^⊤θ_1:I^⋆)^2
= θ^⊤I_d - ∑_i=1^I α_i/n_i X_i^⊤ X_iX_λ^-1 x x^⊤I_d - X_λ^-1∑_i=1^I α_i/n_i X_i^⊤ X_iθ
- 2θ^⊤I_d - ∑_i=1^I α_i/n_i X_i^⊤ X_iX_λ^-1 x x^⊤X_λ^-1∑_i=1^I α_i/n_i X_i^⊤η_i
+ ∑_i=1^I α_i/n_iη_i^⊤ X_iX_λ^-1 x x^⊤X_λ^-1∑_i=1^I α_i/n_i X_i^⊤η_i ,
where we have used that X_λ is a symmetric matrix, so its inverse is symmetric as well.
We now focus on the third term in the previous equality, namely (<ref>).
Taking the trace operator and using cyclic permutations, we have
∑_i=1^I α_i/n_iη_i^⊤ X_iX_λ^-1 x x^⊤X_λ^-1∑_i=1^I α_i/n_i X_i^⊤η_i
= Tr[∑_i=1^I α_i/n_iη_i^⊤ X_iX_λ^-1 x x^⊤X_λ^-1∑_i=1^I α_i/n_i X_i^⊤η_i]
= Tr[ x x^⊤X_λ^-1∑_i=1^Iα_i/n_i X_i^⊤η_i∑_i=1^I α_i/n_iη_i^⊤ X_iX_λ^-1]
= Tr[ x x^⊤X_λ^-1∑_i=1^Iα_i^2/n_i^2 X_i^⊤η_i η_i^⊤ X_i + ∑_i ≠ jα_i/n_iα_j/n_j X_i^⊤η_i η_j^⊤ X_jX_λ^-1] .
For any k ∈ [n_i] and l ∈ [n_j], notice that the (k,l) entry of the n_i × n_j matrix η_iη_j^⊤ is η_i^(k)η_j^(ℓ) where η_i^(k)∼N(0,ε_i^2) and η_j^(l)∼N(0,ε_j^2).
Therefore, for any (k,l) ∈ [n_i] × [n_j], 𝔼[η_i^(k)] = 𝔼[η_j^(l)] = 0, 𝔼[(η_i^(k))^2] = ε_i^2 and 𝔼[(η_j^(l))^2] = ε_j^2.
It follows, for any i,j ∈ [I] such that i ≠ j, that η_iη_j^⊤ has expected value equal to 0.
By denoting, for any i ∈ [I], p_η^(i) = N(η_i ; 0_n_i,ε_i^2I_n_i), we obtain
∫_^n_1×…^n_I∑_i=1^I α_i/n_iη_i^⊤ X_iX_λ^-1 x x^⊤X_λ^-1∑_i=1^I α_i/n_i X_i^⊤η_i⊗_i ∈ [I] p_η^(i)(η_i)
= Tr[ x x^⊤X_λ^-1∑_i=1^Iα_i^2/n_i^2ε_i^2 X_i^⊤ X_iX_λ^-1].
In addition, we have
∫__+^I∫_^n_1×…^n_I∑_i=1^I α_i/n_iη_i^⊤ X_iX_λ^-1 x x^⊤X_λ^-1∑_i=1^I α_i/n_i X_i^⊤η_i⊗_i ∈ [I] p_η^(i)(η_i)⊗_i ∈ [I] p_ε(ε_i)
= σ^2_εTr[ x x^⊤X_λ^-1∑_i=1^Iα_i^2/n_i^2 X_i^⊤ X_iX_λ^-1],
where σ^2_ε is defined in (<ref>).
For (<ref>) and (<ref>), we follow similar steps.
The proof is concluded by integrating against the probability measures of x and {x_i^(j) ; j ∈ [n_i]}_i ∈ [I], namely π and p_X.
Note that when λ→∞, we have
lim_λ→∞𝔼[(x^⊤θ̂ - x^⊤θ)^2] = Tr(C·θθ^⊤).
Let π be a probability distribution defined on (^d,ℬ(^d)), and let C = ∫_ℝ^dx x^⊤dπ(x).
Set λ = 0, let θ̂ = θ_1:I^⋆ defined in (<ref>) and assume for any i ∈ [I] that α_i = n_i/n.
In addition, suppose that ∑_i=1^I X_i^⊤ X_i is positive definite with probability 1 with X_i defined in (<ref>).
Then, for any i ∈ [I] and x ∼π, we have
𝔼[(x^⊤θ̂ - x^⊤θ)^2] = σ^2_ε·Tr( C ·∫(∑_i=1^I ∑_j=1^n_i x_i^(j) [x_i^(j)]^⊤)^-1⊗_i=1^I⊗_j=1^n_i p_X(dx_i^(j)) ).
The proof directly follows from <Ref>.
A similar proof was presented in <cit.>.
§.§ Proof of Proposition <ref>
For some particular choices of the probability measures p_X and π, the following results show that we can end up with closed-form expressions for the expected mean square error defined in (<ref>).
Let π be a probability distribution defined on (^d,ℬ(^d)), and define C = ∫_ℝ^dx x^⊤dπ(x).
In addition, set λ = 0, let θ̂ = θ_1:I^⋆ defined in (<ref>) and assume for any i ∈ [I] that α_i = n_i/n and p_X = N(0_d, Σ) with Σ∈ℝ^d × d a positive definite matrix.
Then, for any i ∈ [I] and x ∼π, we have for n_1:I > d + 1,
𝔼[(x^⊤θ̂ - x^⊤θ)^2] = σ^2_ε/(n_1:I -d -1) · Tr(C·Σ^-1),
where n_1:I = ∑_i=1^I n_i. For the specific choice π = N(0_d,Σ) then, for n_1:I > d + 1, we have,
𝔼[(x^⊤θ̂ - x^⊤θ)^2] = dσ^2_ε/(n_1:I -d -1).
By definition of the Wishart probability distribution, we have X_i^⊤ X_i ∼Wishart(Σ, n_i).
Therefore, it follows that
X_1:I∼Wishart(Σ, n_1:I).
Since n_1:I≥ d and Σ is invertible, then X_1:I^-1 exists with probability 1 and X_1:I^-1∼Inverse Wishart (Σ^-1, n_1:I). Moreover, we have
𝔼∑_i ∈ [I] X_i^⊤ X_i^-1 = Σ^-1/n_1:I- d - 1,
which concludes the proof by plugging this result in Lemma <ref>, and using the fact that C = Σ and Tr(I_d) = d.
§ PROOF OF OUR MAIN RESULTS
In this section, we prove the major results stated in the main paper, namely Theorems <ref> and <ref>.
§.§ Proof of Theorem <ref>
For the sake of completeness, we recall below the full statement of Theorem <ref>.
Theorem <ref>.
Let n = {n_i}_i∈ [I] be a sequence of positive integer numbers such that the following limits exist,
lim_I→∞1/I∑_j=1^In_j= μ and lim_I→∞1/I∑_j=1^I (n_j -μ)^2=σ^2,
where μ, σ > 0.
For any i ∈ℐ, let K ∼U({0,…,I-1}) and 𝒮_K^(i)∼U([2^ℐ∖{i}_K]); and define n_𝒮_K^(i) = ∑_j ∈𝒮_K^(i) n_j to be the random variable corresponding to the total number of data points brought by the random set 𝒮_K^(i) of K players.
Then, for any i ∈ℐ, it holds,
n_𝒮_K^(i)/∑_j ∈ℐ∖{i}n_j converges in distribution to U([0,1]), almost surely.
Introduce, for any t,t_0 ∈ (0,1) and any s > 0,
μ_-i(I) = 1/I∑_j ∈ℐ∖{i}n_j,
Y(t,I) = n_𝒮_⌊ It⌋^(i),
R^⋆(I,t_0,s) = ℙ( sup_t>t_0| Y(t,I)/⌊ I t⌋ - μ_-i(I)| > s).
By construction, Y(t,I) is the sum of a sampling without replacement of ⌊ It⌋ elements in {n_j : j ∈ℐ∖{i}}. Therefore, by Corollary 1.3 in <cit.>, for s fixed, there exists I_0 ∈ℕ such that,
R^⋆(I,t_0,s)≤(1-t_0)σ^2/⌊ It_0⌋ s^2 + ε, ∀ I≥ I_0.
Hence, for I ≥ I_0 large enough,
R^⋆(I,t_0,s)≤ 2ε,
and then, almost surely,
lim_I→ +∞Y(t,I)/∑_j ∈ℐ∖{i}n_j = t.
We conclude by remarking that K = ⌊ I U⌋ with U∼U([0,1]).
§.§ Proof of Theorem <ref>
To prove Theorem <ref>, we need two preliminary results: Lemma <ref>, which itself needs two supplementary results (Lemmas <ref> and <ref>), and Lemma <ref>, which is directly proved.
Lemma <ref> deduces the existence of the constant ρ stated in Theorem <ref> from H-<ref>.
Lemma <ref> bounds the expected difference between w(n_𝒮_K^(i)) and w(Kμ_-i).
§.§.§ Technical lemmata
Consider a set of I values N = {n_1, …, n_I}. Let X_1, …, X_k and Y_1, …, Y_k denote, respectively, k random samples with and without replacement from N. For any continuous and convex function f, it follows,
𝔼[f(∑_i = 1^k Y_i)] ≤𝔼[f(∑_i = 1^k X_i)]
The proof follows from <cit.>.
Let I ∈, N:= {n_1,… ,n_I}∈_+^I, μ=1/I∑_i=1^I n_i be their mean value and σ^2=1/I∑_i=1^I (n_i-μ)^2 be their variance.
For k ∈{0,…,n}, let 𝒮_k ∼ U({S_k ⊆ [I]: |S_k| = k}) be a uniform random variable on the subsets of {1,…,I} of size k, and n_𝒮_k =∑_i∈𝒮_k n_i be the random variable defined by the sum of the elements of 𝒮_k.
Let K ∼U({0,…,I}) and define 𝐘 = n_𝒮_K. Then,
𝔼[𝐘 - μ K | K = k] = 0,
𝔼[(𝐘 - μ K)^2| K = k]≤ kσ^2.
We prove (<ref>) directly. Notice that,
[| = k] = ∑_S_k ⊆ [I]: |S_k| = k n_S_k1/Ik = 1/Ik∑_S_k ⊆ [I]: |S_k| = k∑_i ∈ S_k n_i
= 1/Ik∑_i ∈ [I]∑_S_k ⊆ [I] : |S_k| = k
i ∈ S_k n_i = 1/Ik∑_i ∈ [I] n_i I-1k -1
= (I-1)!/(k-1)!(I-k)!·(I-k)!k!/I!∑_i ∈ [I] n_i = μ k.
Thus, (<ref>) follows as [μ| = k ] = μ k.
To prove (<ref>), let (_i)_i=1^k be k independent samples from the set N. From Lemma <ref> it holds,
[( - μ)^2| = k] ≤[(μ-∑_i=1^_i)^2 | = k] = [(∑_i=1^(μ- _i))^2 | = k].
Therefore,
[( - μ)^2| = k] ≤[(∑_i=1^∑_j=1^(μ- _i)(μ- _j)) | = k]
= [(∑_i=1^∑_j=1^(μ^2 - μ(_i + _j) + _i_j)) | = k]
= ∑_i=1^k∑_j=1^k (μ^2 - μ([_i| = k] + [_j| = k]) + [_i_j| = k])
= ∑_i=1^k∑_j=1^k (μ^2 - μ([_i] + [_j]) + [_i_j])
= ∑_i=1^k (μ^2 - 2μ[_i] + [_i^2])
+ ∑_i=1^k∑_j=1
j≠ i^k (μ^2 - μ([_i] + [_j]) + [_i][_j])
= ∑_i=1^k [(μ - _i)^2] + ∑_i=1^k∑_j=1
j≠ i^k (μ^2 - 2μ^2 + μ^2)
= ∑_i=1^k [(μ - _i)^2] = ∑_i=1^k Var(μ - _i) = k σ^2.
The steps come from rearranging the terms, using the independence of _i with respect to , the independence of _i, _j for i ≠ j, and finally that [_i] = μ and Var(_i) = σ^2.
Let I∈, N := {n_1,… ,n_I}∈_+^I, and define,
μ = 1/I∑_i=1^I n_i, σ^2 = 1/I∑_i=1^I (n_i-μ)^2,
R := max_i ∈ [I] |n_i - μ|, n^max = max_i∈ [I] n_i.
Consider 𝒮_k, n_𝒮_k, K, and 𝐘 as in Lemma <ref>.
Let w:ℝ_+→ℝ be a smooth and increasing function,
and ρ > 0, such that,
|w^(2)(n)|≤ρ|w(n)|/n^2, ∀ n > 0,
where w^(k) is the k-th derivative of w. Then, it holds,
|𝔼[w(μ K) - w(𝐘)]| ≤ρ |w(μ I)|/(2 μ^2 I)·(9σ^2 (1+ln(I)) + 2R^2 n^max).
The proof considers a second-order Taylor expansion of w at μ k to compute 𝔼[w(μ K) - w(𝐘)]. Noticing that the first-derivative term has a null expected value, the upper bound stated in the Lemma comes from bounding the expected value of the second-derivative term.
The Taylor-Lagrange Theorem on w at μ k>0 provides,
w(y) = w(μ k ) + w^(1)(μ k )(μ k - y ) + w^(2)(τ)(μ k - y )^2 /2,
for some τ between y and μ k. Therefore, there exists a random variable T, almost surely between μ_+ and , such that,
[w() - w(μ_+)] = [ w^(1)(μ_+)(μ_+ -) + 1/2 w^(2)(T)(μ_+ -) ^2].
where _+ corresponds to conditioned to be positive. To avoid overcharging the notation, we drop the index from _+. We observe that,
[w^(1)(μ)(μ -)] = [[w^(1)(μ)(μ -)| = k ] ]
= [w^(1)(μ k) [(μ -)| = k] ] = 0,
by Lemma <ref>, Equation (<ref>). Therefore,
|[w() - w(μ)]| = 1/2| [ w^(2)(T)(μ -)^2]| ≤1/2[ | w^(2)(T) | (μ -) ^2 ]
≤ρ/2[|w(T)|/T^2(μ - )^2 ] ≤ρ |w(Iμ)|/2[1/T^2(μ - )^2 ].
Setting :={|μ -|≤1/2(μ+)}, the previous expected value can be expressed as,
[1/T^2(μ - )^2 ] = [ 1/T^2(μ -) ^2 ·] + [ 1/T^2(μ -) ^2 ·^c].
We deal with each term separately. Notice that, as T is almost surely between and μ,
|μ - | ≤1/2(μ + ) ⟹T≥1/3μ.
Thus,
[ (μ -) ^2/T^2·]
≤[ (μ -) ^2/(μ/3)^2·]
= 9/μ^2∑_k=1^I 1/I·[ (μ k -) ^2/k^2·|=k]
≤9/Iμ^2∑_k=1^I[ (μ k -) ^2/k^2|=k]
≤9/Iμ^2∑_k=1^I kσ^2/k^2
= 9σ^2/I μ^2∑_k=1^I1/k≤9σ^2/Iμ^2·(1+ln(I)).
Regarding the second term, as ≤min{μ,}≤T, we have,
[ (μ -) ^2/T^2·^c]
≤[(R)^2/T^2·^c] ≤[ (R)^2/^2·^c]
= R^2/I∑_k=1^I [1/k^2 k^2 ·^c| =k]
=R^2/I∑_k=1^I ℙ(|μ k -| > 1/2(μ k+)| =k)
≤R^2/I∑_k=1^I ℙ(|μ k -| > μ k/2| =k)
≤R^2/I∑_k=1^I exp(- μ^2 k/2n^max)
= 2R^2n^max/Iμ^2∑_k=1^Iμ^2/2n^maxexp(- μ^2 k/2n^max)
≤2R^2n^max/Iμ^2∫_0^∞μ^2/2n^maxexp(- μ^2 k/2n^max) dk = 2R^2n^max/Iμ^2,
as the integral corresponds to the cumulative distribution function of an exponential random variable of parameter λ = μ^2/2n^max. The upper bound on the theorem's statement is obtained when putting all together.
Let w: ℝ_+ →ℝ_+ be a smooth and increasing function such that
lim_n→∞ n^2 |w^(2)(n)| < ∞.
Then, there exists ρ > 0 such that n^2 |w^(2)(n)|≤ρ w(n).
Notice that the assumptions imply, in particular, that |w^(2)(n)| is bounded. We argue by contradiction. Suppose that for any m > 0, there exists n_m such that
n_m^2 |w^(2)(n_m)| > m w(n_m).
Suppose the sequence (n_m)_m converges to a point n^*. Then,
lim_m→∞ n_m^2 |w^(2)(n_m)| ≥lim_m→∞ m w(n_m) = ∞,
which is a contradiction with |w^(2)(n_m)| being bounded. Therefore, necessarily (n_m)_m has to diverge. However, this implies,
lim_n→∞ n^2 |w^(2)(n)| = lim_m→∞ n_m^2 |w^(2)(n_m)| ≥lim_m→∞ m w(n_m) = ∞,
as w is increasing, obtaining again a contradiction.
§.§.§ Proof of Theorem <ref>
We are ready to prove Theorem <ref>.
Theorem <ref>.
Assume H<ref> holds. Then, there exists a constant ρ > 0 such that, for any i ∈ℐ, the approximation error of DU-Shapley for player i ∈ℐ is upper bounded by
|φ_i - ψ_i | ≤ρ |w(n_ℐ∖{i})|/((I-1) μ_-i^2) · (9σ_-i^2 (1+ln(I-1)) + 2R_-i^2 n^max_-i),
where φ_i and ψ_i are defined in (<ref>) and (<ref>), respectively; and where μ_-i = n_ℐ∖{i}/(I-1), σ^2_-i = (1/(I-1))∑_j∈ℐ∖{i} (n_j-μ_-i)^2, R_-i := max_j ∈ℐ∖{i} |n_j - μ_-i|, and n^max_-i := max_j ∈ℐ∖{i} n_j.
Under Assumption H<ref>, Lemma <ref> implies the existence of ρ > 0 such that the value function w satisfies all assumptions from Lemma <ref>. Theorem <ref> comes from (a) noticing that
φ_i = 𝔼[w(𝐘_-i + n_i) - w(𝐘_-i)]
ψ_i = 𝔼[w(Kμ_-i + n_i) - w(Kμ_-i)]
where K ∼U([I-1]) and 𝐘_-i = n_𝒮^(i)_K with 𝒮^(i)_K taking values on the subsets of ℐ∖{i} of size K, (b) writing
|φ_i - ψ_i| ≤ |𝔼[w(𝐘_-i + n_i) - w(Kμ_-i + n_i) ]| + |𝔼[w(𝐘_-i) - w(Kμ_-i)]|,
and (c) applying Lemma <ref> to each of the expected values, as the function n → w(n + n_i) also satisfies H<ref>.
§ DU-SHAPLEY++ DEDUCTION
In this section we briefly provide an intuition on how we shifted from DU-Shapley to DU-Shapley++,
ψ_i^++(w) = 1/I(w(n_i) + w(n_ℐ) - w(n_ℐ∖{i})) + 1/I∑_k=1^I-2[w(kμ_-i + n_i) - w(kμ_-i)].
First of all, observe that DU-Shapley++ is at least as good as DU-Shapley. Recall the Shapley value expression
φ_i(u) = 1/I∑_k=0^I-1∑_𝒮⊆ℐ∖{i}: |𝒮| = k 1/binom(I-1,k) [w(n_𝒮 + n_i) - w(n_𝒮)],
which is obtained from Equation (<ref>) when sorting coalitions per cardinality. Notice there exists only one coalition of size 0 (the empty set) and only one of size I-1 (the whole coalition ℐ∖{i}), thus,
φ_i(u) = 1/I(w(n_∅ + n_i) - w(n_∅) + w(n_ℐ∖{i} + n_i) - w(n_ℐ∖{i}))
+
1/I∑_k=1^I-2∑_𝒮⊆ℐ∖{i}: |𝒮| = k 1/binom(I-1,k) [w(n_𝒮 + n_i) - w(n_𝒮)]
= 1/I(w(n_i) + w(n_ℐ) - w(n_ℐ∖{i}))
+
1/I∑_k=1^I-2∑_𝒮⊆ℐ∖{i}: |𝒮| = k 1/binom(I-1,k) [w(n_𝒮 + n_i) - w(n_𝒮)],
where we have used that both w(n_∅) and n_∅ are null. DU-Shapley++ comes from approximating only the middle terms, i.e., from imposing:
∀ k ∈{1,...,I-2}: ∑_𝒮⊆ℐ∖{i}: |𝒮| = k 1/binom(I-1,k) [w(n_𝒮 + n_i) - w(n_𝒮)] ≈ w(kμ_-i + n_i) - w(kμ_-i),
obtaining Equation (<ref>). Notice that ψ_i^++ needs the same number of evaluations as ψ_i. An illustration of why considering extreme cases might benefit our approximation is given in Figure <ref>.
§ ADDITIONAL DETAILS REGARDING NUMERICAL EXPERIMENTS
In this section, we provide further details regarding the numerical experiments conducted in the main paper.
§.§ Owen's Shapley value approximation
In Section <ref>, we considered the Shapley value approximation based on Owen's multilinear extension (coined Owen-Shapley below) as a state-of-the-art competitor to DU-Shapley.
We provide in the following additional details regarding this approximation.
For the other competitors, we directly refer the interested reader to <cit.>.
Owen <cit.> studied the multilinear extension of a cooperative game and an alternative way to express the Shapley value. Formally, a cooperative game G = (ℐ,u) consists on a set of I players ℐ = {1,2,...,I} and a value function u: 2^ℐ→ℝ such that, for any S ⊆ℐ, u(S) corresponds to the value generated by the coalition S. The multilinear extension of G, denoted G̅ = (ℐ,u̅), is obtained when considering the value function u̅ : [0,1]^ℐ→ℝ given by,
u̅(x_1,x_2,...,x_I) = ∑_S ⊆ℐ∏_i ∈ S x_i ∏_j∉ S (1-x_i) u(S).
Intuitively, u̅(x_1,x_2,...,x_I) corresponds to the expected value of a coalition when each player i ∈ℐ joins the coalition with probability x_i. Theorem 5 in <cit.> gives an alternative way to compute the Shapley value φ_i(u) of player i in game G, namely,
φ_i(u) = ∫_0^1 ∂u̅/∂ x_i(τ,...,τ) dτ = ∫_0^1 ∑_S ⊆ℐ∖{i}τ^|S|(1-τ)^I-|S|-1[u(S∪{i}) - u(S)] dτ
=∫_0^1 𝔼[u(ℰ_i(τ) ∪ i) - u(ℰ_i(τ))]dτ = 𝔼_τ∼U([0,1])[ 𝔼[u(ℰ_i(τ) ∪ i) - u(ℰ_i(τ))]],
where ℰ_i(τ) is a random subset of ℐ∖{i}, such that, each agent belongs to it with probability t. In words, the Shapley value of player i corresponds to the expected marginal contributions of the random set ℰ_i(τ), when τ is uniformly distributed on [0,1]. This brings an alternative way to use Monte Carlo to approximate the Shapley value φ_i(u), coined Owen-Shapley, as,
φ̂_i^Owen(u) = 1/T∑_t=1^T [u(ℰ_i(τ_t) ∪ i) - u(ℰ_i(τ_t))],
where for each t ∈{1,...,T}, we draw τ_t independently and uniformly in [0,1] and then, create a random set ℰ_i(τ_t) by adding each agent j ∈ℐ∖{i} to it with probability τ_t.
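A minimal sketch of the Owen-Shapley estimator (<ref>), under the same illustrative utility encoding (a function of a frozenset of player indices with u(∅)=0) used in the sketches of the main text.

```python
# Owen-Shapley: draw tau ~ U([0,1]), include each other player independently with
# probability tau, and average the marginal contribution of player i.
import random

def owen_shapley(utility, n_players, i, T, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(T):
        tau = rng.random()
        coalition = frozenset(j for j in range(n_players)
                              if j != i and rng.random() < tau)
        total += utility(coalition | {i}) - utility(coalition)
    return total / T

weights = [1.0, 2.0, 3.0]
u = lambda s: sum(weights[j] for j in s)
print(owen_shapley(u, 3, i=1, T=4000))  # close to 2.0
```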
|
http://arxiv.org/abs/2307.00055v1
|
20230630180006
|
Amplitudes and Renormalization Group Techniques: A Case Study
|
[
"Diego Buccio",
"John F. Donoghue",
"Roberto Percacci"
] |
hep-th
|
[
"hep-th"
] |
|
http://arxiv.org/abs/2306.09084v1
|
20230615122938
|
Asymptotics for the Laplace transform of the time integral of the geometric Brownian motion
|
[
"Dan Pirjol",
"Lingjiong Zhu"
] |
q-fin.PR
|
[
"q-fin.PR",
"math.PR"
] |
Asymptotics for the Laplace transform of the time integral of the geometric Brownian motion
Dan Pirjol
[School of Business, Stevens Institute of Technology, Hoboken, NJ 07030, United States of America; [email protected]],
Lingjiong Zhu
[Department of Mathematics, Florida State University, 1017 Academic Way, Tallahassee, FL-32306, United States of America; [email protected]]
July 31, 2023
We present an asymptotic result for the Laplace transform of the time integral of
the geometric Brownian motion F(θ,T) = 𝔼[e^-θ X_T]
with X_T = ∫_0^T e^σ W_s + ( a - 1/2σ^2)s ds, which is
exact in the limit σ^2 T → 0 at fixed σ^2 θ T^2 and aT.
This asymptotic result is applied to pricing zero coupon bonds in the Dothan
model of stochastic interest rates. The asymptotic result provides an approximation
for bond prices which is in good agreement with numerical evaluations in a wide
range of model parameters.
As a side result we obtain the asymptotics for Asian option prices in the Black-Scholes model, taking into account interest rates and dividend yield contributions in the
σ^2T→ 0 limit.
§ INTRODUCTION
The Laplace transform F(θ,T)=𝔼[e^-θ X_T] of the
time-integral of the geometric Brownian motion X_T=∫_0^T e^σ W_t + (a-1/2σ^2)t dt appears in many problems of applied probability and
mathematical finance. This expectation gives the prices of zero coupon bonds
in the Dothan model of stochastic interest rates <cit.>
P_0,T =
𝔼[e^- ∫_0^T r_s ds] .
The Dothan model is a short rate model which assumes that the short rate
r_t follows a geometric Brownian motion (gBM) r_t = r_0e^σ W_t +(a- 1/2σ^2) t in the risk-neutral measure ℚ .
The Dothan model can be regarded as the continuous time limit of the Black-Derman-Toy model <cit.> which is a discrete time model where the one-period interest rate is a geometric Brownian motion sampled at the start of the period.
The expectation (<ref>) appears also in credit risk, in default intensity models where the default of a company is modeled as the arrival of a Poisson process with intensity following a geometric Brownian motion. In these models the expectation P_0,T denotes the survival probability up to time T, conditional on survival up to time 0.
The time-integral of the asset price A_T := ∫_0^T S_t dt plays an
important role in Asian options pricing where it determines the payoff of these options. In particular, in the Black-Scholes model,
the asset price S_t=S_0 e^σ W_t + (r-q-1/2σ^2)t follows a gBM, such that
A_T = S_0 X_T with a=r-q
(see <cit.> for a survey). This time-integral also appears in the statistical mechanics of disordered media <cit.>.
The evaluation of the distribution of X_T and of its Laplace transform
has received a great deal of attention in the literature.
An explicit expression for the distribution of X_T was given by <cit.>. However, direct evaluation of the expectation (<ref>)
using the distribution of X_T given in <cit.> is numerically inefficient.
Several alternative computational methods have been proposed for the numerical
evaluation of the Laplace transform.
(1). The Feynman-Kac PDE method. This method uses the fact that the Laplace transform satisfies a parabolic PDE.
This has been solved by Dothan in <cit.>.
The result for non-zero drift was corrected in <cit.>.
(2). Monte Carlo methods. A probabilistic representation for the Laplace
transform F(θ,T) which is more amenable to MC numerical evaluation
was given in <cit.>. Its evaluation was studied in <cit.>.
An importance sampling MC simulation method using a change of measure
determined by large deviations theory was given recently in <cit.>,
using a method proposed in <cit.>.
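To fix ideas, a crude (non-importance-sampled) Monte Carlo sketch of the expectation (<ref>) is given below, discretising the time integral with the trapezoidal rule; the discretisation and sample sizes are arbitrary illustration choices and no variance reduction is attempted.

```python
# Crude Monte Carlo estimate of F(theta, T) = E[exp(-theta * X_T)] with
# X_T = int_0^T exp(sigma W_s + (a - sigma^2/2) s) ds, via discretisation
# of the Brownian path and a trapezoidal rule for the time integral.
import numpy as np

def laplace_transform_mc(theta, T, sigma, a, n_steps=200, n_paths=20_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
    W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])
    gbm = np.exp(sigma * W + (a - 0.5 * sigma**2) * t)
    X_T = dt * (gbm.sum(axis=1) - 0.5 * (gbm[:, 0] + gbm[:, -1]))   # trapezoid rule
    return np.mean(np.exp(-theta * X_T))

# Example: zero-coupon bond price in the Dothan model, identifying theta with r_0.
print(laplace_transform_mc(theta=0.05, T=5.0, sigma=0.2, a=0.0))
```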
The paper makes two main novel contributions.
First, in Section <ref> we give an analytical result for the Laplace transform of the time-integral of the gBM in a certain asymptotic limit σ^2 T→ 0 at fixed combinations of model parameters (<ref>).
The result is given by the solution of a variational problem which was studied in a different context in <cit.>. However, the new result is not simply a consequence of the result in <cit.> since the setting is different (continuous-time vs discrete-time average of the gBM).
This result has practical applications to bond pricing in the Dothan model, with non-zero drift,
which are explored in detail in Section <ref>.
Second, in Section <ref> we obtain an extension of the small-maturity asymptotics for Asian option prices with continuous-time averaging presented in <cit.>, which allows for finite interest rates effects. This is the continuous-time counterpart of a result obtained in <cit.> for Asian options with discrete-time averaging.
This result has applications to pricing Asian options in the Black-Scholes model
with non-zero interest rates and dividends. Numerical study in <cit.> shows that the effects of the interest rates can be significant and their inclusion improves considerably agreement with exact (numerical) computations. Our results extend these asymptotic
results to Asian options with continuous-time averaging.
Finally, the theoretical analysis in this paper relies on the large deviations method <cit.>,
which has been used in similar contexts in the previous literature <cit.>.
The rate function for the large deviations in our context can be expressed
as a variational problem which does not have a simple closed form in general. This
technical challenge makes practical implementation of the asymptotic results less efficient. Our main theoretical contribution is to show that in the context
of the time-integral of the gBM (with drift), one can still solve this variational problem analytically, thus extending the existing results in <cit.>.
§ MAIN RESULT
We prove here an asymptotic result for the Laplace transform:
F(θ,T) =
𝔼[e^-θ∫_0^Te^σ W_s+(a-1/2σ^2)sds]
in a particular limit of the model parameters defined by taking
σ^2 T → 0 at fixed
b^2 := 1/2σ^2 θ T^2 , ζ := a T .
The zero coupon bond price in the Dothan model (<ref>) corresponds to identifying θ↦ r_0. This limit covers several cases of practical relevance
including small-volatility σ at fixed maturity T and large interest rate
r_0, and it also covers the case of short-maturity T at fixed volatility σ, and large interest rate r_0. In the context of credit risk modeling, θ corresponds to the initial intensity of default, which can become large for distressed companies.
(When the interest rate r_0 is small, the analysis is much simpler,
and for the sake of completeness, we include this regime in Appendix A.
We also discuss the large-maturity T regime in Appendix B.)
Our main result is the following theorem.
Consider the σ^2 T→ 0 limit with the constraint (<ref>).
Then,
lim_σ^2 T→ 0(σ^2T)log F(θ,T)
= - J_B(b,ζ),
where
J_B(b,ζ) :=
inf_h∈𝒜𝒞[0,1]: h(0)=0{2 b^2
∫_0^1e^h(t)dt
+1/2∫_0^1(h'(t) - ζ)^2dt},
where 𝒜𝒞[0,1] denotes the space of absolutely continuous functions from [0,1]
to ℝ.
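Although the Proposition below reduces J_B(b,ζ) to a calculus problem, the variational problem can also be evaluated directly by discretising h on a grid and minimising numerically; the following sketch is such an independent check (the grid size, optimiser, and the sanity check against the driftless small-b expansion of Section <ref> are our illustrative choices).

```python
# Direct numerical evaluation of J_B(b, zeta) = inf_{h, h(0)=0} { 2 b^2 int_0^1 e^h dt
# + (1/2) int_0^1 (h' - zeta)^2 dt } by discretising h on a uniform grid.
import numpy as np
from scipy.optimize import minimize

def J_B_numerical(b, zeta, m=100):
    dt = 1.0 / m
    def objective(h_free):
        h = np.concatenate(([0.0], h_free))            # enforce h(0) = 0
        h_mid = 0.5 * (h[1:] + h[:-1])                 # cell midpoints
        dh = np.diff(h) / dt                           # piecewise-constant h'
        return 2 * b**2 * np.sum(np.exp(h_mid)) * dt + 0.5 * np.sum((dh - zeta)**2) * dt
    return minimize(objective, np.zeros(m), method="L-BFGS-B").fun

# Sanity check in the driftless case against J_B(b,0) ~ 2 b^2 (1 - b^2/3) for small b.
b = 0.3
print(J_B_numerical(b, zeta=0.0), 2 * b**2 * (1 - b**2 / 3))
```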
By letting t:=s/T, we get
F(θ,T)=𝔼[e^-θ T∫_0^1e^σ W_tT+ζ t-1/2σ^2tTdt]
=𝔼[e^-θ T∫_0^1e^Z_tT+ζ tdt],
where
dZ_t=-1/2σ^2dt+σ dW_t, Z_0=0,
which is equivalent to:
dZ_{t/σ^2}=-1/2dt+dB_t, with Z_0=0,
where B_t=σ W_{t/σ^2} is a standard Brownian motion by the Brownian scaling property.
Let Y_t:=Z_{t/σ^2}.
From the large deviations theory for small time diffusions (see e.g. <cit.>),
ℙ(Y_·(σ^2 T)∈·) satisfies
a sample path large deviation principle on L_∞[0,1],
the space of functions from [0,1]
to ℝ equipped with the supremum norm topology,
with the speed 1/(σ^2T) and the good rate function (we refer to <cit.> for the definitions of a large deviation principle
and a good rate function, and to <cit.> for general background on large deviations theory)
I(g)=1/2∫_0^1(g'(t))^2dt,
with g(0)=0 and g∈𝒜𝒞[0,1], the space of absolutely continuous functions from [0,1]
to ℝ and I(g)=+∞ otherwise.
Since we have θ T = 2b^2/(σ^2 T), we obtain from (<ref>) that
F(θ,T)=𝔼[e^-(2b^2/(σ^2T))∫_0^1e^Z_{tT}+ζ tdt]
=𝔼[e^-(2b^2/(σ^2T))∫_0^1exp{Y_t(σ^2T)+ζ t}dt] .
By using the fact that e^-2b^2/σ^2T∫_0^1exp{Y_t(σ^2T)+ζ t}dt is uniformly
bounded between 0 and 1
and the map g↦∫_0^1e^g(t)+ζ tdt is continuous from L_∞[0,1] to ℝ,
we can apply Varadhan's lemma <cit.> to obtain:
lim_σ^2 T→ 0(σ^2T)log F(θ,T)
=sup_g∈𝒜𝒞[0,1]: g(0)=0{-2 b^2 ∫_0^1e^g(t)+ζ tdt
- 1/2∫_0^1(g'(t))^2dt} ,
where the supremum is taken over all functions g:[0,1]→ℝ which are absolutely
continuous and satisfy g(0)=0.
By defining h(t) = g(t) + ζ t, the extremal problem in (<ref>)
reproduces (<ref>) and hence completes the proof.
The variational problem (<ref>) giving the rate function J_B(b,ζ) is identical with the variational problem appearing in Theorem 3 in <cit.>, where it
gives the Lyapunov exponent associated with the moment generating function of the
discrete sum of a geometric Brownian motion.
This variational problem was solved completely in <cit.>, and the solution was reduced to the solution of a calculus problem in Proposition 4 of this paper. We quote the solution in the notations used here,
adding the explicit condition on ζ,b distinguishing the two cases.
The result of Proposition 4 of <cit.> is mapped to our case by substituting
a↦ 2b^2, b↦ 1, ρ↦ζ.
The rate function of Theorem <ref> is given explicitly by
J_B(b,ζ) = - 2b^2 R(b,ζ), which is defined as follows.
i) For b ≤ζ/(2+ζ) we have
R(b,ζ) = 1 + sinh^2(δ/2) ( 1 + ζ(ζ-4)/δ^2)
- (2-ζ)sinhδ/δ
+ 1/b^2ζlog( cosh(δ/2) + ζ/δsinh(δ/2) ) - ζ^2/2b^2 ,
where
δ∈ [0,ζ] is the solution of the equation
ζ^2 - δ^2 = 4b^2 ( cosh(δ/2) + ζ/δsinh(δ/2) )^2 .
ii) For b ≥ζ/(2+ζ) we have
R(b,ζ) = 1- sin^2ξ( 1 + ζ(4 - ζ)/4ξ^2)
+ ζ - 2/2ξsin(2ξ) +
ζ/b^2log( cosξ + ζsinξ/2ξ) - ζ^2/2b^2 ,
where ξ is the unique solution ξ∈ (0,π/2) of the equation
2ξ^2 (4ξ^2 + ζ^2) = 2b^2 (2ξcosξ + ζsinξ)^2 .
We note that in Proposition <ref>,
for b = ζ/(2+ζ), the two cases i) and ii) give the common result
R(b,ζ) = -1 + ζ^2/4 - 1/2 (2+ζ)^2 +
1/ζ(2+ζ)^2 log(1+ζ/2).
§.§ Limiting case a=0
The solution simplifies greatly in the driftless GBM case a=0 which corresponds to ζ=0.
For ζ=0 the rate function J_B(b,0) is given by case (ii)
of Proposition <ref> (see also Corollary 5 in <cit.>):
J_B(b,0)
= 2 b^2 ( sin 2λ/λ - cos^2 λ) ,
where λ is the solution of the equation
λ^2/cos^2λ = b^2 .
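For concreteness, these two equations can be evaluated with a one-dimensional root search; the following Python sketch (ours, not part of the paper) solves λ = b cos λ on (0, π/2) and returns the ratio R(b,0) = J_B(b,0)/(2b^2) used below, together with J_B(b,0) itself.

import numpy as np
from scipy.optimize import brentq

def R0(b):
    # lambda/cos(lambda) increases from 0 to infinity on (0, pi/2),
    # so lambda = b*cos(lambda) has a unique root there for any b > 0
    lam = brentq(lambda x: x - b * np.cos(x), 1e-12, np.pi / 2 - 1e-12)
    return np.sin(2 * lam) / lam - np.cos(lam) ** 2   # = J_B(b,0) / (2 b^2)

def J_B0(b):
    return 2.0 * b ** 2 * R0(b)

for b in (0.1, 0.5, 1.0, 2.0):
    print(f"b={b:4.1f}  R(b,0)={R0(b):.6f}  J_B(b,0)={J_B0(b):.6f}")

The same root search is reused further below when testing the bond-price approximation.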
We give the asymptotics of J_B(b,0) for small and large b, which is convenient for efficient numerical evaluation.
The rate function J_B(b,0)=2b^2 R(b,0) has the following asymptotic expansions:
i) small-b asymptotics. As b→ 0 we have
R(b,0) = 1 - 1/3 b^2 +4/15 b^4 - 92/315 b^6 + 1072/2835 b^8
+ O(b^10).
ii) large-b asymptotics. As b→∞ we have
R(b,0) = 2/b - π^2/(4b^2) + π^2/(4b^3) + O(b^-4).
i) As b→ 0 the solution of (<ref>) approaches λ→ 0, and is expanded iteratively as
λ^(j) = bcos(λ^(j-1))
starting with λ^(0)=0. Substituting the result into (<ref>) reproduces (<ref>).
ii) As b →∞ the solution of the equation (<ref>) approaches
λ→π/2 from below. Denote λ = π/2-ε,
and invert (<ref>) written as
sinε/(π/2-ε) = 1/b.
This gives an expansion in 1/b for ε:
ε = π/2b - π/2b^2 + π(24+π^2)/48b^3 + O(b^-4) .
Substituting into (<ref>) and expanding in 1/b reproduces (<ref>).
The function R(b,0) appears also in the short maturity expansion of the at-the-money (ATM)
implied volatility in the β=1 SABR model in the combined small vol-of-vol
and large volatility limit, see Proposition 23 in <cit.>.
An examination of the singularities of the function R(b,0) in the b complex plane shows that the series expansion (<ref>) has a finite convergence radius. By Proposition 2 in <cit.>, the series for R(b,0) converges for
|b| < R_b = y_0/cosh y_0 = 0.662743 ,
where y_0=1.19968 is the positive solution of the equation y tanh y = 1.
In the context of bond pricing in the Dothan model, the condition (<ref>) can
be put into a more explicit form as σ^2 r_0 T^2 < 2R_b^2 = 0.582.
We show in Figure <ref> the range of the model parameters where this
condition is satisfied. The curves in this figure
show the maximum maturity T_ max(r_0,σ) vs r_0 for several
values of σ = {0.3, 0.5, 1.0}. The maximum maturity for which convergence holds decreases with σ and r_0.
Outside of the circle of convergence the series expansion (<ref>) cannot be used,
and the exact result in (<ref>) must be used. However we emphasize that the asymptotic result R(b,ζ) exists and is well behaved for all real values of b, not only within the region of convergence.
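In practice the convergence condition is easy to check: requiring b = √(σ^2 r_0/2) T < R_b gives the largest admissible maturity for given r_0 and σ. A short sketch (ours, for illustration only):

import numpy as np

R_b = 0.662743   # convergence radius in b: R_b = y0/cosh(y0) with y0*tanh(y0) = 1

def T_max(r0, sigma):
    # largest T such that b = sqrt(sigma^2*r0/2)*T stays below R_b
    return R_b * np.sqrt(2.0 / (sigma ** 2 * r0))

for sigma in (0.3, 0.5, 1.0):
    print(sigma, [round(T_max(r0, sigma), 2) for r0 in (0.02, 0.05, 0.10)])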
§ LARGE DEVIATIONS FOR THE TIME-INTEGRAL OF THE GBM
The asymptotic result of the previous section is related to another application in mathematical finance: the large deviations property of the time average of the geometric Brownian motion. Denoting X_T := ∫_0^T e^σ W_t + (a-1/2σ^2) t dt,
it was shown in <cit.>
that ℙ( X_T/T ∈· ) satisfies a large deviation principle on
ℝ with speed 1/T and the rate function 1/σ^2 J_BS(·),
where J_BS(x) is given in explicit form in Proposition 12 in <cit.>.
As is shown in <cit.>, this result implies a short maturity asymptotics for out-of-the-money (OTM) Asian options in the Black-Scholes model where a:=r-q with r denoting the interest rate and q the dividend yield,
and neither r nor q appears in the asymptotic result,
which is a general feature of the short-maturity limit T→ 0 at fixed r and q.
We study here the large deviations for X_T in a different limit:
σ^2T→ 0 at fixed aT = ζ with a:=r-q.
The advantage of this limit is that it includes interest rates effects at leading order
in the short maturity expansion. The asymptotic regime
is valid when σ^2T is small, which is typically the case in practice.
ℙ( X_T/T ∈· ) satisfies a large deviation
principle as σ^2 T→ 0 at fixed ζ := a T
with speed 1/(σ^2 T) and rate function
I_ BS(x) = inf_h∈𝒜𝒞[0,1]: h(0)=0, ∫_0^1 e^h(y) dy = x1/2∫_0^1 ( h'(y) - ζ)^2 dy ,
where 𝒜𝒞[0,1] denotes the space of absolutely continuous functions from [0,1]
to ℝ.
Following a similar argument as in the proof of Theorem <ref>, we have
X_T/T=1/T∫_0^T e^σ W_t + (a-1/2σ^2) t dt
=∫_0^1e^Z_tT+ζ tdt=∫_0^1e^Y_tσ^2T+ζ tdt,
in distribution where Z_t is defined in (<ref>) and Y_t:=Z_t/σ^2 satisfies
a sample path large deviation principle on L_∞[0,1],
the space of functions from [0,1]
to ℝ equipped with the supremum norm topology,
with the speed 1/(σ^2T) and the good rate function
I(g)=1/2∫_0^1(g'(t))^2dt
with g(0)=0 and g∈𝒜𝒞[0,1], the space of absolutely continuous functions from [0,1]
to ℝ and I(g)=+∞ otherwise.
Since the map g↦∫_0^1e^g(t)+ζ tdt is continuous from L_∞[0,1] to ℝ,
we can apply contraction principle from large deviations theory <cit.> to
conclude that ℙ(X_T/T∈·) satisfies a large deviation principle
with speed 1/(σ^2 T) and rate function
I_ BS(x) = inf_g∈𝒜𝒞[0,1]: g(0)=0, ∫_0^1 e^g(t)+ζ t dt = x1/2∫_0^1 ( g'(t))^2 dt .
By introducing h(t) := g(t) + ζ t, we complete the proof.
Using Theorem <ref>, one can obtain the out-of-the-money (OTM)
asymptotics for Asian call and put options.
Denote the Asian call and put option prices as:
C(T):=e^-rT𝔼[(A_T-K)^+]
and P(T):=e^-rT𝔼[(K-A_T)^+],
where A_T:=1/T∫_0^TS_tdt
with S_t=S_0e^(r-q)t+σ W_t-1/2σ^2t.
We have the following result which is the analog of Theorem 2 in <cit.>,
improved by keeping terms of O((rT)^n) to all orders.
(i) When K>S_0, lim_σ^2T→ 0(σ^2T)log C(T)=-I_ BS(K/S_0).
(ii) When K<S_0, lim_σ^2T→ 0(σ^2T)log P(T)=-I_ BS(K/S_0).
The corollary follows from Theorem <ref> by using a similar argument
as in the proof of Theorem 2 in <cit.>.
The variational problem in Theorem <ref> is identical to the variational problem appearing in Proposition 6 in <cit.>. This variational problem was solved in closed form and the solution is given in Proposition 9 in <cit.>. This solution is mapped to our case by the substitutions 2β↦ 1, ρ↦ζ, S_0↦ 1.
We get the following result for I_BS(x), which is the continuous-time counterpart of the discrete-time result of <cit.>.
i) For x ≥ 1 + ζ/2 we have
I_BS(x) =1/2(δ^2 - ζ^2) ( 1 - 2tanh(δ/2)/δ + ζtanh(δ/2))
- 2ζlog(
cosh(δ/2) + ζsinh(δ/2)/δ) + ζ^2 ,
where δ is the solution of the equation
sinhδ/δ + 2 ζsinh^2(δ/2)/δ^2 = x.
ii) For 0 < x ≤ 1 + ζ/2 we have
I_BS(x)= 2(ξ^2 + 1/4ζ^2) ( tanξ/ξ + 1/2ζtanξ
- 1 ) - 2ζlog( cosξ + 1/2ζsinξ/ξ) + ζ^2 ,
where ξ∈ (0,π/2) is the solution of the equation
sin(2ξ)/2ξ( 1 + 1/2ζtanξ/ξ) = x.
The properties of I_BS(x) are studied in detail in Section 4.1 of <cit.>.
We mention here only two properties.
i) The rate function I_BS(x) vanishes at x=(e^ζ-1)/ζ.
ii) For ζ = 0 we have I_BS(x) = J_BS(x) where J_BS(x) is the rate function for OTM Asian options studied in <cit.>.
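At ζ=0 both branches reduce to a single root search each (sinh δ/δ = x for x>1 and sin(2ξ)/(2ξ) = x for x<1); the following sketch (ours, for illustration only) evaluates the resulting rate function.

import numpy as np
from scipy.optimize import brentq

def I_BS0(x):
    # rate function I_BS(x) at zeta = 0 (equivalently J_BS(x) of the short-maturity limit)
    if x <= 0:
        raise ValueError("x must be positive")
    if abs(x - 1.0) < 1e-12:
        return 0.0
    if x > 1.0:
        # case i): solve sinh(d)/d = x, then I = (1/2) d^2 (1 - 2 tanh(d/2)/d)
        hi = 1.0
        while np.sinh(hi) / hi < x:
            hi *= 2.0
        d = brentq(lambda t: np.sinh(t) / t - x, 1e-12, hi)
        return 0.5 * d ** 2 * (1.0 - 2.0 * np.tanh(0.5 * d) / d)
    # case ii): solve sin(2 xi)/(2 xi) = x with xi in (0, pi/2), then I = 2 xi^2 (tan(xi)/xi - 1)
    xi = brentq(lambda t: np.sin(2 * t) / (2 * t) - x, 1e-12, np.pi / 2 - 1e-12)
    return 2.0 * xi ** 2 * (np.tan(xi) / xi - 1.0)

for x in (0.6, 0.9, 1.1, 1.5):
    print(f"x={x:3.1f}  I_BS(x)={I_BS0(x):.6f}")

At the log-asymptotic level of Corollary <ref>, OTM Asian option prices then behave as exp(-I_BS(K/S_0)/(σ^2T)).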
Corollary <ref> can be used to obtain an approximation for Asian option prices similar to the approach followed in Section 4.2 of <cit.>. Under this approximation, Asian options can be priced as European options with the same
strike and maturity and an equivalent log-normal volatility Σ_LN(K,S_0)
given by
Σ_LN^2(K,T) = σ^2log^2 (K/A_fwd)/(2I_BS(K/S_0)) ,
where A_fwd denotes the forward value of the average A_T.
This reduces to the result of Proposition 18 in <cit.> in the ζ=0 limit, and improves it by taking into account interest rates effects.
Numerical tests of this
improved approximation performed in Sec. 4.2 of <cit.> demonstrate good agreement with precise benchmarks, which is better than that given by the
asymptotic result in <cit.> which neglects interest rates effects. The asymptotic result gives an alternative to other proposed pricing methods for Asian options, such as the spectral approach <cit.>, the Laplace transform method <cit.>, the small-time expansion method <cit.>.
§ NUMERICAL TESTS FOR BOND PRICING IN THE DOTHAN MODEL
The asymptotic result of Proposition <ref> can be used to obtain an approximation for the bond prices in the Dothan model
B_ asympt(T) = e^-r_0 T R(b,ζ),
where R(b,ζ) is given by Proposition <ref>. In the limiting case ζ=0 this simplifies further as shown in (<ref>).
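For the driftless scenarios considered below (a=0), the approximation amounts to a few lines of code on top of the root search for R(b,0); the sketch below (ours, not from the paper) repeats that small helper so that it is self-contained.

import numpy as np
from scipy.optimize import brentq

def R0(b):
    # repeats the helper from the earlier sketch: R(b,0) = sin(2*lam)/lam - cos(lam)^2
    lam = brentq(lambda x: x - b * np.cos(x), 1e-12, np.pi / 2 - 1e-12)
    return np.sin(2 * lam) / lam - np.cos(lam) ** 2

def B_asympt(T, r0, sigma):
    b = np.sqrt(0.5 * sigma ** 2 * r0) * T
    return np.exp(-r0 * T * R0(b))

r0, sigma = 0.10, 0.20
for T in (1.0, 5.0, 10.0):
    B = B_asympt(T, r0, sigma)
    print(f"T={T:5.1f}  B_asympt={B:.6f}  -log(B)/(r0*T)={-np.log(B) / (r0 * T):.6f}")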
In this section we present tests of this approximation under several scenarios.
Scenario 1. We start by considering scenarios with a=0.
An exact solution for B(T):= P_0,T was given in <cit.> and is represented as a double integral. For a=0 this
reduces to a single integral
B(T) = √(y)∫_0^∞sin(2 √(y)sinh z )
[ e^-z(s - 2 z/2√(s)) -
e^z (s + 2 z/2√(s)) ]dz +
2 √(y) K_1(2√(y)) ,
where K_1 is the modified Bessel function of order 1 with
y := 2r_0/σ^2 and
s = σ^2/2(T-t).
The direct numerical evaluation of the integral in
(<ref>) becomes unstable for large y due to fact that the integrand is rapidly oscillating. We found it convenient to add and
subtract in the square brackets the term 2e^-z. The term proportional
to +2e^-z can be evaluated in closed form using the relation
∫_0^∞ dz e^-zsin (a sinh z) = 1/a - K_1(a),
which gives
B(T) = √(y)∫_0^∞sin(2 √(y)sinh z )
·[ e^-z(s - 2 z/2√(s)) -
e^z (s + 2 z/2√(s)) - 2 e^-z]dz +1.
The integrand in this expression is still oscillatory, but its amplitude
falls off much faster, and the calculation of the integral is more stable.
We used (<ref>) for the numerical evaluation of B(T).
Figure <ref> shows the results for
-1/r_0 Tlog B(T) obtained by evaluation from (<ref>) (red dots),
comparing them with the asymptotic result of Proposition <ref> (solid blue curves).
The dashed blue curves show the
series expansion of the asymptotic result, Eq. (<ref>), keeping the first 8 terms.
The upper three plots in Figure <ref> correspond to a moderate interest rates regime r_0=5% and the lower plots to a high interest rates regime r_0=10%. Selected numerical evaluations with r_0=10% are shown in
Table <ref>.
From an examination of these tests we make a few observations:
i) The asymptotic result is most precise at small volatilities σ and small maturities T. As either of these parameters increases, the agreement of the asymptotic result with the exact values worsens. Still, the asymptotic result gives a reasonably good approximation, better than 1%, for all volatilities less than 20%,
at all maturities less than 10Y, which corresponds to many cases of practical
interest.
ii) As the interest rate r_0 increases, the agreement of the asymptotic result
with the exact result improves, as expected from the scaling of the model parameters
assumed in the asymptotic limit considered here.
iii) The series expansion (<ref>) truncated to a finite order explodes at a certain threshold to unphysical values (R(b,0) becomes larger than 1, or smaller than 0).
The explosion is to +(-)∞ when the series is truncated to even/odd order,
corresponding to the sign of the last term included.
However, the exact asymptotic result (<ref>) does not show any explosion.
As explained above, the failure of the series expansion is due to its finite convergence
radius, and not to a failure of the asymptotic expansion itself, which remains well behaved over the entire range of parameter values.
Overall, the error of the asymptotic expansion is below 3.5%
for volatilities below σ=0.4 and maturities up to 10 years, which covers
a wide range of the parameters relevant for practical applications.
Scenario 2.
We present also numerical tests with a≠ 0 using the benchmark scenarios of <cit.>, which considered the pricing of a zero
coupon bond in the Dothan model with (a-1/2σ^2) = 0.045 which
corresponds to a=0.09 in our notations.
The initial interest rate is r_0=0.06 and volatility σ =0.3.
In these papers the zero coupon bond prices have been evaluated in two ways:
i) by Monte Carlo evaluation of the expectation (<ref>) with importance
sampling and control variate <cit.> and optimal change of drift <cit.>,
and ii) by evaluation of an alternative probabilistic representation of this
quantity as an expectation of a function of a generalized hyperbolic secant
random variable <cit.>. The results of the two approaches are shown in
Figures 3.2 and 3.4 of <cit.>, respectively.
For these tests we use the asymptotic result of Proposition <ref> to compute
R(b,ζ), which is used to obtain B_ asympt(T) using (<ref>), for the scenario of <cit.> (r_0=0.06,
σ=0.3, a=0.09).
The asymptotic results are shown in columns 3 and 4 of Table <ref> for several values of the maturity up to T=20 years. The second column shows ξ,
the solution of the equation (<ref>) determining the rate function in Proposition <ref>. The last column shows the exact numerical evaluation of B(T) obtained using the methods of <cit.>.
The approximation error of the asymptotic result is about 2% at T=10 and increases to 15% at T=20.
§ DISCUSSION AND COMPARISON WITH THE LITERATURE
In this note we derived an asymptotic result for the Laplace transform
of the integral of the geometric Brownian motion F(θ,T) in a new limit
σ^2 T→ 0 at fixed σ^2 θ T^2.
This result was applied to the pricing of zero coupon bond prices in the
Dothan model. For this case the limit includes the case of small volatility σ at fixed
maturity T and large interest rate r_0.
The asymptotic result can be used to obtain an efficient numerical
evaluation of the bond prices. We demonstrate good
agreement in these regimes with a numerical evaluation
using an integral representation proposed by <cit.>.
The method proposed
here requires only the solution of a non-linear equation and
can be used in practical applications for a fast and precise evaluation.
Several authors presented asymptotic expansions of bond prices in models with
log-normal rates, including the Dothan model. We discuss briefly
the relation to our work.
Ref. <cit.> derived a Taylor expansion of the log of the bond prices in maturity T, see also <cit.> for a survey. The approach was applied to several one-factor short rate models with constant coefficients, including the Dothan model. Expressed in our notation, the first few terms in this expansion read (for simplicity we assume a=0)
-1/r_0 Tlog B(T) = 1 - 1/3!σ^2 r_0 T^2 -
1/4!σ^4 r_0 T^3
- ( 1/5!σ^6 r_0 -
1/15σ^4 r_0^2 ) T^4 + O(T^5).
Taking the limit σ^2 T→ 0 at fixed b^2 = 1/2σ^2 r_0 T^2 this expansion becomes 1-1/3 b^2 + 4/15 b^4 + ⋯, which reproduces indeed the first few terms in the expansion (<ref>). This shows that the
asymptotic approximation (<ref>) corresponds to summing a subset of the terms in the exact expansion, to all orders in T.
As shown in Sec. <ref> this subset of the full series converges with a finite convergence radius. It would be interesting to study the convergence of the full series of log B(T) in powers of T. We mention <cit.> which studied the small-T expansion of bond prices in several short rate models.
Several groups derived small volatility expansions for zero coupon bond prices in short rate models.
<cit.>
presented an expansion of zero coupon bonds in powers of volatility in a
short rate model with short rate r_t = r_0 (1 +ν X_t)^1/ν where X_t is an Ornstein-Uhlenbeck (OU) process.
This model recovers the Black-Karasinski model in the limit ν→ 0, which in turn reduces to the Dothan model in the limit of positive mean-reversion.
A similar expansion was proposed by <cit.>; their approach rescales simultaneously the mean-reversion level and the volatility of the OU process.
<cit.> derived an expansion for the Arrow-Debreu functions ψ(r_0,r_T,T) := 𝔼[e^-∫_0^T r_s ds δ(r_T-r_0)]
using the so-called Exponent Expansion. This recovers the zero coupon bonds in the
Dothan model by an integration.
Expansion-based approximations truncated to a finite order will typically fail as the maturity or volatility exceeds a certain value. This is similar to the failure of the expansion of the function R(b,0) in powers of b seen in the numerical tests in Figure <ref>.
Our results suggest that these failures are not necessarily associated with the small-maturity or volatility limits considered, but are due to: i) truncation of a convergent series within its convergence radius, or ii) the finite convergence radius of the expansion. Using the full asymptotic result removes the singular behavior observed in the truncated series.
§.§ Acknowledgements
We are grateful to the Associate Editor and two anonymous referees for helpful comments and suggestions.
We thank Nicolas Privault and Wayne Uy for
providing details of the numerical evaluations in their work <cit.>.
Lingjiong Zhu is partially supported by the grants NSF DMS-2053454, NSF DMS-2208303 and a Simons Foundation Collaboration Grant.
§ SMALL INTEREST RATE (R_0) ASYMPTOTICS
In this Appendix, we obtain an asymptotic expansion for the zero coupon bond
price B(T):= P_0,T in the Dothan model for small interest rate r_0, where P_0,T=𝔼[e^-∫_0^Tr_sds] with r_t=r_0e^σ W_t+(a-1/2σ^2)t follows
a geometric Brownian motion (gBM).
Ref. <cit.> studied the numerical evaluation of
bond prices in the Dothan model by expansion of the expectation in powers of r_0. This expansion has the form B(T) = ∑_k=0(-1)^k/k! r_0^k m_k, where m_k is the k-th moment of the time integral of the gBM. The positive integer moments of the
time integral of the gBM are known in closed form to all orders <cit.>.
Numerical evaluation of the series expansion in powers of r_0 in <cit.>
shows good convergence for a few benchmark cases. However, the series expansion
B(T) = ∑_k=0(-1)^k/k! r_0^k m_k has zero convergence radius at r_0=0, and diverges for any r_0>0. The vanishing of the convergence radius follows by noting that B(T)=∞ for any r_0<0.
Thus the series expansion for B(T) is strictly asymptotic.
The practical application of such series requires that they are truncated to an optimal truncation order k_ opt, see <cit.> for an overview. The optimal order can be determined by empirical numerical evaluation, as the order at which the next term in the series is minimal,
and in general will depend on σ,T,a.
We reformulate here the series expansion in r_0 as an exact limiting result in the
r_0→ 0 limit. We have the following proposition.
We have the asymptotics:
lim_r_0→ 01/r_0^2(B(T)-1+r_0e^aT-1/a)
=1/a+σ^2(e^(2a+σ^2)T-1/2a+σ^2-e^aT-1/a).
First, notice that for any x≥ 0, we have
1-x+x^2/2-x^3/6≤ e^-x≤ 1-x+x^2/2 .
By Jensen's inequality, (1/T∫_0^Te^σ W_s+(a-1/2σ^2)sds)^3≤1/T∫_0^T(e^σ W_s+(a-1/2σ^2)s)^3ds, which implies that
𝔼[(∫_0^Te^σ W_s+(a-1/2σ^2)sds)^3]
≤
T^2𝔼[∫_0^Te^3σ W_s+3(a-1/2σ^2)sds]
=T^2e^3(a+σ^2)T-1/3(a+σ^2).
The first two moments are evaluated in closed form as 𝔼[∫_0^Te^σ W_s+(a-1/2σ^2)sds]=e^aT-1/a
and
1/2𝔼[(∫_0^Te^σ W_s+(a-1/2σ^2)sds)^2]=1/a+σ^2(e^(2a+σ^2)T-1/2a+σ^2-e^aT-1/a).
The stated results follows by combining (<ref>), (<ref>) and the definition of B(T).
Proposition <ref> can be extended to
arbitrarily high order in r_0 as:
lim_r_0→ 01/r_0^k(B(T)-∑_j=0^k-1(-1)^j/j!r_0^jm_j)=(-1)^km_k/k!
for any arbitrary k∈ℕ, where m_k is the k-th moment of ∫_0^Te^σ W_s+(a-1/2σ^2)sds.
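The resulting second-order approximation is immediate to evaluate from the closed-form moments quoted above; the sketch below (ours) is meant only for small r_0, in line with the asymptotic nature of the expansion.

import numpy as np

def _g(c, T):
    # (exp(c*T) - 1)/c, with the c -> 0 limit equal to T
    return T if c == 0 else np.expm1(c * T) / c

def bond_small_r0(T, r0, a, sigma):
    m1 = _g(a, T)                                                        # first moment
    m2 = 2.0 / (a + sigma ** 2) * (_g(2 * a + sigma ** 2, T) - _g(a, T)) # second moment
    return 1.0 - r0 * m1 + 0.5 * r0 ** 2 * m2

print(bond_small_r0(T=2.0, r0=0.01, a=0.0, sigma=0.3))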
§ LARGE MATURITY (T) ASYMPTOTICS
In this Appendix we discuss the asymptotics for the zero coupon bond
price B(T) in the Dothan model for large maturity T.
If a<1/2σ^2, the bond price approaches a finite limit in the infinite maturity limit. The perpetual bond price is
lim_T→∞ B(T) = B(∞):=
2/Γ(1-2a/σ^2)
(2r_0/σ^2)^1/2 - a/σ^2
K_1-2a/σ^2(2√(2r_0/σ^2)) ,
where K_α is the modified Bessel function of order α.
This follows from the well-known result of <cit.>: the time integral of the geometric Brownian motion converges in distribution as T→∞
to an inverse gamma distribution. The limiting result (<ref>) is the Laplace transform of the resulting inverse gamma distribution.
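Numerically, the perpetual bond price is a one-line evaluation of the modified Bessel function; the sketch below (ours) implements (<ref>) for a < σ^2/2.

import numpy as np
from scipy.special import kv, gamma

def B_perpetual(r0, a, sigma):
    # requires a < sigma^2 / 2
    nu = 1.0 - 2.0 * a / sigma ** 2
    y = 2.0 * r0 / sigma ** 2
    return 2.0 / gamma(nu) * y ** (nu / 2.0) * kv(nu, 2.0 * np.sqrt(y))

# driftless case reduces to 2*sqrt(y)*K_1(2*sqrt(y))
print(B_perpetual(r0=0.05, a=0.0, sigma=0.3))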
Let us consider the asymptotics of the exact perpetual bond price B(∞)
as 2r_0/σ^2→∞.
Using the asymptotic expansion of the Bessel function of large argument
K_α(z) ∼√(π/(2z)) e^-z(1+O(z^-1)) for any α>0 gives
B(∞) = C e^- 2 √(2r_0/σ^2) .
We show next that the asymptotic result of Proposition 1 in the main paper reproduces the exponential factor of the exact result (<ref>), as expected, since the
asymptotic limit σ^2 T→ 0 at fixed b^2 = 1/2σ^2 r_0 T^2 corresponds to 2r_0/σ^2 = 2b^2/(σ^2 T)^2→∞.
For simplicity we consider a=0. Using the large-b expansion of the rate function
R(b,0) = 2/b - π^2/(4b^2) + π^2/(4b^3) + O(b^-4) in Proposition 2 in the main paper, we have
-log B_ asympt(T) = r_0 T R(b,0)
= 2 √(2r_0/σ^2) - π^2/(2σ^2 T) + O(T^-2) ,
which indeed reproduces the exponential factor e^-2√(2r_0/σ^2) in (<ref>).
|
http://arxiv.org/abs/2306.02607v1
|
20230605054030
|
Effect of global shrinkage parameter of horseshoe prior in compressed sensing
|
[
"Yasushi Nagano",
"Koji Hukushima"
] |
cond-mat.dis-nn
|
[
"cond-mat.dis-nn"
] |
Graduate School of Arts and Sciences,
The University of Tokyo,
Komaba, Meguro-ku, Tokyo 153-8902, Japan
Graduate School of Arts and Sciences,
The University of Tokyo,
Komaba, Meguro-ku, Tokyo 153-8902, Japan
Komaba Institute for Science, The
University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8902, Japan
In sparse signal processing, this study investigates the effect of the global shrinkage parameter τ of the horseshoe prior, one of the global-local shrinkage priors, on linear regression. Statistical mechanics methods are employed to examine the accuracy of signal estimation. A phase diagram of success and failure of signal recovery in noiseless compressed sensing with varying τ is discussed from the viewpoint of a dynamic characterization of the approximate message passing as a solving algorithm and a static characterization of the free-energy landscape. It is found that there exists a parameter region where the approximate message passing algorithm can hardly recover the true signal, even though the true signal is locally stable. The analysis of the free-energy landscape also provides important insight into the optimal choice of τ.
Effect of global shrinkage parameter of horseshoe prior in compressed sensing
Yasushi Nagano and Koji Hukushima
July 31, 2023
==============================================================================
§ INTRODUCTION
Recently, sparse signal estimation has played an important role in signal processing, machine learning, image processing, and communication. Sparse signals have a very small number of non-zero components, and the purpose of sparse signal estimation is to accurately estimate the non-zero components. It has attracted much attention because of its high accuracy and efficiency in applications such as signal reconstruction. In the field of physics, the technique has been applied to experimental measurements, and theoretical performance evaluation of the efficiency of solving algorithms has been studied using statistical mechanics methods.
There are various methods for estimating a sparse signal, among which the use of a global-local shrinkage prior has been extensively studied. This prior is characterized by the unique structure of the variance parameters within a normal distribution, composed of a product of two distinct types: a global variance parameter and a local variance parameter. The global variance parameter is independent of the index, representing a universal attribute, whereas the local variance parameter is dependent on the index, capturing individual characteristics. This family of prior distributions includes well-known priors used in sparse signal estimation, such as the Laplace prior and the Automatic Relevance Determination prior. Among them, the horseshoe prior is one of the most promising in this family<cit.>. Theoretical analysis of the properties of sparse signal estimation using the horseshoe prior has been a topic of interesting research. It is shown that the horseshoe prior asymptotically achieves the minimax rate, known as the lower bound of sparse signal estimation with respect to the estimated ℓ_2 risk<cit.>, and this result is being extended to the general global-local shrinkage parameter<cit.>. The horseshoe prior is also known to have asymptotic Bayes optimality under sparsity with respect to the risk of variable selection, which is an advantage over the widely-used LASSO, the case for the Laplace prior<cit.>.
In constructing the global-local shrinkage prior, it is crucial to select an appropriate prior for the local shrinkage parameters and to adjust the global shrinkage parameter correspondingly. Several prior distributions have been proposed for the local shrinkage parameter, with particular emphasis on the utility of heavy-tailed distributions. There are also practical methods for determining the appropriate global shrinkage parameters, such as the maximum marginal likelihood estimator and the fully Bayesian approach<cit.>. However, the theoretical understanding of the sparse linear regression with the horseshoe prior has not yet been fully explored<cit.>. To be specific, the effect of the choice of the global shrinkage parameter on the estimation accuracy in sparse linear regression remains unclear.
One of the key challenges in sparse modeling is compressed sensing, a subfield of sparse linear regression. Compressed sensing is a technique for precisely inferring an unknown vector for which sparsity is assumed from a smaller number of observations than the dimension of the vector. Therefore, it is extremely important to develop algorithms that can effectively estimate the signal with a limited number of observations.
The Belief Propagation (BP) algorithm, or its approximation, Approximate Message Passing (AMP), is an important algorithm for solving the problem of compressed sensing<cit.>.
The typical performance of the algorithms for compressed sensing has been widely discussed in the field of statistical physics using the replica method. Analysis of the l_1-norm regularization has found that linear stability of the replica symmetric (RS) free energy in the vicinity of the true signal is an important criterion for successful signal restoration<cit.>.
It is also valuable to be able to track the dynamics of the algorithm through State Evolution (SE), which describes the macroscopic temporal evolution of AMP<cit.>. Under appropriate assumptions about the threshold function, a formal equivalence can be shown between the time evolution of SE and the saddle-point equation for the RS free energy. In particular, the fixed points of SE for AMP coincide with the saddle point of the RS free energy.
In the Bayesian optimal setting with the Gaussian-Bernoulli distribution, the free-energy landscape as a function of the mean squared error reveals the potential existence of spurious local solutions in addition to the true solution. This indicates that solving algorithms such as BP and AMP may fail to recover the true signal even when the true signal satisfies the linear stability condition, implying the existence of an algorithmic transition point at which the local fixed point of SE vanishes<cit.>.
These statistical physics results are obtained by taking the expected value of the free energy with respect to the randomness of the problem. The method of computation used is mainly based on the replica method of spin glass theory, which requires close attention to its mathematical justification. The exactness of the method is known for the mean squared error in the case of Gaussian i.i.d. randomness, which is the generative model addressed in this study<cit.>.
In this paper, extending our previous work<cit.> focusing on the local shrinkage parameter, we study the role of the global shrinkage parameter in compressed sensing with the horseshoe prior using a statistical mechanics method based on the replica method and demonstrate that the signal recovery performance is characterized by a complex free-energy structure. By appropriately adjusting the global shrinkage parameter, it is found that the signal recovery limit with the horseshoe prior is competitive with that of the Bayes optimal method, even though it does not assume knowledge of the generative distribution of the true signal. As a byproduct, we have also been able to give an interpretation of damping in the solving algorithms in terms of the free-energy landscape. This result highlights the importance of considering the free energy landscape as a multivariable function.
The remainder of this paper is organized as follows. In Sec. <ref>, we describe the three setups based on the theoretical performance evaluation of the compressed sensing with the horseshoe prior, the approximate message passing as a solving algorithm, the state evolution characterizing the macroscopic behavior of the algorithm, and the formulation of the statistical mechanics with the replica method. In Sec. <ref>, we theoretically analyze the dynamical behavior of the solving algorithm and derive the corresponding phase diagram. In Sec. <ref>, the free-energy landscape is introduced to give a static interpretation to the phase diagram obtained in Sec. <ref>, and the effects of damping in the algorithm are discussed in terms of the free-energy landscape. Finally, Sec. <ref> contains a summary and discussion of our results. Some details of the calculations are presented in Appendices A, B, C, and D.
§ PROBLEM SETTINGS AND FORMULATION
§.§ Compressed sensing with horseshoe prior and Approximate Message Passing
Compressed sensing is usually defined as the optimization of a regularization term ℋ( w) under a linear constraint:
ŵ = argmin_ wℋ( w) s.t. y = X w,
where ŵ∈ℝ^N is an estimator for the true signal w_0∈ℝ^N, X∈ℝ^M N is a design matrix, and y∈ℝ^M is observations. In the noiseless case, y is obtained by the product of the design matrix X and the true signal w_0 as
y = X w_0,
where the fraction of non-zero elements in w_0 is denoted by ρ.
The mean-squared error (MSE), which is often used to determine the accuracy of the estimated signal, is expressed using the ℓ_2 norm as
MSE = 1/N|ŵ- w_0|_2^2.
In Bayesian statistics, the optimization problem is interpreted as a Maximum a Posteriori (MAP) estimation with the likelihood δ( y- X w) and a prior distribution p( w). Clearly, the optimization problem and the MAP estimation are connected by
ℋ( w) = -ln p( w).
In this study, the horseshoe prior is taken as the prior distribution, where there are two different parameters, a local and a global shrinkage parameter. In our model, the local shrinkage parameters {λ_i|i=1,..., N} are determined by a MAP estimation, and the global shrinkage parameter τ is given as a hyperparameter. The detail of the horseshoe prior is introduced later.
For X_ij∈{1/√(N),-1/√(N)}, Approximate Message Passing (AMP) is known to simplify BP in the large system size limit N→∞ with α≡ M/N held constant, and is given by the following set of iterative equations:
h^t_i = α w^t_i+∑_b X_biz^t_b,
w^t+1_i = η(h^t_i;χ_t),
z^t+1_a = y_a-∑_jX_ajw^t+1_j+z^t_a1/N∑_iη'(h^t_i;χ_t),
χ_t+1 = χ_t1/N∑_iη'(h^t_i;χ_t).
The threshold function η(h;τ) in Eqs. (<ref>) is defined for the horseshoe prior as
η_HS(h;χ,α,τ) = χ^-1h/χ^-1α+τ^-2(λ^*)^-2,
where
λ^* = argmax_λ[χ^-2h^2/2(χ^-1α+τ^-2λ^-2)-ln(1+λ^2)].
When there is no risk of confusion, we abbreviate η(h;χ,α,τ) as η(h;χ). Fig. <ref> shows η_HS for some parameters τ^2α/χ.
When |h| is small enough, η_HS returns 0, which means the elimination of small coefficients. For τ^2α/χ≥0.5, η_HS is a continuous function, and η_HS=0 when τχ^-1|h| < √(2). For large |h|, the functions asymptotically reach a straight line with y=x, meaning no shrinkage for large inputs. Meanwhile, when τ^2α/χ<0.5, η is a discontinuous function, thus AMP is not valid there.
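To make the iteration concrete, the sketch below (our own illustration, not the authors' implementation) transcribes Eqs. (<ref>) with the horseshoe threshold η_HS; the problem sizes, the grid search used for λ^*, the finite-difference estimate of η', and the initialization are crude choices of ours, and in practice some damping of the updates (discussed later in the paper) may be required for stable convergence.

import numpy as np

rng = np.random.default_rng(0)
N, alpha, rho, tau = 400, 0.6, 0.1, 1.0
M = int(alpha * N)

# Gauss-Bernoulli signal and +-1/sqrt(N) design matrix, y = X w0 (noiseless)
w0 = rng.standard_normal(N) * (rng.random(N) < rho)
X = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(N)
y = X @ w0

lam_grid = np.logspace(-6, 3, 200)   # crude grid for the maximization over lambda

def eta_hs(h, chi):
    # lambda* maximizes chi^{-2} h^2 / (2 (alpha/chi + tau^{-2} lambda^{-2})) - log(1 + lambda^2)
    denom = alpha / chi + 1.0 / (tau ** 2 * lam_grid ** 2)
    obj = (h[:, None] ** 2 / chi ** 2) / (2.0 * denom) - np.log1p(lam_grid ** 2)
    lam_star = lam_grid[np.argmax(obj, axis=1)]
    return (h / chi) / (alpha / chi + 1.0 / (tau ** 2 * lam_star ** 2))

def eta_hs_prime(h, chi, eps=1e-4):
    # crude finite-difference estimate of eta'(h; chi)
    return (eta_hs(h + eps, chi) - eta_hs(h - eps, chi)) / (2.0 * eps)

w, z, chi = np.zeros(N), y.copy(), 0.1
for t in range(30):
    h = alpha * w + X.T @ z          # uses the current w and z
    w = eta_hs(h, chi)               # w^{t+1}
    onsager = np.mean(eta_hs_prime(h, chi))
    z = y - X @ w + z * onsager      # z^{t+1} with the Onsager correction
    chi = chi * onsager
    if t % 5 == 0:
        print(t, "MSE =", np.mean((w - w0) ** 2))
    if chi < 1e-12:
        break
print("final MSE =", np.mean((w - w0) ** 2))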
§.§ State Evolution
The AMP algorithm, shown in Eq. (<ref>), refines the estimation of signals by iterative calculations, starting from an appropriate initial condition. It is known that the estimation performance of AMP is described by the deterministic evolution of a few parameters, called state evolution (SE)<cit.>. To be more specific, the state evolution is an iterative equation with “time" t for the MSE, denoted σ^2_t, and the variance parameter χ_t of the estimation of AMP, given by
σ^2_t+1 = 𝔼_w_0,ξ[(η(α w_0+√(ασ^2_t)ξ;χ_t)-w_0)^2],
χ_t+1 = χ_t𝔼_w_0,ξ[η'(α w_0+√(ασ^2_t)ξ;χ_t)],
where ξ∼𝒩(0,1) and w_0 follows the distribution of the true signal.
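The two expectations can be estimated by plain Monte Carlo sampling of (w_0,ξ); the sketch below (ours) iterates the equations in this way, re-implementing the threshold helper of the previous sketch so that it runs on its own; the sample size, grid and initial values are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)
alpha, rho, tau, n_mc = 0.6, 0.1, 1.0, 20_000
lam_grid = np.logspace(-6, 3, 200)

def eta_hs(h, chi):
    denom = alpha / chi + 1.0 / (tau ** 2 * lam_grid ** 2)
    obj = (h[:, None] ** 2 / chi ** 2) / (2.0 * denom) - np.log1p(lam_grid ** 2)
    lam_star = lam_grid[np.argmax(obj, axis=1)]
    return (h / chi) / (alpha / chi + 1.0 / (tau ** 2 * lam_star ** 2))

# samples of the true signal and of the effective Gaussian noise
w0 = rng.standard_normal(n_mc) * (rng.random(n_mc) < rho)
xi = rng.standard_normal(n_mc)

sigma2, chi = rho, 0.1   # uninformed start: MSE equal to rho and a small chi (our choice)
for t in range(30):
    h = alpha * w0 + np.sqrt(alpha * sigma2) * xi
    est = eta_hs(h, chi)
    eps = 1e-4
    deriv = (eta_hs(h + eps, chi) - eta_hs(h - eps, chi)) / (2.0 * eps)
    sigma2, chi = np.mean((est - w0) ** 2), chi * np.mean(deriv)
    print(t, "sigma2 =", sigma2, "chi =", chi)
    if chi < 1e-12:
        break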
Fig. <ref> illustrates the time evolutions of AMP for a randomly generated instance and those of SE, which is the macroscopically averaged result. The time evolutions of the macroscopic quantities such as MSE and χ are in good agreement for SE and AMP. This allows us to discuss the nature of the convergent solutions of AMP by analyzing the fixed points of SE. In the statistical physics of random systems, the counterpart of SE is the saddle point equation for the RS free energy discussed below.
§.§ Formulation in Statistical Physics
From the viewpoint of Bayesian statistics, compressed sensing is defined as the MAP estimation of the parameters w and the local shrinkage parameter λ∈ℝ^N from the posterior distribution
p( w,λ| y, X) = 1/Z(β| y, X)δ( y - X w)e^-βℋ( w,λ),
where ℋ( w,λ) satisfies
ℋ( w,λ) ≡ -ln∏^N_i=1π(λ_i)𝒩(w_i|0,τ^2λ_i^2).
When the prior distribution for the local shrinkage parameter is the half-Cauchy prior given by π(λ)=2/π1/1+λ^2, the prior distribution for w given by marginalizing λ is called the horseshoe prior. Here, it is noted that {λ_i}, treated as hidden variables, are determined by the MAP estimation as β→∞. Free entropy ϕ(β) is defined as an average of the logarithm of the normalization constant Z(β|y,X), which is known as the partition function in physics, over the randomness of the true signal and the design matrix, expressed as
ϕ(β) ≡1/N𝔼_ w_0,X[ln Z(β| y, X)].
In our study, the true signal w_0 and the design matrix X are generated from the Gaussian-Bernoulli distribution as
w_0,i∼ρ𝒩(w_0,i|0,1) + (1-ρ)δ(w_0,i),
and the Gaussian distribution as
X_ij∼𝒩(X_ij|0,1/N).
Free energy is then defined as
f(β) = -1/βϕ(β),
and the limit of β→∞ is our interest. Using the replica method as in the spin glass theory<cit.>, the average of the logarithm of the partition function is evaluated with the identity
[ln Z(β| y, X)] = lim_n→ 0∂/∂ n[Z^n(β| y, X)],
where the right-hand side is explicitly calculated for n∈ℕ^+, taking the limit of n→0 as the analytic continuation.
§ MACROSCOPIC DYNAMICS OF APPROXIMATE MESSAGE PASSING
§.§ Saddle point equations for the replica symmetric free energy
Assuming the replica symmetry condition, we obtain the RS free energy as
-1/βϕ(β) = -max_q,χ,m extr_Q̂,χ̂,m̂[ ψ̂(Q̂,χ̂,m̂) + αψ(q,χ,m)
+ 1/2(Q̂ q- χ̂χ) - m̂ m],
where
ψ̂= 𝔼_w_0,ξ[max_λ(m̂ w_0 + √(χ̂)ξ)^2/2(Q̂ + τ^-2λ^-2) + lnπ(λ)],
and
ψ = -ρ-2m+q/2χ.
The extremum condition for the RS free energy yields the saddle-point equations:
q = 𝔼_w_0,ξ[⟨(w_0m̂+√(χ̂)ξ/Q̂+τ^-2λ^-2)^2⟩],
m = 𝔼_w_0,ξ[⟨w_0^2m̂+w_0√(χ̂)ξ/Q̂+τ^-2λ^-2⟩],
χ = 𝔼_w_0,ξ[⟨ξ^2+w_0m̂/√(χ̂)ξ/Q̂+τ^-2λ^-2⟩],
Q̂ = α1/χ,
m̂ = α1/χ,
χ̂ = αρ-2m+q/χ^2,
where ⟨⋯⟩ denotes the thermal average defined as
⟨ f(λ)⟩ = f(λ^*),
with
λ^* = argmax_λ[(m̂ w_0 + √(χ̂)ξ)^2/2(Q̂ + τ^-2λ^-2) + lnπ(λ)].
Details of the derivation of the RS free energy are given in Appendix. <ref>.
Under the RS assumption, the MSE averaged with respect to the random variables is then expressed as
MSE = ρ -2m +q.
Furthermore, the saddle point equations for the RS free energy are shown under a plausible assumption to be equivalent to the SE mentioned above. See Appendix <ref> for details.
§.§ Linear Stability
The stability of the true signal w=w_0 with respect to perturbations can be discussed using the linear stability of the solution of the saddle point equations characterized by q=m=ρ and χ→0. By examining this limit, we obtain the following linear stability conditions:
α > ρ+2(1-ρ)Φ(√(1/Δ)),
where Φ is the cumulative distribution function of the normal Gaussian distribution and Δ is the solution of the following equation:
αΔ = ρ(Δ+[u(λ^*(w_0))/u(0)]_w_0∼𝒩(0,1))
+2(1-ρ)((1+Δ)Φ(√(1/Δ))-√(Δ/2π)exp(-1/2Δ))
for
u(λ) ≡ -(lnπ(λ))'/λ,
and λ^* satisfies
u(λ^*) = w_0^2/(τ^2(λ^*)^4).
The derivation of this condition is provided in Appendix. <ref>. The smallest α for which the true signal solution is stable under a given ρ is of practical importance. We define α_ls as the smallest α that satisfies Eq. (<ref>) and will discuss it later.
§.§ Convergent Solution
For successful recovery, the true solution must be stable, but this is not a sufficient condition. By examining the iterative dynamics and convergent solutions, one can identify regions where the recovery by the AMP algorithm can succeed. In particular, since the macroscopic time evolution of AMP is equivalent to an iterative substitution of the saddle point equations, the convergence condition to the iterative solution satisfying MSE=0 corresponds to the successful recovery region of the true signal.
The iterative solution is obtained by repeating in the forward iterations of Eq. (<ref>), which are saddle point equations of the order parameters, given by
q_t = 𝔼_w_0,ξ[⟨(m̂_tw_0+√(χ̂_t)ξ/Q̂_t+τ^-2λ^-2)^2⟩],
m_t = 𝔼_w_0,ξ[⟨m̂_t w_0^2+w_0√(χ̂_t)ξ/Q̂_t+τ^-2λ^-2⟩],
χ_t = 𝔼_w_0,ξ[⟨ξ^2+m̂_t w_0/√(χ̂_t)ξ/Q̂_t+τ^-2λ^-2⟩],
Q̂_t+1 = α1/χ_t,
m̂_t+1 = α1/χ_t,
χ̂_t+1 = αρ-2m_t+q_t/χ_t^2.
It should be noted that this time evolution is equivalent to SE under a suitable condition, as shown in Appendix <ref>, which allows us to evaluate the performance of AMP.
The following two types of initial conditions are considered in applying AMP:
* non-informative: q_0 = m_0 = 0 and χ_0=10^-1,
* informative: q_0 = m_0 = ρ - 10^-6 and χ_0 = 10^-3.
The non-informative initial condition is MSE=ρ, which may correspond to the initial state of AMP being the zero vector, while the informative condition is MSE≃ 0, which corresponds to the true vector.
A phase diagram of compressed sensing using the AMP algorithm can be obtained by numerically solving the saddle point equations. As an example,
Fig. <ref> shows the phase diagram on the τ-α plane at ρ=0.1. It can be seen that there are three distinct phases in common: Easy phase, Impossible phase, and Hard phase. In the Easy phase, the iterative solution converges to the true solution with MSE=0, indicating that the AMP algorithms successfully recover the true signal. In the Impossible phase, the true signal is unstable, and then any algorithm which searches for a solution with the free-energy minimum fails to recover the true signal. In the Hard phase, the true signal is stable as in the Easy phase, but the AMP algorithms with an initial state of no information about the true signal converge to a solution with MSE≠0. For convergence to the true signal, an initial state close to the true signal is required. This situation is similar to that of the Bayes optimal case<cit.>. The minimal value of α in the Easy phase is denoted by α_c(τ), which is the limit where the true signal can be recovered from the non-informative initial condition for a fixed τ.
Fig. <ref> shows α dependence of the MSE of the convergent solution of the saddle point equations starting from the two initial conditions, the informative and non-informative conditions. At τ=0.4, as α increases, the phase transition from the Impossible phase to the Easy phase occurs, which is a second-order transition. There is no dependence of converged MSE on the initial state at any α. The second-order phase transition is supported by the finite-size simulations and their finite-size scaling analysis presented in Appendix. <ref>. On the other hand, at τ=0.3, as α increases, the phase transition from the Impossible phase to the Hard phase and then from the Hard phase to the Easy phase occurs, where the MSE behaves like a first-order transition, showing strong initial condition dependence. In the Hard phase, the converged MSE depends on the initial state and stays at MSE=0 under the informative condition as long as the linear stability is satisfied, while it is trapped at a local minimum solution with MSE≠0 under the non-informative condition. The details of this picture are discussed in Sec.<ref>.
Fig. <ref> is the phase diagram of successful and unsuccessful signal recovery in the ρ-α plane. The phase boundaries in the phase diagram differ depending on the estimation method used, such as the l_1-norm regularization or the horseshoe prior. The phase boundary by the horseshoe prior with the global shrinkage parameter τ=1 was obtained in our previous study<cit.>.
The achievable boundary α_c(τ_opt) for the AMP algorithm when τ is chosen optimally is obtained by analyzing the iterative solutions of the saddle point equations under the non-informative condition, while the optimal value of τ obtained numerically, as shown in Fig. <ref>, significantly depends on ρ.
The boundary is well below that of the l_1-norm regularization for any ρ, which means that the signal can be recovered from fewer observations using the horseshoe prior.
§ FREE ENERGY LANDSCAPE
Analysis of the saddle point equations of RS free energy indicates the existence of the Hard phase in the phase diagram of the AMP algorithm for the compressed sensing with the horseshoe prior. In Sec. <ref>, it was found that the AMP algorithm with the non-informative initial condition fails to recover the true signal, although the solution of MSE=0 is stable in the Hard phase. There, the definition of the phases depends on the initial state of the iterative algorithm and is thus based on a dynamical perspective. In this section, we provide the phase diagram from a static viewpoint characterized by a free-energy landscape, using a definition of the phases by a free-energy landscape that does not have the arbitrariness such as the initial states of the dynamics. Because the AMP algorithm converges to the extremum of RS free energy, the landscape of the RS free energy is closely related to the recoverability limit of the AMP algorithm.
§.§ RS Free Energy as a function of χ and χ̂
According to the saddle-point equations of Eq. (<ref>), which determine the RS free energy, we see that Q̂ and m̂ are determined only by χ, and then q and m by χ and χ̂. Therefore, the RS free energy can be given formally only by χ and χ̂, taking extreme values of the other variables.
As a result, the free-energy landscape as a function of χ and χ̂ is expressed as
1/βϕ(χ,χ̂) = extr_q,m,Q̂,m̂[ ψ̂(Q̂,χ̂,m̂) + αψ(q,χ,m)
+ 1/2(Q̂ q- χ̂χ) - m̂ m].
Because the RS free energy has a minimum with respect to χ and extremum with respect to χ̂, the number and location of saddle points in the free energy landscape are important to understand the behavior of the algorithm.
Fig. <ref> shows the free-energy landscape at ρ=0.1 and τ=0.4 as an example of the phase transition from the Impossible phase to the Easy phase with α changing. In the figure, the lines represent the iterative dynamics of the iterative equations of Eq. (<ref>) for the two initial conditions, the informative and non-informative conditions, respectively. In the Impossible phase, both initial conditions converge to the saddle point where χ≠ 0, while in the Easy phase, both converge to a saddle point with χ→ 0 and χ̂<∞, corresponding to the successful recovery. These are characteristic of the two phases.
As another example of the phase transition from the Hard to Easy phase, Fig. <ref> shows the free-energy landscape at ρ=0.1 and τ=0.3. The Hard phase has a local maximum in the free-energy landscape at (χ,χ̂)≃ (0.04,7), which prevents the convergence of iterations from the non-informative condition to the successful recovery solution, while iterations from the informative condition converge stably to the successful solution. In the Easy phase, the local maximum still exists, but it is far from the two initial conditions and does not obstruct the convergence to the successful solution. This case implies that the MSE and χ converge discontinuously to zero with increasing α.
The above observations strongly suggest that the convergence solution is determined by the free energy on the curve χ̂_opt(χ) in the free-energy landscape, defined by
f(χ) = f(χ,χ̂_opt(χ)),
with
χ̂_opt(χ) = arg extr_χ̂f(χ,χ̂).
Fig. <ref> shows the free-energy landscape on χ̂_opt at ρ=0.1, α=0.23, and τ=0.3, which is in the Hard phase. The non-monotonic landscape implies that the local search minimization of the free energy with respect to χ fails to converge to χ→ 0, meaning that the AMP algorithm cannot recover the true signal although χ=0 is a stable solution.
Fig. <ref> shows the χ dependence of the free energy f(χ) in the three phases defined from the convergence conditions of the AMP algorithm in Sec. <ref>. One can see a characteristic behavior in the χ dependence of the free energy. From this, we redefine the phases by this static characterization as follows:
* Easy phase: f(χ) is a monotonically increasing function of χ
* Hard phase: f(χ) is not a monotonic function of χ and f'(χ)|_χ→0≥0
* Hard-possible phase: f(0) is a global minimum
* Hard-impossible phase: f(0) is not a global minimum
* Impossible phase: f(χ) is not a monotonic function of χ and f'(χ)|_χ→0<0
The Hard phase has been divided into two phases by the property of the global minimum of the free energy f(χ). If a global search could be performed, it would be possible to recover the true signal even in the Hard-possible phase.
Since the monotonicity of the free-energy landscape along χ̂_opt(χ) is a static characterization, the definition of these phases involves no arbitrariness such as the choice of initial conditions for the time evolution of the algorithms. However, the correspondence between the monotonicity of the free energy and the phases is highly non-trivial, since the iterative updates of the algorithm are not necessarily local searches.
Therefore, we compare the static phase boundaries based on the monotonicity of the free-energy landscape with the dynamic phase boundaries characterized by the convergence value of the AMP algorithm. As shown in Fig. <ref>, it demonstrates that the two boundaries coincide with each other over a wide range, suggesting that in the Hard phase, the AMP algorithm does indeed converge to a local minimum of the free energy.
The correspondence of phase boundaries always holds for 0<ρ<1.
§.§ Path of damped dynamics
In the previous subsection, the static characterization of the phases is given by the monotonicity of the free-energy landscape along the optimal curve χ̂_opt, where the free-energy landscape is optimized first with respect to χ̂, and then with respect to χ. On the other hand, if χ is first and χ̂ second, another optimal curve χ_opt(χ̂) is obtained as
χ_opt(χ̂) = argmin_χf(χ,χ̂).
We discuss the meaning of this curve in relation to the dynamics of SE. The iterative update of χ with a damping term in SE is given by
χ_t = r_χχ_t-1+(1-r_χ)χ(Q̂_t,χ̂_t,m̂_t),
where r_χ is a parameter representing the strength of the damping. When updating χ with r_χ>0, the extremum condition with respect to χ̂ is not satisfied except at fixed points, while the extremum condition for χ is satisfied when updating χ̂.
In the limit of the damping parameter r_χ→1, χ_t moves very slowly, and so does χ̂. Note that only the extremum condition for χ is satisfied by the update of χ̂, indicating that SE with a heavy damping parameter moves along χ_opt.
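Schematically, the damped iteration has the following structure (a toy sketch of ours, with placeholder maps F and G standing in for the actual state-evolution updates of χ and χ̂; only the damping pattern is the point).

def damped_iteration(F, G, chi0, chi_hat0, r_chi=0.99, n_steps=2000):
    chi, chi_hat = chi0, chi_hat0
    for _ in range(n_steps):
        chi_hat = G(chi)                                   # conjugate update, no damping
        chi = r_chi * chi + (1.0 - r_chi) * F(chi_hat)     # heavily damped update of chi
    return chi, chi_hat

# toy maps, for illustration only (not the actual state-evolution functions)
chi, chi_hat = damped_iteration(F=lambda ch: 1.0 / (1.0 + ch), G=lambda c: 2.0 * c,
                                chi0=0.1, chi_hat0=0.0)
print(chi, chi_hat)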
Fig. <ref> shows trajectories of the SE dynamics with some damping parameters and the two optimal curves, χ_opt and χ̂_opt on the free-energy landscape with α=0.23 and ρ=0.1, which is in the Hard phase. It can be seen that the iterative dynamics converges to the local minimum of χ̂_opt for r_χ=0, and behaves in a complicated manner when r_χ is finite and not very large. However, when r_χ=0.99, which is heavy damping, the trajectory of the iterative dynamics moves very slowly along χ_opt and eventually reaches χ→ 0. This means that even in the Hard phase where there exists a local minimum on χ̂_opt, the dynamics with damping parameters moving along the path χ_opt may successfully estimate the true signal. In this case, a necessary condition for success is that the optimal curve χ_opt reaches zero.
Here, the additional phase is defined as the region where χ_opt reaches zero and is called the Damping phase, as distinguished from the Hard-possible and Hard-impossible phases. Fig. <ref> presents the phase boundary of the Easy phase and of the Damping phase, showing that the reconstruction limit can be extended by appropriately choosing τ and by using heavy damping. We also plot the recoverable boundaries of the SE dynamics with heavy damping parameters, which is in good agreement with the Damping phase boundary. It should be noted that the Damping phase is not characterized by the saddle point property of free energy and is therefore different in nature from the phase of an equilibrium system in the physics sense. As seen in Fig. <ref>, it is interesting to notice that the reconstruction limit line with the heavy damping is significantly below the phase boundary between the Hard-possible and Hard-impossible phases, indicating that the damping has better estimation performance than the global search.
§ SUMMARY AND DISCUSSION
One of the central issues in the context of the method of the horseshoe prior distribution has been to uncover the role of the global shrinkage parameter in estimation accuracy. In this paper, we defined compressed sensing using a horseshoe prior and revealed the effect of the global-shrinkage parameter τ on the phase diagram through statistical physics analysis. According to the phase diagram, the local stability of the true signal improves with decreasing τ in terms of the linear stability boundary. However, for τ < τ_opt, there exists the Hard phase, in which the algorithm converges to a fixed point other than the true signal even though the true signal solution is linearly stable. This makes the dependence of the algorithmic phase transition point α_c(ρ,τ) on τ nonmonotonic. Furthermore, such dynamics of the algorithm is found to be explained by whether or not there is a local fixed point in the free-energy landscape. Then, the phase diagram obtained from the dynamics is successfully characterized statically by the free energy landscape. This structure is similar to the picture of the Bayes optimal case.
There remains much work to be conducted on the dependence of optimization problems on the solving algorithm. Our analysis has demonstrated that there exists a region in the Hard phase where the true signal can be efficiently recovered by adding a damping term to the AMP algorithm, which slightly broadens the algorithmic limit for the recovery of the sparse signal. In particular, we emphasize the importance of the path followed by the dynamics, in addition to the free-energy saddle point, in comprehending the performance of the algorithmic reconstruction from a macroscopic viewpoint, although this has not previously received much attention. To confirm the existence of the path to the true solution that avoids local solutions, it is necessary to examine a free-energy landscape expressed in at least two variables, suggesting that the conventional discussion of a free-energy landscape with only the mean squared error as a variable is insufficient. It is also confirmed numerically that the free-energy landscape can explain the behavior of the standard AMP algorithm without damping on the optimal curve χ̂_opt, although this is not theoretically obvious due to the discontinuous updates. While it is useful to understand optimization problems statically using the free-energy landscape, it should be well noted that the dynamics of the algorithm is characterized not only by the number of saddle points in the free-energy landscape but also by a more global structure.
Finally, reconstruction limits for several existing algorithms are shown in Fig. <ref>. Except for the ℓ_1-norm regularization corresponding to the Laplace prior, and the Bayesian optimal case where the generating distribution and the prior coincide, the promising methods exhibit nearly equivalent recovery performance. As with other methods such as SCAD, the horseshoe prior method is efficient in the sense that its performance is similar to the Bayes optimal reconstruction even without assuming prior knowledge of the true signal. One of the main results of this study is to clarify the efficiency of tuning the global shrinkage parameter. This suggests that compressed sensing utilizing the horseshoe prior is one of the most promising methods, although the relative superiority of the methods is expected to depend on the specific generative distribution.
We would like to thank T. Obuchi and A. Sakata for the fruitful discussions and for providing numerical data for Fig. <ref>.
This work was supported by MEXT as the Program for Promoting Research on the Supercomputer Fugaku (DPMSD, Project ID: JPMXP1020200307). One of the authors, YN, was supported by the SPRING-GX program at the University of Tokyo.
§ DERIVATION OF REPLICA SYMMETRIC FREE ENERGY
Here, we explain the derivation of the replica symmetric (RS) free energy in Eq. (<ref>), which is almost the same as in our previous paper<cit.>.
According to Eq. (<ref>), the free energy is obtained by the replicated partition function [Z^n] with the true signal w_0, which is given by
[Z^n] = [∫(∏^n_a=1d w_aδ( X( w_0- w_a))dλ_aπ(λ_a))e^-β∑^n_a=1ℋ( w_a,λ_a)].
Since X_i,j is a Gaussian variable following the i.i.d. Gaussian distribution 𝒩(0,N^-1), y^i_a≡ x_i^⊤ w_a can also be treated as a Gaussian variable. Order parameters m_a and q_ab are defined as elements of the variance-covariance matrix for y_a as
𝔼_ x[y_0y_0] = 1/N w_0^⊤ w_0 = ρ,
𝔼_ x[y_0y_a] = 1/N w_0^⊤ w_a≡ m_a,
𝔼_ x[y_ay_b] = 1/N w_a^⊤ w_b≡ q_ab.
Inserting these order parameters as constraints with Lagrange multipliers for [Z^n], we obtain
[Z^n] = extr_{q̃_ab,m̃_a}(∫_ℝ^n+1d y𝒩( y| 0,Σ)∏^n_a=1δ(y_0-y_a))^N
·∫∏^n_a=1w_aexp(-β∑^N_i=1∑^n_a=1ℋ(w_a,i))
·exp(∑_a≤ bq̃_ab(Nq_ab-∑^N_i=1w_a,iw_b,i))
·[exp(∑_am̃_a(Nm_a-∑^N_i=1w_0,iw_a,i))]_ w_0,
where
Σ = (ρ m^⊤
m Q)∈ℝ^(n+1)×(n+1),( Q)_ab = q_ab, ( m)_a = m_a.
Under the replica symmetric assumption, the order parameters and corresponding conjugate parameters have no dependence on the replica index as
q_aa = q+χ/β,
q_ab = q,
m_a = m,
and
q̃_aa = βQ̂ -β^2χ̂,
q̃_ab = -β^2χ̂,
m̃_a = -βm̂.
By substituting Eq. (<ref>) and (<ref>) into (<ref>) and (<ref>), using Hubbard-Stratonovich transformation
e^β^2χ̂(∑_aw_a)^2/2 = ∫ Dξ e^β√(χ̂)∑_aw_aξ,
and taking the limits N→∞, n→0 and β→∞,
the RS free energy is obtained as Eq. (<ref>)
§ EQUIVALENCE BETWEEN SADDLE POINT EQUATIONS AND STATE EVOLUTION
We show here that the iteration equations for the saddle point equations for the RS free energy and the state evolution (SE) are equivalent.
The iteration equations for solving the saddle point equations for the RS free energy are given by
Q̂_t+1 = α1/χ_t,
m̂_t+1 = α1/χ_t,
χ̂_t+1 = αρ-2m_t+q_t/χ_t^2,
q_t =𝔼_w_0,ξ[⟨(w_0m̂_t+√(χ̂_t)ξ/Q̂_t+τ^-2λ^-2)^2⟩],
m_t = 𝔼_w_0,ξ[⟨w_0^2m̂_t+w_0√(χ̂_t)ξ/Q̂_t+τ^-2λ^-2⟩],
χ_t = 𝔼_w_0,ξ[⟨ξ^2+w_0m̂_t/√(χ̂_t)ξ/Q̂_t+τ^-2λ^-2⟩].
From these equations, the iterative dynamics of the mean squared error σ^2 and the variance χ are derived in the following. First, substituting q_t and m_t into the mean squared error σ^2_t≡ρ-2m_t+q_t, we have the iteration equation for σ_t as
σ^2_t = 𝔼_w_0,ξ[⟨w_0m̂_t+√(χ̂_t)ξ/Q̂_t+τ^-2λ^-2 - w_0⟩^2],
=𝔼_w_0,ξ[⟨α w_0+√(ασ_t-1^2)ξ/α+χ_t-1τ^-2λ^-2 - w_0⟩^2],
=𝔼_w_0,ξ[(η(h_t-1;χ_t-1,α,τ) - w_0)^2],
where η is the threshold function and h_tα w_0 + √(ασ^2_t)ξ. Similarly, the iteration equation for χ_t is obtained as
χ_t = 𝔼_w_0,ξ[⟨ξ^2+w_0m̂_t/√(χ̂_t)ξ/Q̂_t+τ^-2λ^-2⟩],
=𝔼_w_0,ξ[ξ/√(χ̂_t)⟨m̂_t w_0 + √(χ̂_t)ξ/Q̂_t+τ^-2λ^-2⟩],
=𝔼_w_0,ξ[1/√(χ̂_t)∂/∂ξ⟨m̂_t w_0 + √(χ̂_t)ξ/Q̂_t+τ^-2λ^-2⟩],
=𝔼_w_0,ξ[√(χ_t-1^2/ασ_t-1^2)∂/∂ξ⟨α w_0 + √(ασ_t-1^2)ξ/α+χ_t-1τ^-2λ^-2⟩],
=χ_t-1𝔼_w_0,ξ[η'(h_t-1;χ_t-1,α,τ)].
Eqs. (<ref>) and (<ref>) are the deterministic dynamics of SE.
From the second to the third line in Eq. (<ref>), an integration by parts with respect to ξ is performed. When η is a discontinuous function, this integration by parts is invalid. Therefore, the saddle-point equations are equivalent to SE when τ^2α/χ>0.5, which is the continuity condition for η.
Note that some differences in the coefficients from the original paper<cit.> are due to the normalization of the design matrix.
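To make the iteration scheme concrete, the following minimal Python sketch iterates the recursions for σ_t^2 and χ_t above using Monte Carlo estimates of the expectations. The horseshoe threshold function η and its derivative are not reproduced here; a soft-thresholding rule with an assumed threshold level serves as a hypothetical stand-in, so the output illustrates only the structure of the iteration, not the horseshoe results.

import numpy as np

rng = np.random.default_rng(0)

def eta(h, chi, alpha, tau):
    # Hypothetical stand-in for the horseshoe threshold function eta(h; chi, alpha, tau):
    # soft thresholding of h/alpha at an assumed, chi- and tau-dependent level.
    theta = chi / (alpha * tau**2)
    x = h / alpha
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def eta_prime(h, chi, alpha, tau):
    # Derivative of the stand-in threshold function with respect to h.
    theta = chi / (alpha * tau**2)
    return (np.abs(h / alpha) > theta) / alpha

def state_evolution(alpha, rho, tau, t_max=50, n_mc=200000, tol=1e-10):
    # Iterate sigma_t^2 and chi_t as in the SE recursions, with Monte Carlo expectations.
    w0 = rng.standard_normal(n_mc) * (rng.random(n_mc) < rho)  # Bernoulli-Gaussian signal
    xi = rng.standard_normal(n_mc)
    sigma2, chi = rho, 1.0  # assumed initial condition
    for _ in range(t_max):
        h = alpha * w0 + np.sqrt(alpha * sigma2) * xi
        sigma2_new = np.mean((eta(h, chi, alpha, tau) - w0) ** 2)
        chi_new = chi * np.mean(eta_prime(h, chi, alpha, tau))
        if abs(sigma2_new - sigma2) < tol:
            break
        sigma2, chi = sigma2_new, chi_new
    return sigma2, chi

print(state_evolution(alpha=0.4, rho=0.1, tau=0.4))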
§ LINEAR STABILITY OF TRUE SIGNAL
In the statistical-mechanics formulation, recovery of the true signal, i.e., ŵ→ w_0, implies χ→0, in which case q→ρ, m→ρ and Q̂→∞ are satisfied. In this limit, only χ̂ takes a non-trivial value, satisfying the relation
χ̂ = α(ρ-2m+q)/χ^2
= α[⟨(m̂ w_0+√(χ̂)ξ)/(Q̂ + τ^-2λ^-2)-w_0⟩^2],
= 1/α(ρ(χ̂+[ u(λ^*(w_0))/τ^2]_w_0∼𝒩(0,1))
+ 2(1-ρ)[(√(χ̂)ξ-√(τ^-2u_0))^2]_ξ>√(u_0/(τ^2χ̂))).
Eq. (<ref>) was obtained from Eq. (<ref>) with the scaling Δ ≡ τ^2χ̂/u_0, and the linear stability condition is derived from the stability of Eq. (<ref>) with respect to the perturbation Δ^*→Δ^*+δΔ.
§ NUMERICAL EXPERIMENT: FINITE SIZE SCALING FOR THE SECOND-ORDER TRANSITION FROM THE IMPOSSIBLE TO THE EASY PHASE.
In this appendix, we present the numerical results with finite size N for the phase transition between the Impossible phase and the Easy phase.
Fig. <ref> shows the finite-size-scaling plots of the MSE and the recoverability, assuming α_c=0.2284, averaged over 10^3 instances solved by AMP at ρ=0.1 and τ=0.4, a near-optimal τ. When evaluating the recoverability, instances with MSE <10^-6 are regarded as successfully recovered. In our previous study<cit.>, the phase transition point of compressed sensing was evaluated as α_c≃ 0.27 at ρ=0.1 and τ=1.0 for the purely local shrinkage case. The theoretical analysis in this study revealed that a more efficient recovery is possible for global shrinkage τ<1. Indeed, Fig. <ref> indicates that the second-order transition from the Impossible phase to the Easy phase occurs at α_c≃0.2284, consistent with our theoretical analysis.
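As a hedged illustration of how such curves can be assembled from raw AMP runs, the snippet below computes the recoverability as the fraction of instances with MSE below 10^-6 and a candidate scaling variable; the data layout, the exponent ν, and the form of the scaling variable are assumptions for illustration and are not taken from the analysis above.

import numpy as np

def recoverability(mse_per_instance, threshold=1e-6):
    # Fraction of instances regarded as successfully recovered.
    return float(np.mean(np.asarray(mse_per_instance) < threshold))

def scaling_collapse(mse_runs, alpha_c=0.2284, nu=1.0):
    # mse_runs: hypothetical dict mapping (N, alpha) -> array of per-instance AMP MSEs.
    # Returns (scaling variable, recoverability) pairs; nu is a placeholder exponent.
    points = [((alpha - alpha_c) * N ** (1.0 / nu), recoverability(mse))
              for (N, alpha), mse in mse_runs.items()]
    return sorted(points)

# Toy example with made-up MSE arrays, for interface illustration only.
demo = {(1000, 0.20): np.full(10, 1e-2), (1000, 0.25): np.full(10, 1e-9)}
print(scaling_collapse(demo))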
|
http://arxiv.org/abs/2306.10296v1
|
20230617085142
|
OpenSBT: A Modular Framework for Search-based Testing of Automated Driving Systems
|
[
"Lev Sorokin",
"Tiziano Munaro",
"Damir Safin",
"Brian Hsuan-Cheng Liao",
"Adam Molin"
] |
cs.SE
|
[
"cs.SE"
] |
OpenSBT: A Modular Framework for Search-based Testing of Automated Driving Systems
Lev Sorokin, Tiziano Munaro, Damir Safin
fortiss, Research Institute of the Free State of Bavaria
Munich, Germany
{sorokin, munaro, safin}@fortiss.org
Brian Hsuan-Cheng Liao, Adam Molin
DENSO AUTOMOTIVE Deutschland GmbH
Eching, Germany
{h.liao, a.molin}@eu.denso.com
July 31, 2023
=======================================================================================================================================================================================================================================================================================
Search-based software testing (SBT) is an effective and efficient approach for testing automated driving systems (ADS). However, testing pipelines for ADS are particularly challenging to build, as they involve integrating complex driving simulation platforms and establishing communication protocols and APIs with the desired search algorithm.
This complexity hinders the wide adoption of SBT and thorough empirical comparisons across different simulators and search approaches.
We present OpenSBT, an open-source, modular and extensible framework to facilitate the
SBT of ADS.
With OpenSBT, simulators with an embedded system under test, search algorithms and fitness functions can be integrated for testing.
We describe the architecture and show the usage of our framework by applying different search algorithms for testing Automated Emergency Braking Systems in CARLA as well as in the high-fidelity Prescan simulator in collaboration with our industrial partner DENSO. OpenSBT is available at https://git.fortiss.org/opensbt.
Search-based software testing, metaheuristics, scenario-based testing, autonomous driving, automated driving
§ INTRODUCTION
Search-based software testing (SBT) is a promising approach for virtual system-level testing of Automated Driving Systems (ADS) since it is more effective and less time-consuming than doing on-road testing <cit.>.
A considerable amount of research has been conducted, for example on the reproducibility of testing results across different simulators <cit.>, the transferability of virtual testing to physical testing <cit.>, or the development of effective test case generation approaches <cit.>.
Applying SBT to ADS is challenging, as it requires several steps such as the integration of a simulation environment, the definition of a fitness function and of a search approach, as well as the analysis and visualization of the test outcome. Hence, implementing a testing pipeline from scratch is complex and time-intensive <cit.>.
Further, resulting implementations are often not accessible because of intellectual property concerns or cannot be easily extended to be used with other SUTs, simulators, fitness functions or testing scenarios <cit.>.
We present OpenSBT, a novel SBT framework for ADS which addresses the software engineering challenge of enabling a modular and flexible testing pipeline that can be applied in different use cases. OpenSBT offers the following functionalities:
1) it allows applying existing or user-defined search algorithms and fitness functions for testing ADS, 2) it provides an interface to integrate different SUTs and simulators without affecting other parts of the pipeline, and 3) it visualizes and analyses the test outcome. Furthermore, OpenSBT is open-source and available for both academic and commercial use.
We describe the architecture of OpenSBT and demonstrate its usage for testing different automated emergency braking systems (AEB) in both the open-source simulator CARLA <cit.> and the high-fidelity simulator Prescan <cit.>, with distinct search techniques specified by the user. Further, we report on the experience of our industrial partner DENSO in applying OpenSBT and outline a comprehensive validation study as part of our future work.
§ ARCHITECTURE
Our framework is based on the multi-objective optimization framework pymoo <cit.> and is implemented in Python. OpenSBT builds upon this framework given its modularity, extensibility and its support for numerous optimization methods.
In general, SBT applied to ADS requires
1) a SUT connected to a simulation environment,
2) a parameterized scenario <cit.>, which defines the behaviour of actors/environment and provides parameters with parameter ranges to modify this behaviour,
3) a fitness function to assess the quality of a test case simulation, and
4) a metaheuristic search algorithm.
In the following, we describe the interfaces OpenSBT provides, which allow users to integrate their search algorithm and to define its inputs based on their needs (Fig. <ref>).
Simulation/SUT. The interface to the simulator is represented by an abstract class 1, which provides a simulate method that must be implemented to execute scenario instances in a specific simulator. We denote as a scenario instance a scenario which specifies concrete values for each search parameter <cit.>.
Several scenario instances are passed to
simulate at once, which allows optimizing the execution time for population-based search approaches. The result of one scenario simulation is returned in the form of a simulation output instance, which holds, for each actor, state information at each simulation time step (e.g., location, orientation, velocity) as well as meta information such as the sampling rate. The class can be extended with further (environmental) attributes, such as the position of static objects or the colour of traffic lights.
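As a rough illustration of this interface, a custom integration could look like the following sketch. The class and field names are hypothetical and only mirror the description above rather than OpenSBT's actual API, and the simulator backend is faked so that the example stays self-contained.

from dataclasses import dataclass, field
from typing import Dict, List, Sequence

@dataclass
class SimulationOutput:
    # Hypothetical container mirroring the description above: per-actor state traces
    # over simulation time plus meta information such as the sampling rate.
    times: List[float]
    location: Dict[str, List[tuple]]
    velocity: Dict[str, List[float]]
    meta: Dict[str, object] = field(default_factory=dict)

class Simulator:
    # Abstract simulator interface (sketch of the described architecture).
    @staticmethod
    def simulate(scenario_instances: Sequence[Sequence[float]],
                 variable_names: Sequence[str],
                 scenario_path: str,
                 sim_time: float) -> List[SimulationOutput]:
        raise NotImplementedError

class DummySimulator(Simulator):
    # Stand-in backend that fakes a constant-velocity ego trace for each instance.
    @staticmethod
    def simulate(scenario_instances, variable_names, scenario_path, sim_time):
        outputs = []
        for values in scenario_instances:
            params = dict(zip(variable_names, values))
            dt = 0.1
            steps = int(sim_time / dt)
            times = [i * dt for i in range(steps)]
            ego_v = params.get("ego_velocity", 10.0)
            outputs.append(SimulationOutput(
                times=times,
                location={"ego": [(ego_v * t, 0.0) for t in times]},
                velocity={"ego": [ego_v] * steps},
                meta={"sampling_rate": dt, "scenario": scenario_path},
            ))
        return outputs

# Usage on one scenario instance with assumed variable names:
out = DummySimulator.simulate([[12.0, 3.0, 15.0]],
                              ["ego_velocity", "ped_velocity", "trigger_distance"],
                              "scenario.xosc", 2.0)
print(out[0].velocity["ego"][:3])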
Fitness. The fitness function assesses the quality of a test case and should guide the search towards critical test cases. In OpenSBT, the fitness function is represented by the interface 2.
The class provides an evaluation method, which receives a simulation output instance as input, in which the specific evaluation instructions are implemented.
Since optimal solutions output by a Pareto-based search algorithm are not necessarily failure-revealing, OpenSBT additionally offers an abstract criticality class. It allows filtering all critical test cases from the final solution set and can be used to guide the search <cit.>.
The fitness and criticality functions are domain- and scenario-specific; therefore, they have to be defined manually by the user. Several guidelines exist on how to define such functions <cit.>.
Algorithm. The search algorithm is represented by the abstract class 3, which provides an initialization method and a run method.
OpenSBT provides three options to specify the search algorithm: a) an existing pymoo optimization algorithm is instantiated in the initialization method, b) a new algorithm is implemented by subclassing pymoo's algorithm class and instantiating it in the initialization method, or
c) the run method is overridden and the search algorithm is implemented directly in run.
Scenario/Problem. The scenario is specified as part of a dedicated problem class, which is a subclass of pymoo's Problem class. It holds all information required to solve the underlying optimization problem, such as the path to the scenario, the search space defined by the upper/lower bounds of the search variables, and instantiations of the simulator, fitness and criticality classes.
Our framework supports scenarios represented in OpenSCENARIO v1.2.0 (OSC), a standard introduced by ASAM[https://www.asam.net/standards/detail/openscenario] and supported by many simulators. The simulator interface can be extended to support other formats.
Scenarios can be derived manually by domain experts using so-called mental models or automatically, e.g., through clustering real driving data <cit.>.
It is out of the scope of this framework to provide a complete list of scenarios for testing an ADS.
The search variables and their bounds are crucial for a successful search and can be chosen using expert knowledge.
Output.
After SBT has been completed, OpenSBT outputs all critical and non-critical test cases and the corresponding fitness values in the form of a result file and two-dimensional search-space plots, i.e., design space plots.
Additionally, conditions over the search variables are derived using the classification and regression trees (CART) algorithm <cit.>, which help to characterize the critical scenarios (e.g., ego velocity ≥ 20 m/s ∧ pedestrian velocity ≥ 5 m/s). This information can be used to specify the operational design domain (SAE J3016) or to debug the SUT <cit.>. To allow further debugging and test inspection,
the trajectories of all scenario actors of an executed test case are visualized in an animated 2D plot.
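To illustrate how such conditions can be derived, the hedged sketch below applies an off-the-shelf CART implementation to made-up evaluation results; the feature names and labels are assumptions, and OpenSBT's exact rule extraction is not reproduced.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical evaluated test cases: search-variable values and a critical/non-critical label.
X = np.array([[20.0, 5.5, 12.0],   # ego velocity, pedestrian velocity, trigger distance
              [25.0, 6.0,  8.0],
              [10.0, 1.0, 30.0],
              [12.0, 2.0, 25.0],
              [22.0, 5.0, 10.0],
              [ 9.0, 1.5, 28.0]])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = critical (collision), 0 = non-critical

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["ego_velocity", "ped_velocity", "trigger_distance"]))
# The printed split rules (e.g. "ego_velocity > 16.0") play the role of the derived
# conditions characterizing the critical region of the search space.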
§ USAGE
We describe the usage of OpenSBT in two different usage scenarios: In the first usage scenario (S1), we use CARLA and apply OpenSBT for testing an ADS with an AEB developed in the fortiss Mobility Lab.[https://www.fortiss.org/en/research/fortiss-labs/detail/mobility-lab] We use NSGAII <cit.> for generating the test inputs.
In the second usage scenario (S2), we use Prescan and test a Simulink-based AEB developed as part of the Automated Valet Parking use case of the FOCETA project <cit.> led by DENSO. Here, we use a different, surrogate-assisted search technique, NSGAII-DT <cit.>.
As testing scenario we consider an ego vehicle which drives on a straight road while an occluded pedestrian starts crossing its driving trajectory. We want to identify test cases where the AEB fails to avoid a collision with the pedestrian. Of course, more complex scenarios can be accommodated in our framework, with no impact on the usability of OpenSBT.
In the following, we explain how to configure the search to test the SUT in both usage scenarios.
A demo video is provided here.[https://youtu.be/oOyug8rwAB8]
Simulator Integration. To integrate the SUT and the specific simulator for S1, we create the class CarlaSimulator, which implements the simulate method of the abstract class Simulator to simulate test cases.
We package the execution logic for CARLA in a dedicated module[https://git.fortiss.org/opensbt/carla_runner] instead of implementing the logic in simulate, to avoid simulator-specific dependencies.
By deploying multiple CARLA servers, the CARLA interface is capable of running several scenario instances in parallel. Here, scenarios which need to be evaluated are stored in a thread-safe queue. This queue is processed by a pool of worker threads, each managing one of the available CARLA servers through their TCP/IP interfaces. Once all scenarios have been evaluated, a set containing the respective simulation outputs is returned.
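The queue-and-worker pattern described here can be sketched in a few lines of Python; the run_on_server callable stands in for the actual call to a CARLA server and is purely illustrative.

import queue
import threading

def run_parallel(scenario_instances, server_ports, run_on_server):
    # Distribute scenario instances over several simulator servers.
    # run_on_server(port, instance) is a placeholder for the call that executes one
    # scenario on the CARLA server listening on the given port and returns its output.
    tasks = queue.Queue()
    for idx, inst in enumerate(scenario_instances):
        tasks.put((idx, inst))
    results = [None] * len(scenario_instances)

    def worker(port):
        while True:
            try:
                idx, inst = tasks.get_nowait()
            except queue.Empty:
                return
            results[idx] = run_on_server(port, inst)
            tasks.task_done()

    threads = [threading.Thread(target=worker, args=(p,)) for p in server_ports]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Example with a fake runner that just echoes its input:
outputs = run_parallel([{"ego_velocity": v} for v in (5, 10, 15, 20)],
                       server_ports=[2000, 2002],
                       run_on_server=lambda port, inst: {"port": port, **inst})
print(outputs)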
Besides executing embedded CARLA agents, the CARLA Runner module provides generic and widely used ROS and FMI interfaces[https://fmi-standard.org/tools/].
Decoupling the SUT from the simulation environment via ROS and FMI further reduces the effort required to leverage SBT for systems which are provided with a ROS bridge, such as Baidu's Apollo.[https://github.com/ApolloAuto/apollo/]
For S2, we integrate Prescan by using the simulator class PrescanSimulator and the dedicated Prescan Runner component.[https://git.fortiss.org/opensbt/prescan_runner] The Prescan Runner connects to MATLAB and triggers the execution of the Simulink-based SUT in Prescan.
Fitness/Criticality Function.
We select two fitness functions that we use in both usage examples: 1) the minimal distance between the ego vehicle and the pedestrian, and 2) the velocity of the ego vehicle at the time when the minimal distance is reached. We subclass the fitness interface and implement its evaluation function to compute both fitness values.
Additionally, we specify that the minimal distance shall be minimized and the ego velocity maximized to guide the search towards critical test cases.
The criticality function is defined in a similar way as the fitness function: the method of the corresponding class returns true when the minimal distance is 0 (i.e., a collision occurs) and the ego velocity is greater than 0.
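A minimal sketch of these two fitness values and the criticality rule, operating directly on actor traces, is given below; the function names and the trace format are assumptions for illustration and do not reflect OpenSBT's interface.

import numpy as np

def fitness_min_distance_and_speed(ego_xy, ped_xy, ego_speed):
    # Return (minimal ego-pedestrian distance, ego velocity at that moment).
    # ego_xy, ped_xy: sequences of (x, y) per time step; ego_speed: sequence of floats.
    dists = np.linalg.norm(np.asarray(ego_xy) - np.asarray(ped_xy), axis=1)
    i_min = int(np.argmin(dists))
    return float(dists[i_min]), float(ego_speed[i_min])

def is_critical(min_dist, speed_at_min_dist):
    # Critical when the vehicle actually reaches the pedestrian while still moving;
    # in practice a small distance threshold may be used instead of exact zero.
    return min_dist == 0.0 and speed_at_min_dist > 0.0

# Toy trace: the ego closes in on a crossing pedestrian.
ego_xy = [(t * 2.0, 0.0) for t in range(10)]
ped_xy = [(10.0, 5.0 - t * 0.6) for t in range(10)]
ego_v = [8.0] * 10
print(fitness_min_distance_and_speed(ego_xy, ped_xy, ego_v))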
Search Algorithm. NSGAII, as used in S1, is already implemented in pymoo; therefore, we only need to implement the initialization method of the optimizer class, following option a) explained in Section <ref>.
For S2, we implement NSGAII-DT in the optimizer class following option c), as this search algorithm is not implemented in pymoo.
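For option a), the wrapper essentially instantiates the pymoo algorithm and delegates the search, roughly as in the following sketch. The wrapper class itself is hypothetical, while NSGA2 and minimize are part of pymoo's public API; the smoke test uses a built-in pymoo benchmark problem instead of a driving scenario.

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize
from pymoo.problems import get_problem

class NsgaIIOptimizer:
    # Hypothetical optimizer wrapper following option a).
    def __init__(self, problem, pop_size=50, n_gen=20):
        self.problem = problem
        self.algorithm = NSGA2(pop_size=pop_size)
        self.n_gen = n_gen

    def run(self):
        # Delegate the search to pymoo; the result object holds the Pareto-optimal test inputs.
        return minimize(self.problem, self.algorithm, ("n_gen", self.n_gen), verbose=False)

# Smoke test on a built-in multi-objective benchmark problem.
res = NsgaIIOptimizer(get_problem("bnh"), pop_size=20, n_gen=5).run()
print(res.X.shape, res.F.shape)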
Problem. In both usage scenarios, three input variables are involved in the search process: the velocity of the ego vehicle, the velocity of the pedestrian, and the distance to the ego vehicle at which the pedestrian starts walking. We set the parameter bounds in such a way that collisions can occur.
For S1, we provide an OSC file in which the behaviour of the pedestrian/environment is specified and the described variables are defined.
For S2, we pass a different, Prescan-specific file, as Prescan has limited support for OSC.
For both usage scenarios, we pass the search variables, the search bounds, and the scenario file to the problem class, as done for S1 in Fig. <ref>. We configure the simulator (line 10) and pass the fitness/criticality functions we defined before (lines 9 and 11).
We configure the search algorithm and set the search configuration (e.g., search time, population size) by instantiating the experiment class (for S1, see Fig. <ref>, lines 14-17).
The experiment definition for S2 is similar; the only difference is that NSGAII-DT is specified instead of NSGAII.
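Since the referenced figure is not reproduced in this text, the following purely hypothetical snippet indicates what such a problem and experiment definition could contain; every name and value is an assumption based on the description above and does not reflect OpenSBT's actual API.

# Hypothetical stand-in for the experiment definition referenced in Fig. <ref>.
problem_config = {
    "scenario_path": "scenarios/pedestrian_crossing.xosc",
    "simulation_variables": ["ego_velocity", "ped_velocity", "trigger_distance"],
    "lower_bounds": [5.0, 0.5, 5.0],    # bounds chosen so that collisions can occur
    "upper_bounds": [30.0, 8.0, 40.0],
    # The fitness, criticality and simulate callables from the earlier sketches
    # would be plugged in here.
}

experiment_config = {
    "name": "S1_AEB_CARLA",
    "algorithm": "NSGA-II",             # S2 would specify NSGAII-DT instead
    "search_time_hours": 2,
    "population_size": 50,
    "problem": problem_config,
}
print(experiment_config["name"], experiment_config["problem"]["simulation_variables"])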
Search Execution. For S1, we start the search by executing the corresponding run command in the command line,
where a dedicated flag holds the experiment name. Further, experiments can be modified via additional flags; for example, the search time can be restricted to two hours and the population size set to 50. A description of further flags can be found in OpenSBT's documentation.
When the search has terminated, the results are stored in an output directory. The generated visualizations can be used to determine the domain of the search space in which the SUT exhibits faulty behaviour. For instance, Fig. <ref> depicts the design space plot from S1 for two of the search variables, which contains one derived condition under which the SUT behaves critically.
§ EVALUATION
In the previous section we have presented two usage examples which serve as a preliminary evaluation of OpenSBT.
In addition, we have provided OpenSBT to our industrial partner DENSO.
DENSO engineers have applied OpenSBT to benchmark different evolutionary algorithms, such as (Multi-Objective) Particle Swarm Optimization, on an AEB system. The integration of these algorithms has been done similarly to that of NSGAII-DT. According to the feedback from DENSO engineers, testing the AEB with different search techniques and comparing the test outcomes have been facilitated by OpenSBT. Given the positive feedback, we are planning a more comprehensive user study by means of a replication experiment <cit.> with computer science master students in the scope of a practical course. The idea is to have two groups of 4-5 participants. One group will perform SBT of an AEB by engineering a testing pipeline without the support of OpenSBT, while the other group will use OpenSBT for testing. By interviewing study candidates, we will try to make sure that the selected participants have similar software engineering skills.
To evaluate OpenSBT w.r.t. its usability and flexibility, we are planning to use the following metrics for each of the groups: the time required a) to test the AEB with a predefined test configuration, and b) to modify/extend the testing pipeline for a different test configuration, with and without OpenSBT. The times will be collected based on time-tracked version control issues. To mitigate incorrect time tracking, we will additionally perform a qualitative evaluation and query participants about the perceived complexity of implementing the testing pipeline.
§ RELATED WORK
A considerable amount of research has been conducted to support virtual testing of ADS.
S-Taliro <cit.>, a temporal logic falsification tool, is used to generate critical test cases for ADS by falsifying safety requirements given as temporal logic specifications. The SUT needs to be provided as a Simulink model, which limits the applicability of this framework.
As part of the SBST Tool Competitions 2021/2022 <cit.>, several tools have been developed for the generation of road topologies for testing lane keeping assist systems (LKAS). These frameworks are coupled to one simulator and one specific ADS type. In contrast, OpenSBT can accommodate different simulators and ADS, including LKAS or collision avoidance systems such as AEBs.
SafeBench <cit.> allows for testing an ADS on predefined driving scenarios and is coupled to CARLA. Moreover, it is neither possible to integrate a different critical test-case generation algorithm nor to use a SUT that is implemented in Simulink. Further, the framework can only evaluate ADS based on reinforcement learning. Wang
et al. <cit.> propose a tool to generate safety-critical test cases by perturbing the behaviors of other actors. The framework focuses on Lidar-based ADS and uses one specific simulator.
The tool ADEPT <cit.> simulates an ADS only in CARLA and generates adversarial attacks on deep neural networks to provoke critical scenarios.
The open-source testing platform OpenPASS <cit.> is similar to OpenSBT, but does not allow the integration of different simulators.
§ CONCLUSION AND FUTURE WORK
We have presented a modular framework which tackles the engineering challenge of setting up a testing pipeline that is compatible with different simulators, search algorithms and fitness functions.
We have conducted a preliminary evaluation of OpenSBT on two usage scenarios with different search approaches and simulators.
A comprehensive validation study by means of a replication project is part of our future work.
Also, we are working on easing the integration of existing search algorithms implemented in optimization frameworks other than pymoo.
We believe that OpenSBT will contribute to both the development and the application of effective virtual testing approaches to increase the safety of ADS.
§ ACKNOWLEDGMENTS
This paper has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 956123.
|